Philosophers sometimes criticize simplistic interpretations of science by pointing out that our conception of objective knowledge is flawed.

Put it this way: you're being most subjective when you're just venting your emotions, and you're at your most objective when you're working around your bias or ignoring your wishes and siding instead with logic and with the empirical evidence.

That's fair enough.

But the more objective you're being, or the more you're letting the facts in the world speak for themselves, the less you're understanding what you're talking about.

Thus, a paradigm of objectivity would be a machine learning program, such as the kind that's used in so-called artificial intelligence:

Machine learning starts with data — numbers, photos, or text, like bank transactions, pictures of people or even bakery items, repair records, time series data from sensors, or sales reports. The data is gathered and prepared to be used as training data, or [as] the information the machine learning model will be trained on…

From there, programmers choose a machine learning model to use, supply the data, and let the computer model train itself to find patterns or make predictions. Over time the human programmer can also tweak the model, including changing its parameters, to help push it toward more accurate results…

Some data is held out from the training data to be used as evaluation data, which tests how accurate the machine learning model is when it is shown new data. The result is a model that can be used in the future with different sets of data.
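The workflow in that quotation can be sketched in a few lines of code. This is a minimal, illustrative toy, not a real machine learning system: the "model" is just a learned decision threshold on one-dimensional data, and the names `train`, `predict`, and `evaluate` are my own. But it shows the three stages the quotation describes: supplying training data, letting the program find a pattern, and testing the result on held-out evaluation data it has never seen.

```python
# Toy illustration of the train/evaluate workflow (names and data are
# invented for this sketch). The "pattern" the program learns is just a
# decision threshold: the midpoint between the two classes' means.

def train(examples):
    """Learn a threshold from labeled (value, label) pairs."""
    lows = [x for x, label in examples if label == 0]
    highs = [x for x, label in examples if label == 1]
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(lows) + mean(highs)) / 2  # the learned "pattern"

def predict(threshold, x):
    """Apply the learned pattern to a new data point."""
    return 1 if x >= threshold else 0

def evaluate(threshold, held_out):
    """Test the model on data it was never trained on."""
    correct = sum(predict(threshold, x) == label for x, label in held_out)
    return correct / len(held_out)

# Labeled data: low readings are class 0, high readings are class 1.
data = [(2, 0), (3, 0), (5, 0), (6, 0), (14, 1), (15, 1), (17, 1), (18, 1)]
training_data, evaluation_data = data[:6], data[6:]  # hold some out

model = train(training_data)
accuracy = evaluate(model, evaluation_data)
print(model, accuracy)  # the learned threshold and its holdout accuracy
```

Note what the program does and doesn't do: it tracks a statistical regularity and applies it to new inputs, but at no point does it know what the numbers are measurements of, which is precisely the gap the rest of this essay explores.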

Do you see, then, how machine learning works? You have an algorithm that learns to identify patterns in large data sets.

But does the algorithm understand any of the patterns? Is this kind of computer program intelligent? No, because there's a big difference between tracking patterns and understanding them.

A thermometer, too, tracks patterns since as the temperature rises, so does the thermometer's indicator. But a thermometer doesn't understand temperature or the weather. Rather, the thermometer is the result of engineers' coupling of one system with another. Similarly, computer programmers write code that learns to couple itself with results which the programmers want, such as the sorting of certain kinds of information. But understanding is more than just this coupling of systems or categorizing of data.

What's the difference between tracking patterns and understanding them?

For one thing, tracking patterns is passive. You can track information with a map since the map displays signs of places, using an analogy between the map and the territory. But the map by itself can do nothing about the territory. The map just displays the information that's coupled with the mapped area by way of that analogy, or by way of that compression of data in the map's representation of the terrain.

By contrast, understanding patterns is active; indeed, I'd go as far as to say that it's aggressive. The figure of speech at the root of the word "understand" is instructive here, since to stand under something is to gain an advantage: you're no longer in the thing's line of sight, as it were, no longer subject to being caught unaware with your back towards it. No, you've studied the subject matter and positioned yourself to exploit that knowledge, hiding underneath the thing, ready to pounce with your know-how.

True, a more felicitous image would be to stand over the thing, giving you the benefit of high ground, which would make for the word "overstand." But the point of the word "understand," I take it, is that at least you're not standing there like a deer in the headlights, about to be run over due to ignorance or hapless fear.

To understand something isn't just to have a mental map that tracks correlations or that measures quantities. No, to understand it, you must be driven to use that map to master the terrain. You must be an embodied lifeform, the mental map being implemented by a brain that's hardwired to steer the body with pleasures, pains, and goals. You must be motivated to use your tracking ability to get out of the environment's way or to terraform it, constructing an artificial domain that's less alien and threatening than the wilderness.

Thus, a thermometer may be perfectly objective in tracking the ambient temperature since this device is causally meshed with the environment so that an alteration of the latter triggers an indicator in the former. There's no subjective freedom in this device to decide how to respond to the stimuli. Consequently, the thermometer is passively related to its environment: the adjustment of its reading is solely an effect of changes in the ambient temperature.

But lodge a temperature gauge in an organism that wants the temperature to fall within a desired range and that's equipped to do something about it when it's too cold or hot outside, and you have the makings not just of measuring temperature but of understanding what temperature means, as in understanding what it means for organisms of this type. To understand temperature is to assimilate this physical pattern to a set of instincts, or to a worldview or a culture that guides the organism in its negotiation with its environment.

For instance, to understand temperature you must deem some temperatures to be good and others to be bad. And you've got to be free and driven enough to alter or to move away from bad temperatures.

What is it to deem certain temperatures to be bad? It's not just a matter of adjusting yet another indicator, or of rendering just another calculation or measurement. You can set a thermostat to keep a room's temperature within a desired range. But the thermostat doesn't deplore the temperatures outside that range, regardless of how the device may colour-code the various temperatures.
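To make the point vivid, here is everything a thermostat's "judgment" amounts to, reduced to code. The setpoint range and colour labels are illustrative, not taken from any real device. Notice that nothing in the rule evaluates a temperature as good or bad; the colour code is just another output, produced by the same mechanical comparison as the heating command.

```python
# A thermostat's entire "stance" on temperature: compare a reading to a
# fixed range and emit an action plus a colour code. The range and the
# labels are invented for this sketch. The colour code is for the owner's
# benefit; the device itself deplores nothing.

def thermostat(reading, low=18.0, high=24.0):
    """Map a temperature reading to an action and a display colour."""
    if reading < low:
        return "heat_on", "blue"
    if reading > high:
        return "cool_on", "red"
    return "idle", "green"

for temp in (12.0, 21.0, 30.0):
    print(temp, thermostat(temp))
```

The owner who chose the setpoints supplied the only value judgment in the system; the code merely executes a comparison, which is why, on this essay's account, it measures temperature without understanding it.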

The person who owns and sets the thermostat makes that value judgment, though, and the colour coding is for that person's benefit. What, then, is the difference between the person's judgment and the thermostat's measurement?

The crucial difference, I think, is that the person who understands temperature is forced to take up an existential stance towards a foreign environment, just in virtue of being alive, clever, and imaginative enough to deem life (and especially this person's life) to be precious, as in better in some respects than the rest of the wilderness. When we say that excessively hot or cold temperatures are bad, what we mean is that they threaten our way of life, so the value judgment expresses our self-awareness and the primal evaluation that we matter as conscious beings in a physical, amoral, impersonal, mostly lifeless cosmos.

Assuming, then, that Big Tech companies won't end up just defrauding the public, algorithms will eventually be deemed intelligent and will receive the relevant legal rights and responsibilities when they can do more than track quantities like a thermometer. The algorithms will have to be implemented in bodies that care about things, and that are poised to use their pattern detectors to reshape the world in their image.

To the extent that algorithms aren't yet in that position because they don't understand the information they're organizing, stealing, and repackaging, we can refer to their present condition to illustrate how the objective gathering, measuring, and packaging of data differ from the subjective process of understanding.

Moreover, we can point out that scientific models might similarly map and measure systems without necessarily including the most subjective, often implicitly philosophical and intuitive conceptions that are needed for understanding and for motivating behavioural responses to what's mapped.

Still, to the extent that scientific objectivity is pragmatic in breaking down the total causes of events to manageable systems, levels, and cycles, scientists are less objective than thermometers and machine learning programs since scientific models are implicitly aggressive in reducing and enslaving nature for our species' benefit.