This blog post isn’t about ChatGPT. It isn’t about machine learning, neural nets, or any mysterious or borderline spiritual form of computing. That’s a whole ’nother set of philosophical and metaphysical conundrums (conundra?).

This is about a way people sometimes speak, informally, about bog-standard boring non-AI computers and computer programs. You’ve probably heard people speak this way. You’ve probably spoken this way sometimes yourself:

  • “The server thinks your password is wrong.”
  • “The computer thinks you’ve lost the connection.”
  • “The phone thinks you want to use your headphones. It’s wrong though.”

We normally interpret this as a metaphor, but I’m not sure it is. Is the phone “thinking” you want to use your headphones rather than your car speaker substantially different from us “thinking” our friend would rather get a phone call than a text message?

Part of the problem here is that the word “think” in English can mean different things.

It can mean to cognate, to go through a rational series of propositions in our brains, expressed as internalized speech in our mind’s ear or diagrams in our mind’s eye or pure abstractions. “I am thinking about how to approach this physics problem.” Computers probably cannot do this, and are certainly nowhere near as good at it as humans are, not even with this fancy new AI software everyone’s playing with.

But it can also mean to have a belief, a mental model about reality. “I think Joe doesn’t like me very much.” Or, “I think the reason the car won’t start is because the battery is dead.” Computers, I will argue, can do something remarkably similar to humans in this category.

Some languages distinguish these two meanings of “think.” English learners of German often say denken (to cognate), when they mean glauben (to believe), in contexts where both would translate as “to think.” And then, in case that was too simple, there’s also meinen, which means “to suppose” or “to opine,” also used when English speakers might say “to think.”

So here’s my thought on this, or rather, my opinion (meine Meinung):

Computers cannot yet denken, or cognate, like humans. But computers can definitely glauben, or internally believe, specific facts, and they’ve been able to do that since the day they were invented.

In order to figure out whether this is true, we first need to establish what it means to believe something, and then see if computers can do it. What does it mean for humans to think something, to believe something about the world? Can we extract a definition that can then be applied to computers, to see whether computers are capable of the same thing?

So, what does it mean for us to think something is true? Well, it means that we have some internal state, some internal information stored in the physical arrangement of our brains, that corresponds to that thought or belief. We then use that internal state to inform our behavior. If we think our friend would rather get a phone call than a text message, then we might choose to accommodate that and call them instead of texting them.

This internal state, when all is going well, corresponds to a specific external reality. The goal is for the internal state to match the external reality. Sometimes this goal is not met – sometimes we misapprehend the situation, our belief is wrong, or what we think is true is not true. But even when we are wrong, we hold the same kind of internal state we would hold if we were right and everything were working.

We can therefore define believing or thinking that a proposition X is true thus:

A being believes X is true if they have an internal state that, when the being is functioning correctly, corresponds to X being true, that then informs their behavior such that it is the behavior that makes sense if X is true, rather than the behavior that makes sense if X is not true.

Applied to the phone example, we have some internal state in our brain that indicates that “Jill would rather get a call than a text.” How do we know that the state indicates that proposition? Well, we know that when our brains are functioning correctly (a hard thing to define, but also a concept everyone uses all the time), we only have that internal state when the proposition is true. And, we also know that this internal state drives behavior consistent with that proposition being true. Assuming we want to accommodate Jill’s preference, we will call her instead of texting her, an adaptive decision if the belief is true, and a non-adaptive one if the belief is false.
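This belief-informs-behavior definition can be sketched in code. This is a hypothetical toy example, not anything from a real system; the struct and function names are mine:

```rust
// Internal state: when the believer is "functioning correctly",
// this boolean is true exactly when Jill actually prefers a call.
struct Beliefs {
    jill_prefers_call: bool,
}

// Behavior informed by the internal state: the choice made is the one
// that makes sense if the belief is true, whether or not it really is.
fn contact_jill(beliefs: &Beliefs) -> &'static str {
    if beliefs.jill_prefers_call {
        "call" // adaptive if the belief matches reality
    } else {
        "text"
    }
}

fn main() {
    let beliefs = Beliefs { jill_prefers_call: true };
    println!("We decide to {} Jill.", contact_jill(&beliefs));
}
```

Note that nothing in the code itself can tell you whether the belief is correct; the correspondence to Jill’s actual preference lives entirely in how the state gets set and what “functioning correctly” means for the setter.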

With this framework, it seems almost easier to establish that computers can think something is true than that humans can do this. Humans often have complicated, ambivalent beliefs and thoughts. Humans will often believe something for reasons other than an efficient assessment of its truth value, and act contrary to their own earnestly held beliefs. I think this definition still works for humans, if you take all the confounding factors into consideration, but it’s hard: We get into things like “conscious” or “subconscious” beliefs, or “he says he thinks X, but his actions show he really thinks Y.” And, of course, it’s extremely difficult to define whether a human is “functioning correctly.”

Computers, however, think all sorts of things. For example, let’s talk about whether a computer thinks a user has administrator privileges. You might see code like this:

let has_admin_privileges: bool = is_admin(conn.get_current_user());

Now, we have an internal state in the computer, a boolean (i.e. true or false) variable that is intended to correspond to whether the user has administrator privileges. If the code is functioning correctly, this variable will take on the value true exactly when the user actually has administrator privileges. We know this, because the definition of “functioning correctly” is implicit in the way the programmer wrote the code, and in how they named the variable.

Furthermore, the following lines of code are almost certainly behaviors in line with that interpretation of the internal state.

if has_admin_privileges {
    // Do the thing

    // Signal success
} else {
    // Signal an error
}
So, when people say things like “my phone thinks I want to use my Bluetooth headphones,” it means that there is information encoded in the silicon of the phone, possibly in an explicitly-named variable, that corresponds to that belief.
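To make the wrong-belief case concrete, here is another hypothetical sketch (again, the names are mine, not from any real phone OS). The phone’s internal state says headphones are connected, its routing behavior follows that state, and the state can diverge from reality:

```rust
// Internal state: when the phone is functioning correctly, this is
// true exactly when headphones are physically connected.
struct AudioState {
    headphones_connected: bool,
}

// Behavior informed by the internal state.
fn route_audio(state: &AudioState) -> &'static str {
    if state.headphones_connected {
        "headphones" // the behavior that makes sense if the belief is true
    } else {
        "speaker"
    }
}

fn main() {
    // Reality: the headphones were just unplugged...
    let actually_connected = false;
    // ...but the internal state was never updated. The phone now
    // "thinks" you want your headphones, and it's wrong.
    let state = AudioState { headphones_connected: true };
    assert_ne!(state.headphones_connected, actually_connected);
    println!("Routing audio to {}.", route_audio(&state));
}
```

The belief is wrong, yet it is the same kind of internal state the phone would have if it were right, and it drives exactly the behavior that would make sense if it were right. That is the human pattern, too.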

So now that I’ve thought this through properly, I don’t even think statements like this are metaphorical. I think they are literally true, and completely appropriate.