Last February, the IBM supercomputer Watson won an exhibition game of the TV show “Jeopardy!” against two of its best contestants.
This was a significant advance on Deep Blue, another IBM supercomputer, which had defeated the six-time world chess champion Garry Kasparov in 1997.

Watson’s performance was hailed in the popular media as heralding the “triumph of machine intelligence over the human.”

Of course, it was nothing of the sort.

It was the triumph of a top team of human researchers at IBM, aided by hundreds of others from many of the leading technological universities in the United States, who had programmed Watson over five years and at a cost of $3 million.

In my last column, I referred to evolutionary psychology guru Steven Pinker’s claim that the human mind is a “system of organs of computation designed by natural selection to solve the problems faced by our natural ancestors.”

In this “computational theory of the mind,” the mind is treated as a set of computer programs or “modules” being executed in the electrical wiring (the “hardware”) of your brain even as you read this page.

Linked to this is the key assumption that what the mind-brain essentially does is “process information,” and this is usually understood as the manipulation of symbols by rules or algorithms.

By using a common terminology (e.g. “information,” “intelligence,” “neural networks”) when discussing minds, brains and computers, the human-machine barrier is easily straddled.

The mind is both naturalized and computerized. And the brain can now be described as an incredibly powerful microprocessor, the mother of all motherboards.

It requires a certain philosophical sophistication to see through the sleights of hand that end up reducing human minds and persons to bundles of neural activity in the brain.

As Ludwig Wittgenstein famously put it, when looking back on the naive philosophy of science (“logical positivism”) that had once seduced him in the 1920s: “A picture held us captive. And we could not get outside it, for it lay in our language and language seemed to repeat it to us inexorably” (Philosophical Investigations, 1953).

There is nothing new in the way scientists take the most advanced machines of their day as models or analogies for human functioning. Steam engines and telegraph systems have served this purpose before.

But there is a short (though calamitous) step from modeling to identification. We then imagine that the machines that help us perform certain functions perform those functions themselves.

We humanize the machines even as we mechanize humans.

When we speak of “clocks telling the time,” what we mean is just that they enable us (conscious human persons) to tell the time.

Walking sticks don’t actually walk, and running shoes don’t run. The same applies to “radar searching for aircraft,” “telescopes discovering black holes” or “smart phones remembering our appointments”: they do not literally search, discover or remember.

If there were no conscious human persons using these prosthetic tools, these activities would not happen.

In one of the most cited philosophical papers of recent decades (“Minds, Brains, and Programs,” 1980), John Searle invited us to imagine somebody totally ignorant of Chinese seated in a closed room and receiving inputs of Chinese symbols.

He is also given a rule-book for processing these symbols, so he can manipulate them and produce an output.

Suppose that the input of Chinese is in the form of questions. It would appear, then, from the output symbols that the person in the room was answering the questions. However, he has not understood anything that was passing through his hands.

Searle used this analogy to argue that electrical flows in computers do not count as the processing of symbols because symbols are symbols only to those who understand them as symbols.
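
To see just how little the rule-following involves, here is a minimal sketch in Python. The two-entry “rule book” and its question-and-answer pairs are invented placeholders of my own, not anything from Searle’s paper; the point is only that the program returns apt replies by matching uninterpreted shapes.

```python
# A toy stand-in for Searle's rule book: hypothetical question-answer
# pairs, matched as uninterpreted strings of symbols.
RULE_BOOK = {
    "你好吗？": "我很好。",            # "How are you?" -> "I am fine."
    "今天星期几？": "今天星期五。",    # "What day is it?" -> "It is Friday."
}

def chinese_room(symbols: str) -> str:
    """Look the input symbols up and return the prescribed output.
    Nothing here understands Chinese; it is pure pattern-matching."""
    return RULE_BOOK.get(symbols, "？")  # a shrug for unmatched input

print(chinese_room("你好吗？"))  # prints 我很好。 -- an apt reply, zero understanding
```

From the outside, the room appears to answer questions; on the inside, there is only lookup and copying.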

It is wrong to imagine the mind as analogous to a supercomputer because, in the absence of minds, computers do not do what minds do.

In our IT-obsessed age, not only is information confused with knowledge, but the special engineering use of the term is confused with meaning.

A meaningful message may actually have less information (from a technical point of view) than a sentence made up of pure gibberish.

It is all a matter of the range of alternatives from which the message is selected and their prior probabilities.

As Claude Shannon, a pioneer of the mathematical theory of communication, reminds us, the “semantic aspects of communication are irrelevant to the engineering problem.”
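
In Shannon’s technical sense, the information carried by a message is its surprisal: −log₂ of its probability, so the less expected the message, the more bits it carries. The sketch below, with probabilities made up purely for illustration, shows a stock phrase the receiver half expects carrying one bit, while one specific ten-letter gibberish string, drawn uniformly from a 26-letter alphabet, carries about 47.

```python
import math

def self_information_bits(p: float) -> float:
    """Shannon self-information, in bits, of an outcome with probability p."""
    return -math.log2(p)

# Illustrative, made-up probabilities:
expected_phrase = 0.5         # a greeting the receiver half expects
gibberish_string = 26 ** -10  # one specific random 10-letter string

print(self_information_bits(expected_phrase))   # 1.0 bit
print(self_information_bits(gibberish_string))  # ~47.0 bits
```

The gibberish wins on the engineering measure precisely because it was selected from a vastly larger range of equally likely alternatives, which is how information and meaning come apart.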

In a loose sense of “inform,” the books (and the blog) I write may be said to be “filled with information” and stored in print (or on the internet) indefinitely.

However, it is strictly only potential information that can be inscribed and stored outside a conscious mind.

Once the concepts of information, informing and being informed start to be liberated from a conscious someone being informed or intending to inform, language goes on holiday (another Wittgensteinian expression) and reason disappears.

Distinguishing person-talk from neuro-talk, and neuro-talk from computer-talk, is indispensable if we are to explore the distinctively human and rescue the humanities and human sciences from withering away.

Vinoth Ramachandra is secretary for dialogue and social engagement for the International Fellowship of Evangelical Students. He lives in Sri Lanka. A version of this column first appeared on his blog.
