Friday, March 17, 2006

math, the Chinese room, and neural imaging

Searle's Chinese Room thought experiment (1980) has long been one of the key responses to the Turing Test's behaviorist approach to determining machine intelligence. Turing's claim (rather simplified) is that computers will one day be built which, in the context of a written exchange, will be able to hold up their end of a general conversation so well that an observer will not be able to distinguish the computer's responses from a human's. Turing claimed that this human-like behavior was sufficient for us to call the machine intelligent. The question of whether the machine was actually thinking, he said, was "too meaningless to deserve discussion". Since all we can see of others' minds is the result of their thinking, not the thinking itself, we must content ourselves with defining the behavior we will count as evidence of thought (or intelligence) and then ascribe that state to anything that exhibits it.

The Chinese Room is a thought experiment meant to clarify the difference between acting as if one is thinking and actually thinking. Imagine, Searle said, that you are sealed in a room and are corresponding with those outside via pieces of paper on which Chinese characters are printed. You do not speak, read, or in any way understand Chinese. However, you have a rulebook that tells you, given a particular input (a set of characters sent to you from outside, or the whole series of exchanges), how to compose a correct responding note, again using Chinese characters. To an outside observer, the exchange would be indistinguishable from a dialog between native Chinese speakers. Yet for you, the experience would be very different from what it would be if the correspondence were in a language you understood; in this case you would be acting as a computer, with the rulebook as your program. The process and experience of rote rule-following are very different from those of conscious thinking.
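To make the rulebook picture concrete, here is a minimal sketch in Python of the room as a program. The two-entry rulebook is entirely made up for illustration; the point is that every step of the room's operation is matching and copying, and none of it requires understanding the characters.

    # A minimal sketch of the Chinese Room as a program. The "rulebook"
    # here is a hypothetical two-entry lookup table; the entries are
    # illustrative placeholders, not a real dialog. No step below
    # requires understanding the characters: it is all matching and
    # copying.

    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",            # "How are you?" -> "Fine, thanks."
        "今天天气怎么样？": "今天天气很好。",    # "How is the weather?" -> "It is nice."
    }

    def chinese_room(note):
        # Find the rule that matches the incoming note and copy out
        # the prescribed response; fall back to a stock reply otherwise.
        return RULEBOOK.get(note, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))   # prints 我很好，谢谢。

A real rulebook would have to condition on the whole history of the exchange, as Searle notes, but the character of the work, matching and copying, stays the same.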

That difference shows up frequently, if in a less defined way, in learning. Watch a child learn to do division: there is a stage where they have a set of rules they follow about how to manipulate the digits, and then a breakthrough where they understand what it means to divide and why the digits go where they do (a rote version of the procedure is sketched below). Perhaps there is a math-learning analogue of the Peter Principle, one that says people advance in math until they reach the topic they can still learn by rote, but where the conversion to understanding never happens. For some it is division, for others calculus, or set theory... Another parallel might be the social learning of people with autism... Or, for those for whom social situations are intuitive, the difference between solving logic problems and social contract problems: Cosmides and Tooby did an interesting experiment showing that people had a hard time solving problems presented as abstract logic puzzles, but were quite successful when the same problems were reframed as social contracts (the classic contrast is between checking "if a card has a vowel on one side, it has an even number on the other" and the logically identical "if someone is drinking beer, they must be of legal age"). There is growing evidence that specialized parts of the brain handle tasks such as social contracts, and that the different sensation (and result) of doing the structurally identical logic and social problems comes from their being processed by different parts of the brain.
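As an illustration of the rote stage, here is schoolbook long division written out as pure digit manipulation in Python. The procedure produces correct answers by rule-following alone; nothing in it represents why the digits land where they do.

    # Schoolbook long division as pure digit manipulation: bring down
    # the next digit, ask how many times the divisor fits, write the
    # quotient digit, carry the remainder. Correct answers fall out of
    # rule-following alone.

    def long_division(dividend, divisor):
        quotient = 0
        remainder = 0
        for digit in str(dividend):                  # left to right, one digit at a time
            remainder = remainder * 10 + int(digit)  # "bring down" the digit
            q_digit = remainder // divisor           # how many times does the divisor fit?
            remainder -= q_digit * divisor           # what is left over
            quotient = quotient * 10 + q_digit       # append the quotient digit
        return quotient, remainder

    print(long_division(7438, 6))   # (1239, 4)

Like the child at the rote stage (or the occupant of the Chinese Room), the procedure gets the right answer without any step that amounts to understanding what division is.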

Philosophically, this would be evidence that the structure of the brain, the underlying "machine" that does the computation, is essential to the result. Turing also gave us the mathematical foundation of the universal computing machine, one whose behavior is independent of the particular hardware that implements it; would this be evidence that the brain is not, in that sense, equivalent to such a machine?


