Symbol Manipulation: Scope and Limits

Stevan Harnad harnad at clarity.Princeton.EDU
Sat Oct 7 13:07:51 EDT 1989


Richard Yee <YEE at cs.umass.EDU> has let the cat out of the bag with
his posting (I had been hoping for more replies about whether the
community considers there to be any essential difference between
parallel implementations and serial simulations of neural nets before
I revealed why I had posted my query). I've proposed a variant of
Searle's Chinese Room Argument (which in its original form I take to
be decisive in showing that you can't implement a mind with just a
pure symbol-manipulating system), a variant to which nets are
vulnerable only if there is no essential difference between a serial
simulation and a parallel implementation. That having been said, the
variant is obvious, and I leave it to you as an exercise. Here's my
reply to Yee, who wrote:

> The real question that should be asked is NOT whether [Searle], in
> following the rules, understands the Chinese characters, (clearly
> he does not), but whether [Searle] would understand the Chinese
> characters if HIS NEURONS were the ones implementing the rules and he
> were experiencing the results. In other words, the rules may or may not
> DESCRIBE A PROCESS sufficient for figuring out what the Chinese
> characters mean.

This may be the real question, but it's not the one Searle's answering
in the negative. In the Chinese room there's only symbol manipulation
going on. No person or "system" -- no "subject" -- is understanding.
This means symbol manipulation alone is not sufficient to IMPLEMENT
the process of understanding, any more than it can implement the
process of flying. Now whether it can DESCRIBE rather than implement it
is an entirely different question. I happen to see no reason why all
features of a process that was sufficient to implement understanding
(in neurons, say) or flying (in real airplane parts) couldn't be
successfully described and pretested through symbolic simulation. But
Searle has simply shown that pure symbol manipulation ITSELF cannot be
the process that will successfully implement understanding (or
flying). (Ditto now for PDP systems, if parallel implementations and
serial simulations are equivalent or equipotent.)
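
To make the serial/parallel point concrete, here is a toy sketch (in
Python; the two-unit net, its weights, and its inputs are invented
purely for illustration): a synchronous "parallel" update of a small
net and a unit-by-unit serial simulation of the very same update
compute exactly the same activations; only the order in which the
arithmetic is carried out differs.

    def parallel_update(weights, activations):
        # Every unit's new activation is computed from the same old
        # activation vector, as if all units fired "at once".
        return [sum(w * a for w, a in zip(row, activations))
                for row in weights]

    def serial_simulation(weights, activations):
        # One unit at a time, in a loop, but always reading the OLD
        # activations; the arithmetic, and hence the result, is the same.
        old = list(activations)
        new = []
        for row in weights:
            total = 0.0
            for w, a in zip(row, old):
                total += w * a
            new.append(total)
        return new

    # Invented weights and inputs, for illustration only.
    weights = [[0.5, -1.0],
               [2.0,  0.25]]
    activations = [1.0, 3.0]

    assert parallel_update(weights, activations) == \
           serial_simulation(weights, activations)

If that numerical equivalence is all the parallel/serial distinction
amounts to, the "ditto" above applies.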

> I agree that looking at the I/O behavior outside of the room is not
> sufficient...

This seems to give up the Turing Test (Searle would shout "Bravo!").
But now Yee seems to do an about-face, resurrecting the strained
efforts of the AI community to show that formal symbol-manipulating
rule systems have not just form but content after all, and CAN
understand:

> The determination of outputs is under the complete control of the
> rules, not [Searle]. [Searle] has no free will on this point (as he
> does in answering English inputs)... although it is clearly true that
> (Chinese characters) have form but absolutely no content for the
> person... [w]hether or not the *content* of this symbol is recognized,
> is determined by the rules... the Chinese symbols were indeed correctly
> recognized for their CONTENT, and this happened WITHIN the room...
> the process of understanding Chinese is [indeed] occurring.

NB: No longer described or simulated, as above, but actually OCCURRING.
I ask only: where/what are these putative contents (I see only formal
symbols)? Who/what is the subject of this putative understanding (I see
only Searle)? And would he/she/it care to join in this discussion?

Now in my case this glibness is really a reflection of my belief that
the Turing Test couldn't be successfully passed by a pure symbol
manipulator in the first place (and hence that this whole sci-fi
scenario is just a counterfactual fantasy) because of the symbol
grounding problem. But Yee -- though skeptical about the Turing Test
and seemingly acknowledging the simulation/implementation distinction --
does not seem to be entirely of one mind on this matter...

> [The problem is] a failure to distinguish between a generic Turing
> Machine (TM) and one that is programmable, a Universal Turing Machine
> (UTM)... If T, as a parameter of U, is held constant, then y = T(x) =
> U(x), but this still doesn't mean that U "experiences x" the same way T
> does. The rules that the person is following are, in fact, a program
> for Chinese I/O... I take my own understanding of English (and a little
> Chinese) as an existence proof that [Understanding is Computable]

"Cogito Ergo Sum T"? -- Descartes would doubt it... I don't know what
Yee means by a "T," but if it's just a pure symbol-cruncher, Searle has
shown that it does not cogitate (or "experience"). If T's something more
than a pure symbol-cruncher, all bets are off, and you've changed the
subject.
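
The formal point Yee invokes is not in dispute: a universal machine U,
with T's rules held constant as a parameter, reproduces T's
input/output function, y = T(x) = U(x), by nothing but formal lookup
on the shapes of the symbols. A toy sketch (the rule table and names
below are invented for illustration, not drawn from Yee's posting):

    # T: a fixed lookup table -- a "program" for symbol I/O. The
    # symbols are meaningless placeholders.
    T = {
        "squiggle": "squoggle",
        "squoggle": "squiggle squiggle",
    }

    def run_T(x):
        # The function T itself: y = T(x).
        return T[x]

    def run_U(program, x):
        # An interpreter with T's rules as a fixed parameter: it
        # reproduces y = T(x) = U(x) by matching symbol shapes alone,
        # as the person in the room does.
        for lhs, rhs in program.items():
            if x == lhs:
                return rhs
        raise KeyError(x)

    for x in T:
        assert run_T(x) == run_U(T, x)

What Searle denies is that this sort of lookup -- which is all that
goes on inside the room -- amounts to experiencing or understanding
anything.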

Stevan Harnad

References:

Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain
           Sciences 3: 417-457.

Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental
           and Theoretical Artificial Intelligence 1: 5-25.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D (in press).

