robots and simulation

Stevan Harnad harnad at Princeton.EDU
Sat Jan 27 16:54:30 EST 1990


Alexis Manaster-Ramer AMR at ibm.com wrote:

> I know of no... attempt [to make sense of "content" or
> "intentionality"] that does not either (a) work equally well for robots
> as it does for human beings (e.g., Harnad's proposals about
> "grounding")... [G]rounding as I understand Harnad's proposal is
> not... intended [to distinguish biological systems
> from robots]. I think he contends that robots are just as grounded as
> people are, but that disembodied (non-robotic) programs are not.

You're quite right that the grounding proposal (in "The Symbol
Grounding Problem," Physica D 1990, in press) does not distinguish
robots from biological systems -- because biological systems ARE robots
of a special kind. That's why I've called this position "robotic
functionalism" (in opposition to "symbolic functionalism").

But you leave out a crucial distinction that I DO make, over and over:
that between ordinary, dumb robots, and those that have the capacity to
pass the Total Turing Test [TTT] (i.e., perform and behave in the world
for a lifetime indistinguishably from the way we do). Grounding is
trivial without TTT-power. And the difference is like night and day.
(And being a methodological epiphenomenalist, I think that's about as
much as you can say about "content" or "intentionality.")

> Given what we know from the theory of computation, even though
> grounding is necessary, it does not follow that there is any useful
> theoretical difference between a program simulating the interaction
> of a being with an environment and a robot interacting with a real
> environment.

As explained quite explicitly in "Minds, Machines and Searle" (J. Exp.
Theor. A.I. 1(1), 1989), there is indeed no "theoretical" difference, in
that all the INFORMATION is there in a simulation. But there is another
difference (and it applies only to TTT-scale robots), one that is not
done full justice by calling it merely "practical": simulated minds can
no more think than simulated planes can fly or simulated fires can
burn.

And don't forget that a TTT-scale simulation would have to encode in
advance all the possible real-world contingencies a TTT-scale robot
could handle, and how it would handle them -- a lot to pack into a pure
symbol cruncher...

Stevan Harnad


