robots and simulation

AMR@IBM.COM
Sun Jan 28 23:25:02 EST 1990


(1) I am glad that SOME issues are getting clarified; to wit, I hope
that everybody who has been confusing Harnad's arguments about
grounding with other people's (perhaps Searle's or Lakoff's) arguments
about content, intentionality, and/or semantics will finally accept
that there is a difference.

(2) I omitted to refer to Harnad's distinction between ordinary robots,
ones that fail the Total Turing Test, and the theoretical ones that
do pass the TTT, for two unrelated reasons.  One was that I was not trying
to present a complete account, merely to raise certain issues, clarify
certain points, and answer certain objections that had arisen.  The other
was that I do not agree with Harnad on this issue, and that for a number
of reasons.  First, I believe that a Searlean argument is still possible
even for a robot that passes the TTT.  Second, the TTT is much too strong,
since no one human being can pass it for another, and we would not be
surprised, I think, to find an intelligent species of Martians or what
have you that would, obviously, fail abysmally on the TTT but might pass
a suitable version of the ordinary Turing Test.  Third, the TTT is still
a criterion of equivalence that is based exclusively on I/O, and I keep
pointing out that that is not the right basis for judging whether two
systems are equivalent.  (I won't belabor this last point, because that is
the main thing that I have to say that is new, and I would hope to address
it in detail in the near future, assuming there is interest in it out
there.)

(3) Likewise, I omitted to refer to Harnad's position on simulation because
(a) I thought I could get away with it and (b) I do not agree with
that one either.  The reason I disagree is that I regard simulation of a
system X by a system Y as a situation in which system Y is VIEWED by an
investigator as sufficiently like X with respect to a certain (usually
very specific and limited) characteristic to be a useful model of X.  In
other words, the simulation is something which in no sense does what
the original thing does.  However, a hypothetical program (like the one
presupposed by Searle in his Chinese room argument) that uses Chinese
like a native speaker to engage in a conversation that its interlocutor
finds meaningful and satisfying would be doing more than simulating the
linguistic and conversational abilities of a human Chinese speaker; it
would actually be duplicating these.  In addition--and perhaps this
is even more important--the use of the term simulation with respect to
an observable, external behavior (I/O behavior again) is one thing,
its use with reference to nonobservable stuff like thought, feeling,
or intelligence is quite another.  Thus, we know what it would mean
to duplicate (i.e., simulate to perfection) the use of a human language;
we do not know what it would mean to duplicate (or even simulate partially)
something that is not observable like thought or intelligence or feeling.
That in fact is precisely the open question.  And, again, it seems to
me that the relevant issue here is what notion of equivalence we employ.
In a nutshell, the point is that everybody (including Harnad) seems to be
operating with notions of equivalence that are based on I/O behavior,
even though everybody would, I hope, agree that the phenomena we
call intelligence (likewise thought, feeling, and consciousness) are NOT
definable in I/O terms.  That is, I am assuming here that "everybody"
has accepted the implications of Searle's argument at least to the
extent that IF A PROGRAM BEHAVES LIKE A HUMAN BEING, IT NEED NOT
FOLLOW THAT IT THINKS, FEELS, ETC., LIKE ONE.  Searle, of course,
goes further (without, I think, any justification) to contend that IF A
PROGRAM BEHAVES LIKE A HUMAN BEING, IT IS NOT POSSIBLE THAT IT
THINKS, FEELS, ETC., LIKE ONE.  The question that no one has been
able to answer, though, is: if the two behave the same, in what sense
are they not equivalent, and that, of course, is where we need to
insist that we are no longer talking about I/O equivalence.  This
is, of course, where Turing (working in the heyday of behaviorism)
made his mistake in proposing the Turing Test.
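
To make the point about equivalence concrete, here is a minimal sketch
in Python (the programs and their names are my own illustration, not
anything proposed by Harnad, Searle, or Turing): two systems whose I/O
behavior is identical over the whole tested domain, even though one
computes its answers and the other merely looks them up.  A behavioral
test can certify only the first, I/O, sense of equivalence; it is
silent about the second, internal, one.

    # Illustrative sketch: two I/O-equivalent but internally
    # different systems.

    def add_by_computation(a: int, b: int) -> int:
        # Produces the sum by an arithmetic process at run time.
        return a + b

    # A lookup table covering the whole test domain, built in advance.
    TABLE = {(a, b): a + b for a in range(100) for b in range(100)}

    def add_by_rote(a: int, b: int) -> int:
        # Produces the sum by pure table lookup; no arithmetic occurs
        # when the question is asked.
        return TABLE[(a, b)]

    # Over the tested domain the two are indistinguishable by I/O...
    assert all(add_by_computation(a, b) == add_by_rote(a, b)
               for a in range(100) for b in range(100))
    # ...but nothing about that fact tells us the two systems are
    # organized the same way internally, which is the point at issue.

The example is trivial on purpose: if even addition admits I/O-identical
realizations with radically different innards, then a fortiori a purely
behavioral test cannot settle questions about the internal organization
underlying intelligent behavior.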

