Turing machines = connectionist models (?)

AMR@IBM.COM
Thu Jan 25 20:36:20 EST 1990


Having started the whole debate about the Turing equivalence of
connectionist models, I feel grateful to the many who have contributed
to the ensuing discussion.  I also feel compelled to point out that,
in view of the obvious confusion and disunity concerning what ought
to be a simple mathematical question, somebody needs to try to set
the record straight.  I am sure this will take some time and the
efforts of many, but let me try to start the ball rolling.

(1) George Lakoff's comment about the irrelevance of this issue in
light of the fact that it does not address the question of "content"
esp. of natural language concepts bothers me because all the talk
about "content" (alias "intentionality", I guess) is so much hand-
waving in the absence of any hint (not to mention a full-blown account)
of what this is supposed to be.  If we grant that there is no more to
human beings than mortal flesh, then there is no currently available
basis in any empirical science, any branch of mathematics, or I suspect
(but am not sure) any branch of philosophy for such a concept.  All
we can say is that, in virtue of the architecture of human beings,
certain causal connections exist (or tend to exist, to be precise, for
there are always abnormal cases such as perhaps autism or worse) between
certain states of an environment and certain states (as well as certain
external actions) of human beings in that environment.  There is no
magic in this, no soul, and nothing that distinguishes human beings
crucially from other living beings or from machines.  Perhaps that is
wrong, but it is not enough merely to speculate about something like
"content" or "intentionality".  One has to try to make sense of it, and I know of
no such attempt that does not either (a) work equally well for robots
as it does for human beings (e.g., Harnad's proposals about "grounding")
or (b) fail to show how the factors they are talking about might be
relevant (e.g., Searle's suggestion that it might be crucial that
human beings are biological in nature.  The answer surely is that
this might be crucial, but that Searle has failed to show not only
that it is but even how it might be).  I would argue that in order to understand
how human beings function, we need to take the environment into account
but the same applies to all animals and to many non-biological systems,
including robots.  So, while grounding in some sense may be necessary,
it is not sufficient to explain anything about the uniqueness of human
mentality and behavior (e.g., natural language).

(2) Given what we know from the theory of computation, even though
grounding is necessary, it does not follow that there is any useful
theoretical difference between a program simulating the interaction
of a being with an environment and a robot interacting with a real
environment.  In practice, of course, the task of simulating a realistic
environment may be so complex that it would make more sense in certain
cases to build the robot and let it run wild than to attempt such
a simulation, but in other cases the cost and complexity of building
the robot as opposed to writing the program are such that it is more
reasonable to do it the other way around.  In real research, both
strategies must be used, and it should be obvious that the same
reasoning shows that, as far as we know, there is no theoretical
difference between a human being in a real environment and a human
being (or a piece of one, such as the proverbial brain in the vat)
in a suitably simulated environment, but that there may be tremendous
practical advantages to one or the other approach depending on the
particular problem we are studying.  But, again, "grounding" does
not allow us to differentiate biological from nonbiological or human
from nonhuman.


(3) In light of the above, it seems to me that, while the classical
tools provided by the theory of computation may not be enough, they
are the best that we have got in the way of tools for making sense of
the issues.

(4) There is some confusion between Turing machines, which possess
unbounded memory, and physically realizable machines, which do not.
This makes a lot of difference in one way because the real lesson
of the theory of computation is not that, if human beings are
algorithmic beings, then they are equivalent to Turing machines,
but rather that they would be equivalent to finite-state machines.
The same applies to physically realized computers and any physically
realized connectionist hardware that anyone might care to assemble.
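The point above can be made concrete with a counting argument (my own illustrative sketch, not part of the original post): a Turing machine whose tape is restricted to a fixed number of cells has only finitely many distinct configurations, so its behavior is a transition function on a finite set, i.e., it is formally a finite-state machine.  The function name and parameters below are hypothetical, chosen just for this illustration.

```python
# Illustrative sketch: why any machine with bounded memory is, formally,
# a finite-state machine.  A configuration of a bounded-tape Turing
# machine is a triple (control state, head position, tape contents),
# and there are only finitely many such triples.

def bounded_tm_configurations(control_states: int,
                              tape_cells: int,
                              alphabet_size: int) -> int:
    """Count the distinct configurations of a TM restricted to a
    tape of `tape_cells` cells over an alphabet of `alphabet_size`
    symbols, with `control_states` control states."""
    # control states * head positions * possible tape contents
    return control_states * tape_cells * alphabet_size ** tape_cells

# Even a tiny machine has a large but finite configuration space:
print(bounded_tm_configurations(control_states=4,
                                tape_cells=8,
                                alphabet_size=2))  # 4 * 8 * 2**8 = 8192
```

The count grows exponentially in the tape size, which is why the finite-state description is rarely useful in practice, but the finiteness itself is the whole theoretical point.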

