Subcognition and the Limits of the Turing Test
Bob French
french at cogsci.indiana.edu
Tue May 23 12:41:17 EDT 1989
A pre-print of an article on subcognition and the Turing Test
to appear in MIND:
"Subcognition and the Limits of the Turing Test"
Robert M. French
Center for Research on Concepts and Cognition
Indiana University
Ostensibly a philosophy paper (to appear in MIND at the end
of this year), this article is of special interest to connectionists.
It argues that:
i) as a REAL test for intelligence, the Turing Test is
inappropriate in spite of arguments by some philosophers to the
contrary;
ii) only machines that have experienced the world as we
have could pass the Test. This means that such machines would
have to learn about the world in approximately the same way that we
humans have -- by falling off bicycles, crossing streets, smelling
sewage, tasting strawberries, etc. This is not a statement about
the inherent inability of a computer to achieve intelligence;
rather, it is a comment about the use of the Turing Test as a means
of testing for that intelligence;
iii) (especially for connectionists) the physical,
subcognitive and cognitive levels are INEXTRICABLY interwoven and it
is impossible to tease them apart. This is ultimately the reason why
no machine that had not experienced the world as we had could ever
pass the Turing Test.
The heart of the discussion of these issues revolves around
humans' use of a vast associative network of concepts that operates,
for the most part, below cognitive perceptual thresholds and that
has been acquired over a lifetime of experience with the world. The
Turing Test tests for the presence or absence of this HUMAN
associative concept network, which explains why it would be so
difficult -- although not theoretically impossible -- for any machine
to pass the Test. This paper shows how a clever interrogator could
always "peek behind the screen" to unmask a computer that had not
experienced the world as we had by exploiting human abilities based
on the use of this vast associative concept network, for example, our
abilities to analogize and to categorize.
This paper is short and non-technical but nevertheless focuses
on issues that are of significant philosophical importance to AI
researchers, and to connectionists in particular.
If you would like a copy, please send your name and address
to:
Helga Keller
C.R.C.C.
510 North Fess
Bloomington, Indiana 47401
or send an e-mail request to helga at cogsci.indiana.edu
- Bob French
french at cogsci.indiana.edu