What is a Symbol System?

Mitsuharu Hadeishi well!mitsu at apple.com
Mon Nov 20 22:31:20 EST 1989


This is an interesting question.  First of all, since a recurrent neural
network can emulate any finite-state automaton (and, given unbounded
precision or memory, is Turing equivalent), it almost goes without saying
that recurrent NNs should be capable of the symbolic-level processing of
which you speak.
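
To make the finite-state point concrete, here is a minimal sketch (in
Python with NumPy; the parity automaton and the two-layer threshold
construction are my own illustrative choices, not anything claimed in the
argument above) of a recurrent net of threshold units that emulates a
small finite-state automaton:

    import numpy as np

    step = lambda z: (z > 0).astype(float)   # Heaviside threshold unit

    # Parity FSA: states {even, odd}, inputs {0, 1}; reading a 1 flips the state.
    n_states, n_inputs = 2, 2
    delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    # Layer 1: one unit per (state, input) pair, firing iff both are active.
    pairs = list(delta.keys())
    W1 = np.zeros((len(pairs), n_states + n_inputs))
    for i, (q, a) in enumerate(pairs):
        W1[i, q] = 1.0                 # connection from state unit q
        W1[i, n_states + a] = 1.0      # connection from input unit a
    b1 = -1.5                          # fires only when both inputs are on

    # Layer 2 (recurrent): next-state unit q' ORs all pairs with delta(q, a) = q'.
    W2 = np.zeros((n_states, len(pairs)))
    for i, (q, a) in enumerate(pairs):
        W2[delta[(q, a)], i] = 1.0
    b2 = -0.5

    def run(bits):
        s = np.array([1.0, 0.0])              # start in the "even" state
        for bit in bits:
            x = np.eye(n_inputs)[bit]         # one-hot encode the input symbol
            h = step(W1 @ np.concatenate([s, x]) + b1)
            s = step(W2 @ h + b2)
        return int(s.argmax())                # 0 = even parity, 1 = odd parity

    print(run([1, 0, 1, 1]))   # 1 (odd number of ones)
    print(run([1, 1, 0, 0]))   # 0 (even number of ones)

The same construction scales to any finite transition table, which is all
the argument above requires.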

Before going further, however, I'd like to address the symbolist point of view that
higher-level cognition is purely symbolic, irrespective of the implementation
scheme.  I submit this is patently absurd.  Symbolic representations of
thought are simply models of how we think, and quite crude models at that.
They happen to have several redeeming qualities, however, among them
that they are simple, well-defined, and easy to manipulate.

In truth, however, though it is clear that many operations (such as the
syntactic analysis of language) operate, at least in part, within the
framework of symbolic processing, others (such as understanding a subtle
poem) fall outside it.  In addition, there are many other forms of
higher-level cognition, such as those visual artists engage in, which
do not easily lend themselves to symbolic decomposition.  I submit that
even everyday actions and thoughts do not follow any strict symbolic
decomposition, though to some degree of approximation they can be
modelled *as though* they were following rules of some kind.

I think the comparison between rule-based and analog systems is apt;
however, in my opinion it is the analog systems which have the greater
flexibility, or one might say economy of expression.  That is to say,
inasmuch as one can emulate one with the other, they are equivalent;
but given limitations on complexity and size, I think it is clear that
complex analog dynamical systems have the edge.

The fact is that, as a model of the world or of how we think, rule-based
representations are sorely lacking.  Using them is like trying to
paint a landscape using polygons: one can do it, but polygons are not
particularly well suited to the task, except in very simple situations
(or situations where the landscape happens to be man-made).

We should not confuse the map with the territory.  Just because we
happen to have this crude model for thinking, i.e., the symbolic
model, does not mean that is *how* we think.  We may even describe
our decisions this way, but the intractability of AI problems outside
very limited-domain applications suggests the weaknesses of the model.
For example, natural language systems work only with extremely limited
context.  The fact that they work at all is evidence that our symbolic
models are not completely inadequate; that they are limited in domain,
however, suggests they are nonetheless mere approximations.
Connectionist models, I believe, have a much greater chance of capturing
the true complexity of cognitive systems.

In addition, the recently introduced fuzzy reasoning and nonmonotonic
logic are extensions of the symbolic model which certainly improve the
situation, but they also point up the main weaknesses of symbolic models
of cognition.  Symbolic models address only one aspect of the thinking
process, perhaps not even the most important part.  For example, a master
chess player typically considers only about a hundred possible moves,
yet can beat a computer program that considers tens of thousands.  The
intractability of problems even more difficult than chess points to the
same conclusion.  Before the symbolic engine can be put into action, a
great deal of pre-processing goes on, and that pre-processing will likely
not be best described in symbolic terms.
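
As a rough, purely illustrative calculation (the branching factor of 35
is an assumed typical value for chess, not a figure from the discussion
above), the exponential growth of exhaustive game-tree search makes the
contrast with a hundred candidate moves vivid:

    # Assumed average number of legal moves per chess position.
    branching_factor = 35
    for depth in range(1, 7):                  # depth in plies (half-moves)
        nodes = branching_factor ** depth
        print(f"depth {depth}: ~{nodes:,} positions")
    # depth 6: ~1,838,265,625 positions, versus roughly a hundred candidate
    # moves weighed by a strong human player.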

Mitsu Hadeishi
Open Mind
16110 S. Western Avenue
Gardena, CA 90247
(213) 532-1654
(213) 327-4994
mitsu at well.sf.ca.us

