Symbol systems vs AI systems

Bruce Krulwich krulwich at zowie.ils.nwu.edu
Tue Nov 21 16:12:04 EST 1989


I'm not sure if this is a good idea, but I'm going to throw some thoughts
into the "symbol system" discussion.  Please note that I'm responding
to the CONNECTIONISTS list and do not wish my message to be forwarded to
any public newsgroups.

There are two implicit assumptions that I think Steve is making in his
message, especially given that it was sent to CONNECTIONISTS.  The first is
that these characteristics of a "symbol system" do not apply to
connectionist nets, and the second is that the "symbol systems" that he
characterizes are in fact all classical (non-connectionist) AI systems.
Even if he's not making these two assumptions, I think that others do, so
I'm going to go ahead and discuss them.


First, consider the assumption that Steve's characterizations of "symbol
systems" do not apply to neural nets.  Looking at the 8 aspects of the
definition, I think that each of them applies to NN's as much as to many
symbol systems.  In other words, they apply to symbol systems if you look
at symbol systems purely syntactically and ignore the meaning and theory
that goes into the system.  The same is true of NN's.  From a strictly
syntactic point of view, neural nets are simply doing transformations on
sets of unit values.  (I'm not restricting this to feed-forward nets as
many people do; this really is the case for all nets.)  They have very
specific rules for combining values; in fact, all the tokens (units) in
the system use the same rule on different inputs.
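
To make this syntactic picture concrete, here is a small Python sketch
(my own illustration; the names and the particular squashing rule are
mine, not anything from Steve's posting).  Viewed this way, a net is
just one shared rule applied by every unit to different inputs:

    import math

    def unit_rule(inputs, weights):
        # The single rule shared by all units: weighted sum, then squash.
        net = sum(w * x for w, x in zip(weights, inputs))
        return 1.0 / (1.0 + math.exp(-net))

    def update(values, weight_rows):
        # One syntactic transformation of the whole set of unit values;
        # nothing here refers to what the values are supposed to mean.
        return [unit_rule(values, row) for row in weight_rows]

    values = [0.2, 0.9, 0.4]
    weights = [[0.5, -1.0, 0.3],
               [1.2, 0.7, -0.4],
               [-0.6, 0.1, 0.8]]
    print(update(values, weights))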

Clearly this view of NN's is missing the forest for the trees, because the
point of NN's is the semantics of the computation they engage in and of the
information they encode.  My claim, however, is that the same is true of
all AI systems.  Looking at them as syntactic "symbol systems" is missing
their entire point, and is missing what differentiates them from other
systems.


This leads me to the assumption that all classical AI systems are "symbol
systems" as defined.  I think that this is less true than my claim above
about connectionist nets.  Let's look, for example, at a research area
called "case-based reasoning."  (For those unfamiliar with CBR, take a look
at the book "Inside Case-Based Reasoning" by Riesbeck and Schank, published
by LEA, or the proceedings of the past two case-based reasoning workshops
published by Morgan Kaufmann.)  The basic idea in CBR is that problems are
solved by making analogies to previously solved problems.  The assumption
is that general rules are impossible to obtain because there is often not
enough information to generalize from the instances an agent encounters.
Looking at Steve's characterizations of a "symbol system," we can see that
CBR systems have (a) no explicit rules, and (b) completely semantic
matching (in most cases) that is not dependent on the "shape" of the
representations.
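
To make the contrast concrete, here is a toy Python sketch of the basic
retrieve-and-reuse cycle (again my own illustration; the case library,
feature names, and similarity measure are invented, and no real CBR
system is this simple).  Note that there are no explicit if-then rules:
the "knowledge" is the case library itself, and matching is over the
semantic content of the features, not the shape of the representation:

    case_library = [
        {"features": {"domain": "travel", "goal": "cheap", "season": "winter"},
         "solution": "take the overnight train"},
        {"features": {"domain": "travel", "goal": "fast", "season": "summer"},
         "solution": "fly direct"},
    ]

    def similarity(problem, case_features):
        # Count shared feature values; a real CBR system would use a much
        # richer, domain-specific notion of semantic closeness.
        return sum(1 for k in problem if case_features.get(k) == problem[k])

    def solve(problem):
        # Retrieve the most analogous prior case and reuse its solution.
        best = max(case_library, key=lambda c: similarity(problem, c["features"]))
        return best["solution"]

    print(solve({"domain": "travel", "goal": "cheap", "season": "spring"}))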

Certainly there is a level at which CBR systems are "symbol systems" in the
same way that all computer programs are inherently "symbol systems."  The
point, however, is that this is _not_ the issue in CBR systems, just as
it's not the issue in connectionist models.  Since the _theory_ embedded
in CBR systems is independent of several of the characterizations of
"symbol systems," they are only "symbol systems" in the way that all
connectionist models are "symbol systems": because they are all simulated
on computers.

I have used CBR as an example here, but the same could be said about a lot
of the recent work in analogical reasoning, analytical learning (such as
EBL), default reasoning, and much of the rest of the semantically oriented
AI systems.  My claim is that one of two things is the case:

        (1) Much of the current work in classical AI does not fall into 
            what Steve has characterized as "symbol systems," or
        (2) Connectionist nets _do_ fall into this category.

It doesn't really matter which of these is the case, because each of them
makes the characterization useless as a characterization of AI systems.


I'd like to close by apologizing to CONNECTIONISTS readers if this post
starts or continues a bad trend on the mailing list.  The last thing that
anyone wants is for CONNECTIONISTS to mimic COMP.AI.  I've tried to keep
my points to ones that address the assumptions that a lot of connectionist
research makes, in the hope of keeping this from blowing up too much.



Bruce Krulwich
Institute for the Learning Sciences
krulwich at ils.nwu.edu

 

