On blurring the gap between NN and AI

Scott.Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU
Thu Dec 27 20:15:39 EST 1990


    ...But could someone please tell me then why it is
    so common, both in the academic and the popular literature, to talk
    about AI and connectionism as if they were two separate fields?
    
I'll try...

AI and connectionism have the same long-term goals: to understand this
complex, wonderful phenomenon that we call intelligence.  We ultimately
want to understand intelligence at all levels and in precise mechanistic
terms -- sufficient, in principle, to allow us to emulate the phenomenon on
a machine of some kind.  If you define the field in terms of its goals, it
is clear that there is only one field here -- call it what you like.

Both the traditional AI and connectionist camps include some people who are
basically psychologists: they want to understand how human and animal
intelligence works.  To these people, computer simulations are useful, but
only as tools that help us to explore the capabilities and limitations of
various models.  Both camps also contain people who are basically
engineers: they want to build more intelligent widgets, and if one can
derive some useful ideas or constraints by investigating the workings of
the human mind, that makes the task a bit easier.  Psychology, to these
people, is just a fertile source of search-guiding heuristics.
And, of course, there are lots of people whose interests fall between these
two extremes.  But the engineer/psychologist split is orthogonal to the
AI/connectionist split.

What separates traditional, mainstream AI from connectionism is the choice
of tools, and a parallel choice of what parts of the problem to work on now
and what parts to defer until later (if ever).  Traditional AI has had a
good deal of success building upon the central ideas of heuristic search
and symbolic description.  These tools seem to be the right ones for
modeling high-level conscious reasoning in clean, precise problem domains
(or those in which the messy bits can be safely ignored).  These tools are
not so good at low-level sensory/motor tasks, flashes of recognition, and
the like.  AI people respond to this limitation in a variety of ways: some
define the problem away by saying that this low-level stuff is not really a
part of "intelligence"; some say that it's important, but that we'll get to
it later, once the science and technology of symbolic AI have progressed
sufficiently; and some admit that connectionism probably offers a better
set of tools for handling the low-level and messy parts of thought.
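
To make the tool contrast concrete, here is a minimal sketch of best-first
heuristic search over symbolic states -- my own toy illustration in Python,
not code from any particular AI system; the start state, successor function,
and heuristic below are made-up placeholders:

    import heapq
    from itertools import count

    def best_first_search(start, goal_test, successors, heuristic):
        # Expand states in order of the heuristic's estimate; the counter
        # breaks ties so unorderable states are never compared directly.
        tie = count()
        frontier = [(heuristic(start), next(tie), start)]
        seen = {start}
        while frontier:
            _, _, state = heapq.heappop(frontier)
            if goal_test(state):
                return state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier,
                                   (heuristic(nxt), next(tie), nxt))
        return None

    # Toy domain: reach 0 from 37 by halving or decrementing, with |state|
    # as the search-guiding heuristic.
    print(best_first_search(37, lambda s: s == 0,
                            lambda s: [s // 2, s - 1], abs))

The heuristic here plays exactly the role described above: psychology (or
anything else) can supply it, and it simply steers the expansion order.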

Connectionism offers a different set of tools.  These tools seem to be
better for fuzzy, messy problems that are hard to cast into the rigid
framework of symbols and propositions; they are not so good (yet) for
modeling what goes on in high-level reasoning.  Connectionists respond to
these evident limitations in a number of ways: some believe that high-level
symbolic reasoning will more-or-less automatically fall out of
connectionist models once we have the tools to build larger, more complex
nets; some believe that we should get to work now building hybrid
connectionist/symbolic systems; some just think we can put off the problem
of high-level reasoning for now (as evolution did for 4.5 billion years).
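
For contrast, a connectionist toy in the same spirit -- again my own sketch,
not a model from the literature: a single unit trained by the delta rule
picks up a noisy linear boundary from examples, with no one ever writing the
classification rule down as symbols:

    import random

    random.seed(0)

    def train_unit(examples, epochs=200, lr=0.1):
        # One linear threshold unit; the delta rule nudges the weights
        # toward whatever boundary the (noisy) examples suggest.
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in examples:
                out = 1.0 if w[0] * x1 + w[1] * x2 + b > 0 else 0.0
                err = target - out
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # Noisy version of "class 1 iff x1 + x2 > 1" -- no symbolic rule in sight.
    data = []
    for _ in range(200):
        x = (random.random(), random.random())
        label = 1.0 if x[0] + x[1] + random.gauss(0.0, 0.1) > 1.0 else 0.0
        data.append((x, label))

    print(train_unit(data))  # weights approximate the underlying boundary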

Many mainstream AI people like to invoke the "Turing barrier": they imagine
that their system runs on some sort of universal computational engine, so
it doesn't really matter what that engine is made of.  The underlying
hardware can be parallel or serial, slow or fast -- that's just a matter of
speed, not a matter of fundamental capability.  Of course, that's just
another way of focusing on part of the problem while deferring another part
-- sooner or later, whether we are engineers or psychologists, we will have
to understand the speed issues as well.

Some important mental operations (e.g. "flashes" of recognition) occur so
fast that it is hard to come up with a serial model that does the job.  One
can work on these speed/parallelism issues without leaving the world of
hard-edged symbolic AI; my old NETL work was a step in this direction, and
there are several other examples.  But in connectionism, the tradition has
been to focus on the parallel implementation as an essential part of the
picture, along with the representations and algorithms.  Because of the
Turing barrier, AI people and biologists may feel that they have little to
learn from one another; no such separation exists in connectionism, though
we can certainly argue about whether our current models have abstracted
away all biological relevance and whether that matters.
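
As a rough serial caricature of the marker-passing idea (NETL itself was a
design for parallel hardware; this sketch is mine and only imitates the
flavor, over a hypothetical toy taxonomy): a query floods a marker out along
IS-A links, so "is X a kind of Y?" costs time proportional to the depth of
the network rather than its size -- on the parallel machine, that is; here
we simulate the flood with a queue:

    from collections import deque

    # Hypothetical toy taxonomy; each node lists its IS-A parents.
    ISA = {
        "clyde": ["elephant"],
        "elephant": ["mammal", "gray-thing"],
        "mammal": ["animal"],
    }

    def is_a(node, category):
        # Flood a marker upward from `node`; in a parallel machine all
        # links would fire at once.
        marked, queue = {node}, deque([node])
        while queue:
            n = queue.popleft()
            if n == category:
                return True
            for parent in ISA.get(n, []):
                if parent not in marked:
                    marked.add(parent)
                    queue.append(parent)
        return False

    print(is_a("clyde", "animal"))     # True
    print(is_a("mammal", "elephant"))  # False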

So I would say that connectionism and traditional AI are attacking the same
huge problem, but beginning at opposite ends of the problem and using very
different tools.  The intellectual skills needed by people in the two areas
are very different: continuous math and statistics for connectionists;
discrete algorithms and logic for the AI people.  Neither approach has a
single, coherent "philosophy" or "model" or "mathematical foundation" that
I can see -- I'm not really sure what sort of foundation Lev Goldfarb is
talking about -- but there are two loose collections of techniques that
differ rather dramatically in style.

AI/Connectionism can be thought of as one field or two very different ones.
It depends on whether you want to emphasize the common goals or the very
different tools used by the two groups.  One can define AI in an
all-encompassing way, or one can define it in a way that emphasizes the use
of hard-edged symbols and that rules out both connectionism and fuzzy
logic.  I prefer the broader definition -- it makes it a bit easier for us
unprincipled pragmatists to sneak back and forth between the two camps --
but it is seldom worth arguing about where to draw the boundaries of a
field.

-- Scott



