More on AI vs. NN

Jim Hendler hendler at cs.UMD.EDU
Thu Dec 20 09:18:09 EST 1990


 I guess I feel compelled to add my two cents to this.  Here's a copy
of an editorial I wrote for a special issue of the journal Connection
Science (Carfax Publ.)  The issue concerned models that combined connectionist
and symbolic components:
----------------------------
On The Need for Hybrid Systems
J. Hendler
(From:  Connection Science 1(3), 1989 )
 
It is very easy to make an argument in favor of the development of
hybrid connectionist/symbolic systems from an engineering (or
task-oriented) perspective.  After all, there is a clear and present
need for developing systems which can perform both ``perceptual'' and
``cognitive'' tasks.  Some examples include:

   Planning applications, where recognition of other agents'
   plans must be coupled with intelligent counteractions;

   Speech understanding programs, where speech processing, which has
   been most successful as a signal-processing application, needs to
   be coupled with syntactic and semantic processing;

   Automated manufacturing or testing applications, where visual
   perception needs to be coupled with expert reasoning; and

   Expert image processing systems, where line or range detectors,
   radar signal classifiers, unknown perspective projections, quantitative
   image processors, etc. must be coupled with top-down knowledge
   sources such as maps and models of objects.
  
To build systems that provide both ``low-level'' perceptual
functionality and high-level cognitive abilities, we need to
capture the best features of current connectionist
and symbolic techniques.  This can be done in one of four ways:
  
  We can figure out a methodology for getting traditional AI
systems to handle image and signal processing, to handle pattern
recognition, and to reason well about perceptual primitives,
 
  We can figure out a methodology for getting connectionist systems
to handle ``high-level'' symbol-processing tasks in applied domains.  This
might involve connectionist systems which can manipulate data
structures, handle variable binding in long inference chains, deal
with the control of inferencing, etc.,
 
  We can work out a new ``paradigm,'' yet another competitor to
enter the set of possible models for delivering so-called intelligent
behavior,
 
  Or, we can take the current connectionist systems and the current
generation of AI systems and produce hybrid systems exploiting the
strengths of each.
 
While the first three of these are certainly plausible approaches, and
all three are currently driving many interesting research projects,
they require major technological breakthroughs and much rethinking of
the current technologies.  The fourth, building hybrid models,
requires no major developments, but rather the linking of current
technologies.  This approach therefore appears to provide the path of
least resistance in the short term.  From a purely applied
perspective, we see a fine reason to pursue the building of hybrid
models.
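
To make the fourth option concrete, here is a minimal sketch of what
``linking the current technologies'' might look like in code.  It is an
illustrative assumption, not any particular published system: a toy
random-weight linear layer stands in for a trained connectionist
classifier, and a small rule table stands in for a symbolic reasoner.

    import numpy as np

    rng = np.random.default_rng(1)

    # "Connectionist" front end: a fixed random-weight linear layer
    # standing in for a trained perceptual classifier (assumption:
    # a real system would train these weights on sensor data).
    LABELS = ["obstacle", "target", "free_space"]
    weights = rng.standard_normal((3, 4))  # 3 output symbols x 4 input features

    def perceive(features):
        """Map a raw feature vector to a discrete symbol (winning unit)."""
        activations = weights @ np.asarray(features, dtype=float)
        return LABELS[int(np.argmax(activations))]

    # Symbolic back end: condition-action rules over the emitted symbols.
    RULES = {
        "obstacle": "plan_detour",
        "target": "approach_and_grasp",
        "free_space": "continue_route",
    }

    def decide(features):
        symbol = perceive(features)   # subsymbolic pattern -> symbol
        return symbol, RULES[symbol]  # symbolic inference over the symbol

    print(decide([0.9, -0.2, 0.4, 0.1]))

The point of the sketch is the interface, not the components: the
network's output units are read off as discrete symbols, and everything
downstream is ordinary symbolic computation.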
 
If this were the only reason for building hybrid models (and it is a
strong one), it would legitimize much research in this area.  The
purpose of this editorial, however, and in fact the rationale behind
the editing of this special issue on hybrid models, is to convince the
reader that there is more to hybrid models than simply a merging of
technologies for the sake of building new applications: In particular,
that the development of hybrid models holds major promise for
bettering our understanding of human cognition and for helping to
point the way in the development of future cognitive modeling
techniques.
 
This claim is based on facing up to reality: neither the current AI
paradigm nor the current connectionist paradigm appears sufficient for
providing a basic understanding of human cognitive processing.  I
realize this is quite a contentious statement, and I won't try to
defend it rigorously in this short article.  Instead, I'll try to
outline the basic intuition behind this statement.
 
Essentially, the purely symbolic paradigm of much of AI suffers from
not being grounded in perception.  Many basic types of cognitive
processing, particularly those related to vision and the other senses,
have been formed by many generations of evolution.  While it is
possible that a symbolic mechanism could duplicate the abilities of
these ``hard-wired'' systems, it seems unlikely.  Models of
higher-level cognitive abilities, such as understanding speech or
recognizing images, that do not attempt to build on these low-level
mechanisms may be doomed to failure.
 
Consider, for example, the evidence for categorization errors and
priming confusions in humans. Is this evidence of some sort of
weakness in the processing system, or is it a result of the very
mechanisms by which perceptual information processing proceeds in
humans?  If, as many believe, the latter is true, then it would appear
to be the case that the apparatus by which humans perform perceptual
categorization forces categories to have certain properties.  If this
is the case, then the ability of humans to perform very broad
generalizations and to recognize commonalities between widely
divergent inputs is probably integrally related to this perceptual
apparatus.  If so, an attempt to model human understanding which
doesn't take the ``limitations'' of this perceptual categorization
mechanism seriously may be doomed to failure.  Further, it may even be
that any attempt to use a more nearly perfect categorization scheme
will lack this critical property.  Thus,
understanding perceptual processing, as it is implemented in the
brain, may be crucial to an understanding of cognition as performed by
the human problem solver.
 
The connectionist approach, sometimes called the subsymbolic paradigm,
suffers from a related problem.  While current research appears to
indicate that this approach may be better for modeling the lower level
cognitive processes, there seems to be something crucially different
between human cognition and that of other animals.  It seems unlikely
that this difference can be captured by a purely ``brain-based''
explanation.  Human problem solving requires abilities in
representation (the often-cited issue of ``compositionality'' being
one) and in symbol manipulation, which are currently beyond the scope
of the connectionist approach (except in very simplified cases).
While it is possible that brain size itself explains the differences
between human and animal cognition, many researchers seem to believe
that the answer requires more than this.  Explaining human thinking
requires more than simply explaining the purely associative mechanisms
found in most mammals: understanding these mechanisms is necessary for
a deeper understanding of human cognition, but it is not
sufficient.  Thus a connectionist science which addresses the
associative learning seen in the brain, without regard for the 
cognitive abilities resulting from that learning, is inadequate for 
a full understanding of human cognition.
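
To give a concrete sense of what ``variable binding'' means here, the
following is a minimal sketch of one proposed connectionist mechanism,
tensor-product binding: a role and a filler, each a distributed
activation pattern, are bound by an outer product, and superimposed
bindings can later be probed with a role vector to recover (an
approximation of) its filler.  All vectors and names below are
illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 50

    # Random distributed codes for roles and fillers (assumption:
    # independent high-dimensional random vectors are nearly orthogonal).
    role_agent  = rng.standard_normal(dim)
    role_object = rng.standard_normal(dim)
    filler_john = rng.standard_normal(dim)
    filler_ball = rng.standard_normal(dim)

    # Bind each filler to its role with an outer product and
    # superimpose the bindings into a single activation pattern.
    scene = np.outer(role_agent, filler_john) + np.outer(role_object, filler_ball)

    # Unbind: probe the pattern with a role vector; the cross term is
    # small because unrelated random vectors are nearly orthogonal.
    recovered = role_agent @ scene

    def closest(vec, candidates):
        """Name of the candidate vector with the highest cosine similarity."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(candidates, key=lambda name: cos(vec, candidates[name]))

    print(closest(recovered, {"john": filler_john, "ball": filler_ball}))
    # expected output: john

Even this toy illustrates the difficulty noted above: each bind and
unbind step adds noise from the cross terms, so errors accumulate over
long inference chains.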
 
Unfortunately, understanding and modeling human cognitive processing in
a way that takes both abilities and implementation into account is not
an easy task.  To solve this problem we will eventually need to
understand both the architecture of the brain and the ``programs''
running on that architecture.  This interconnection between
implementational constraints on the one hand, and functional
requirements on the other, puts many bounds on the set of possible
models which could truly be considered as the basis of human
intelligence. But how do we probe the models lying within these
bounds?  What would they look like?  Can they be based on current
connectionist techniques?  Will they function in a manner similar to
current AI models?
 
We know of no ``correct'' research paradigm for studying these
problems: Connectionist models clearly must be pursued for a deeper
understanding of the ``firmware'' of thought; traditional AI must be
pursued to give us insight into the functional requirements of
thought.  But, it is my contention that a third path must also be
followed: To be able to gain a true insight into what
implementationally correct, cognitively robust models of human
cognition will look like, we need to study models which try to connect
the two paradigms.  Hybrid models, rather than being viewed simply as
a short term engineering solution, may be crucial to our gaining an
understanding of the parameters and functions of biologically
plausible cognitive models.  From this understanding we might hope to
see the development of a new, and potentially more correct, paradigm
for the study of ``real,'' as opposed to artificial, intelligence.

