Connectionist symbolic processing

Michiro Negishi negishi at cns.bu.edu
Tue Aug 25 05:11:25 EDT 1998


Here are my two cents, from the self-organizing camp.

On Mon, 10 Aug 1998 Dave_Touretzky at cs.cmu.edu wrote:
> The problem, though, was that we
> did not have good techniques for dealing with structured information
> in distributed form, or for doing tasks that require variable binding.
> While it is possible to do these things with a connectionist network,
> the result is a complex kludge that, at best, sort of works for small
> problems, but offers no distinct advantages over a purely symbolic
> implementation.

As many have already argued, I don't see the issue of structured data
representation as *the main* obstacle in constructing a model of
symbolic processing, although it is an interesting and challenging
subject. In my neural model of syntactic analysis and thematic role
assignment, for instance, I use the following neural fields to
represent a word/phrase.

(1) A field for representing the word or the head of the phrase.
    (there is a computational algorithm for determining the head of a
    phrase)
(2) Fields for representing the features of the word/phrase as well
    as its children in the syntactic tree (or the semantic structure).
    Features are obtained by PCA over the contexts in which the
    word/phrase appears.
(3) Associator fields for retrieving children and the parent.

In plain words, (1) is the lexical information, (2) is the featural
information, and (3) is the associative pointer. The resulting
representation is similar to RAAM. A key point of the feature
extraction in the model is that once the parser begins to combine
words into phrases, it collects context distributions in terms of the
heads of the phrases, which are in turn used in the PCA.
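To make the three fields concrete, here is a minimal sketch (all names
and the toy co-occurrence counts are mine, not from the model): a node
carries (1) its head word, (2) a feature vector from PCA over context
co-occurrences, and (3) associative links to its parent and children.

```python
import numpy as np

def pca_features(cooc, k=2):
    """Project rows of a word-by-context co-occurrence matrix
    onto the top-k principal components."""
    X = cooc - cooc.mean(axis=0)          # center each context dimension
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T                   # (n_words, k) feature vectors

class Node:
    """One word or phrase; a phrase node is keyed by its head word."""
    def __init__(self, head, features):
        self.head = head                  # field (1): lexical information
        self.features = features          # field (2): PCA feature vector
        self.parent = None                # field (3): associative pointers
        self.children = []

    def attach(self, child):
        self.children.append(child)
        child.parent = self

# Toy corpus statistics: rows = words, columns = context slots.
cooc = np.array([[4., 0., 1.],
                 [3., 1., 0.],
                 [0., 5., 2.]])
feats = pca_features(cooc, k=2)

the = Node("the", feats[0])
dog = Node("dog", feats[1])
np_ = Node("dog", feats[1])   # the NP inherits its head's fields
np_.attach(the)
np_.attach(dog)
```

Once phrases like `np_` exist, their heads can themselves enter the
co-occurrence counts, so the PCA features are refined as parsing
proceeds.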

The model was trained on a corpus of mothers' speech to children (a
part of the CHILDES corpus), so it is not a "toy" model, although it
cannot yet cope with the Wall Street Journal (I have to crawl before I
walk :). That was to be expected given the very strict learning
conditions of the model: no initial lexical or syntactic knowledge,
and no external corrective signals from a teacher.

I think it is a virtue rather than a defect that this type of
representation does not represent all concepts at once. In many
languages, each word represents only a very limited number of
concepts, although it can also convey many features of itself and its
children (e.g., in many languages, agreement morphemes attached to a
verb encode the gender, person, etc. of the subject and objects).
There are also island effects, which show that production of a clause
has access only to the concept itself and its direct children (and not
to the internal structure below each child).

I think that the real challenge is to do cognitively plausible
modeling that sheds new light on the understanding of language and
cognition. That is why I restrict myself to self-organizing networks.

As for future directions, I agree with Whitney Tabor that the
application of fractal theory may be promising. I would be interested
to know if someone has tried to interpret HPSG, or the more classical
X-bar theory, in terms of fractals.



Here are some refs on self-organizing models of language (apart from
the famous ones by Miikkulainen). This line of research is alive, and
will kick soon.

Ritter, H. and Kohonen, T. (1990). Learning semantotopic maps from
context. In Proceedings of IJCNN 90, Washington D.C., vol. I.

Scholtes, J. C. (1991). Unsupervised context learning in natural
language processing. In Proceedings of IJCNN 91, Seattle.

Negishi, M. (1995). Grammar learning by a self-organizing network. In
Advances in Neural Information Processing Systems 7, 27-35. MIT Press.


My unpublished thesis work is accessible from

http://cns-web.bu.edu/pub/mnx/negishi.html

-----------------------------------------------------
                 Michiro Negishi 
-----------------------------------------------------
Dept. of Cognitive & Neural Systems, Boston Univ.
    677 Beacon St., Boston,   MA 02215 
Email: negishi at cns.bu.edu         Tel: (617) 353-6741
-----------------------------------------------------



