Connectionist symbol processing: any progress?

Ross Gayler r.gayler at psych.unimelb.edu.au
Fri Aug 14 07:57:14 EDT 1998


At 11:33 12/08/98 -0700, Jerry Feldman 
<jfeldman at ICSI.Berkeley.EDU> wrote:

..
> It is true that none of this is much like Touretsky's 
>early attempt at a holographic LISP and that there has 
>been essentially no work along these lines for a decade. 
>There are first order computational reasons for this. 
>These can be (and have been) spelled out technically
>but the basic idea is straightforward - PDP (Parallel 
>Distributed Processing) is a contradiction in terms. To 
>the extent that representing a concept involves all of 
>the units in a system,
>    only one concept can be active at a time.
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Dave Rumelhart says this is stated somewhere in 
>the original PDP books, but I forget where. The same 
>basic point accounts for the demise of the physicists' 
>attempts to model human memory as a spin glass. 
>Distributed representations do occur in the brain and
>are useful in many tasks, conceptual representation just 
>isn't one of them.
..

I would like to see where it has been "spelled out 
technically" that in a connectionist system "only one 
concept can be active at a time", because there must be
some false assumptions in the proof.  This follows from the 
fact that the systems developed by, for example, Smolensky, 
Kanerva, Plate, Gayler, and Halford et al *depend* on the 
ability to manipulate multiple superposed representations,
and they actually work.
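To make the superposition point concrete, here is a minimal sketch (mine, not from any of the cited systems) of the basic mechanism those architectures rely on: several concepts can be simultaneously active in one high-dimensional distributed vector and still be individually detected. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10_000  # high dimensionality makes random vectors nearly orthogonal

# One random bipolar (+1/-1) vector per concept.
concepts = {name: rng.choice([-1, 1], size=dim)
            for name in ["cat", "dog", "fish", "bird"]}

# Superpose two concepts by elementwise addition: both are now
# "active" in the same fully distributed pattern.
trace = concepts["cat"] + concepts["dog"]

# Probe the trace: superposed concepts give a normalised dot product
# near 1; absent concepts give a value near 0.
for name, vec in concepts.items():
    similarity = np.dot(trace, vec) / dim
    print(f"{name}: {similarity:+.2f}")
```

Because the cross-talk between independent random vectors shrinks as roughly 1/sqrt(dim), the present and absent concepts are cleanly separable, contrary to the "only one concept can be active at a time" claim.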

I do accept that

> It is true that none of this is much like Touretsky's 
>early attempt at a holographic LISP

and partially accept that there has 

>been essentially no work along these lines for a decade. 

but explain it by:

1) Touretzky's work was an important demonstration of 
technical capability but not a serious attempt at a
cognitive architecture.  There is no reason to extend
that particular line of work.

2) Although the outer-product architectures can be (and
have been) used with weight learning procedures, such as
backpropagation, one of their major attractions is that so
much can be achieved without iterative learning.  To pursue
this line of research requires the power to come from the
architecture rather than an optimisation algorithm and a
few thousand degrees of freedom.  Therefore, this line of
research is much less likely to produce a publishable 
result in a given time frame for a fixed effort (because
you can't paper over the gaps with a few extra df).
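As an illustration of power coming from the architecture rather than from an optimisation algorithm, here is a sketch (my own, in the style of Plate's holographic reduced representations) of role/filler binding with circular convolution. Nothing is trained; the structure is encoded and decoded directly. The specific roles and fillers are made-up examples.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4096

def cconv(a, b):
    # Circular convolution: the binding operator.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # Circular correlation: the approximate inverse (unbinding).
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def randvec():
    return rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

role_agent, role_object = randvec(), randvec()
john, mary = randvec(), randvec()

# Encode "agent=John, object=Mary" as one superposed trace,
# with no iterative weight learning at all.
trace = cconv(role_agent, john) + cconv(role_object, mary)

# Unbind the agent role; the noisy result is identified by
# similarity to the known fillers.
probe = ccorr(role_agent, trace)
candidates = {"john": john, "mary": mary}
best = max(candidates, key=lambda k: np.dot(probe, candidates[k]))
print(best)
```

The decoded filler is noisy, so a clean-up memory (nearest-neighbour lookup, as above) is part of these architectures; the point is that the representational work is done by the fixed algebraic operations, not by thousands of fitted parameters.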

3) The high-risk, high-effort nature of research into
outer-product cognitive architectures without optimisation
algorithms makes it unattractive to most researchers.
You can't give a problem like this to a PhD student
because you don't know the probability of a publishable
result.  The same argument applies to grant applications.
The rational researcher is better advised to attack a more
obviously soluble problem.

So, I partially disagree with the statement that there has been

>essentially no work along these lines for a decade. 

because there has been related (more cognitively focussed) 
work proceeding for the last decade.  It has just been 
relatively quiet and carried out by a few people who can
afford to take on a high effort, high risk project.

Cheers,
Ross Gayler
