Response to Jackendoff's challenges -- notice of conference presentation and availability of paper

Ross Gayler r.gayler at mbox.com.au
Thu Jul 3 19:26:23 EDT 2003


The linguist Ray Jackendoff proposed four challenges to cognitive
neuroscience in his book "Foundations of Language".  Each challenge
corresponds to an element of core linguistic functionality that Jackendoff
sees as being poorly addressed by current connectionist models.

On August 5, 2002, Jerome Feldman broadcast these challenges to the
Connectionists mailing list under the subject "Neural binding".  After
receiving several responses, Feldman concluded on August 21 that "it isn't
obvious (at least to me) how to use any of the standard techniques to
specify a model that meets Jackendoff's criteria".

I have prepared a paper setting out how I believe one family of
connectionist architectures can meet Jackendoff's challenges.  It will be
presented at the Joint International Conference on Cognitive Science, to be
held in Sydney, Australia, from 13 to 17 July 2003
(http://www.arts.unsw.edu.au/cogsci2003/).  If you will be attending the
conference and wish to hear the presentation, it is currently scheduled for
1 p.m. on Thursday 17th in the Language stream
(http://www.arts.unsw.edu.au/cogsci2003/conf_content/program_thurs_pm.htm).
An extended abstract is included below.  Anyone who wishes a preprint copy
of the paper (which is very condensed to fit the conference format) should
e-mail me at r.gayler at mbox.com.au

Ross Gayler
Melbourne, AUSTRALIA

r.gayler at mbox.com.au
+61 413 111 303 mobile

Vector Symbolic Architectures answer Jackendoff's challenges for cognitive
neuroscience.
Ross Gayler
Vector Symbolic Architectures (Gayler, 1998; Kanerva, 1997; Plate, 1994) are
a little-known class of connectionist models that can directly implement
functions usually taken to form the kernel of symbolic processing.  They are
an enhancement of tensor product variable binding networks (Smolensky,
1990).

Like tensor product networks, VSAs can create and manipulate
recursively structured representations in a natural and direct connectionist
fashion without requiring lengthy training.  However, unlike tensor product
networks, VSAs afford a practical basis for implementation because they
require only fixed-dimension vector representations.  The fact that VSAs
relate directly, without training, to both simple, practical vector
implementations and core symbolic processing functionality suggests that
they would provide a fruitful connectionist basis for the implementation of
cognitive functionality.
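As a concrete illustration of the dimensionality point (this is my own
minimal sketch, not code from the paper, using multiplicative binding of
random +/-1 vectors in the style of Gayler's MAP coding): the tensor (outer)
product of two n-dimensional vectors has n*n components, whereas the
elementwise product stays n-dimensional, and each +/-1 vector is its own
unbinding inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def rand_vec(n, rng):
    # Random +/-1 vector: a MAP-style distributed representation.
    return rng.choice([-1, 1], size=n)

role, filler = rand_vec(n, rng), rand_vec(n, rng)

# Tensor product binding (Smolensky, 1990): dimensionality grows to n*n.
tensor_bound = np.outer(role, filler)
assert tensor_bound.size == n * n

# VSA multiplicative binding: elementwise product keeps dimension fixed at n.
vsa_bound = role * filler
assert vsa_bound.size == n

# Unbinding: multiply by the role again (each +/-1 vector is self-inverse,
# since role * role is the all-ones vector).
recovered = role * vsa_bound
assert np.array_equal(recovered, filler)
```

Because the bound vector has the same dimension as its inputs, the binding
operation can be applied recursively without the representation growing.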

Ray Jackendoff (2002) posed four challenges that linguistic combinatoriality
and rules of language present to theories of brain function.  These
challenges are: the massiveness of the binding problem, the problem of
dealing with multiple instances, the problem of variables, and the
compatibility of representations in working memory and long-term memory.
The essence of these problems is the question of how to neurally instantiate
the rapid construction and transformation of the compositional structures
that are typically taken to be the domain of symbolic processing.
Drawing on work by Gary Marcus (2001), Jackendoff contended that these
challenges had not been widely recognised in the cognitive neuroscience
community and that the dialogue between linguistic theory and neural network
modelling would be relatively unproductive until the challenges were
answered by some technical innovation in connectionist models.  Jerome
Feldman (2002) broadcast these challenges to the neural network modelling
community via the Connectionists Mailing List.  The few responses he
received were unable to convince Feldman that any standard connectionist
techniques would meet Jackendoff's criteria.

In this paper I demonstrate that Vector Symbolic Architectures are able to
meet Jackendoff's challenges.
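To give a flavour of how such a demonstration can go (again, this sketch is
my own illustration under MAP-style assumptions, not code from the paper),
several role-filler bindings for a toy proposition can be superposed into a
single fixed-dimension vector and then queried by unbinding, addressing in
miniature the binding problem that Jackendoff raises:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def rand_vec():
    return rng.choice([-1, 1], size=n)

# Roles and fillers for a toy proposition "chase(dog, cat)".
roles = {r: rand_vec() for r in ("verb", "agent", "patient")}
fillers = {f: rand_vec() for f in ("chase", "dog", "cat")}

# Superpose (add) the role-filler bindings into ONE fixed-size vector.
sentence = (roles["verb"] * fillers["chase"]
            + roles["agent"] * fillers["dog"]
            + roles["patient"] * fillers["cat"])

# Query: who is the agent?  Unbinding with the role vector yields the
# filler plus noise; clean up by comparing against the known fillers.
noisy = roles["agent"] * sentence
best = max(fillers, key=lambda f: np.dot(fillers[f], noisy))
assert best == "dog"
```

The cross terms from the other bindings behave as low-amplitude noise
relative to the recovered filler, which is why the dot-product clean-up
reliably picks out the correct answer at this dimensionality.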

References
Feldman, J. (2002).  Neural binding.  Posted to Connectionists Mailing List,
5th August, 2002.
(http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/connect/connect-archives/arch.2002-08.gz
0005.txt; see also 8, 9, 18, and 21)
Gayler, R. W. (1998).  Multiplicative binding, representation operators, and
analogy.  In K. Holyoak, D. Gentner & B. Kokinov (Eds.), Advances in analogy
research: Integration of theory and data from the cognitive, computational,
and neural sciences (p. 405).  Sofia, Bulgaria: New Bulgarian University.
(http://cogprints.ecs.soton.ac.uk/archive/00000502/; see also 500 and 501)
Jackendoff, R. (2002).  Foundations of language: Brain, meaning, grammar,
evolution. Oxford: Oxford University Press.
Kanerva, P. (1997).  Fully distributed representation.  In Proceedings Real
World Computing Symposium (RWC'97, Tokyo).  Report TR-96001 (pp. 358-365).
Tsukuba-city, Japan: Real World Computing Partnership.
(http://www.rni.org/kanerva/rwc97.ps.gz see also
http://www.rni.org/kanerva/pubs.html)
Marcus, G. (2001).  The algebraic mind.  Cambridge, MA, USA: MIT Press.
Plate, T. A. (1994).  Distributed representations and nested compositional
structure.  Ph.D. thesis, Department of Computer Science, University of
Toronto.
(http://pws.prserv.net/tap/papers/plate.thesis.ps.gz see also
http://pws.prserv.net/tap/)
Smolensky, P. (1990). Tensor product variable binding and the representation
of symbolic structures in connectionist systems. Artificial Intelligence,
46, 159-216.





