Connectionist symbol processing: any progress?

Dmitri Rachkovskij ml_conn at infrm.kiev.ua
Mon Jan 29 17:17:14 EST 2001


Keywords: distributed representation, sparse coding, binary coding, binding,
variable binding, representation of structure, structured representation,
recursive representation, nested representation, compositional distributed
representations, connectionist symbol processing

Dear Colleagues,

The following paper (abstract enclosed):

Rachkovskij, Dmitri A. and Kussul, Ernst M. (2001) Binding and Normalization
of Binary Sparse Distributed Representations by Context-Dependent Thinning.
Neural Computation 13(2): 411-452.

will be on-line for NC subscribers at http://neco.mitpress.org/.
For those without access to Neural Computation, a draft is available at:

http://cogprints.soton.ac.uk/documents/disk0/00/00/12/40/index.html

or by ID code: cog00001240 at http://cogprints.soton.ac.uk/

Thank you and best regards,
Dmitri Rachkovskij
*************************************************************************
Dmitri A. Rachkovskij, Ph.D.                       Net: dar at infrm.kiev.ua
Senior Researcher,
V.M.Glushkov Cybernetics Center,                   Tel: 380 (44) 266-4119
Pr. Acad. Glushkova 40,
Kiev 03680, UKRAINE                                Fax: 380 (44) 266-1570
*************************************************************************
Encl: Abstract

Distributed representations have often been criticized as inappropriate
for encoding data with complex structure. However, Plate's Holographic
Reduced Representations and Kanerva's Binary Spatter Codes are recent
schemes that allow on-the-fly encoding of nested compositional
structures by real-valued or dense binary vectors of fixed
dimensionality.

In this paper we consider the Context-Dependent Thinning procedures
that were developed for representing complex hierarchical items in the
architecture of Associative-Projective Neural Networks. These
procedures provide binding of items represented by sparse binary
codevectors (with a low probability of 1s). Such an encoding is
biologically plausible and allows a high storage capacity of the
distributed associative memory in which the codevectors may be stored.
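
For concreteness, here is a minimal sketch in Python/NumPy of one
additive thinning variant. The dimensionality, density, number of
permutations, and helper names are illustrative assumptions, not the
paper's parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    N = 10000   # codevector dimensionality
    M = 100     # ones per codevector, so density M/N = 0.01 (sparse)

    def random_codevector():
        # Random sparse binary codevector with exactly M ones.
        v = np.zeros(N, dtype=bool)
        v[rng.choice(N, size=M, replace=False)] = True
        return v

    # Fixed random permutations, shared by every call to cdt_bind.
    PERMS = [rng.permutation(N) for _ in range(50)]

    def cdt_bind(components, target_ones=M):
        # Additive thinning sketch: OR the components together, then
        # accumulate conjunctions of the superposition with permuted
        # copies of itself until about target_ones bits survive.
        # The surviving ones are a subset of the superposition, so
        # the result keeps the components' density, and which bits
        # survive depends on all components jointly (hence
        # "context-dependent").
        z = np.logical_or.reduce(components)
        out = np.zeros(N, dtype=bool)
        for p in PERMS:
            out |= z & z[p]
            if out.sum() >= target_ones:
                break
        return out

    a, b = random_codevector(), random_codevector()
    ab = cdt_bind([a, b])
    print(ab.sum())        # ~100: same density as a single component
    print((ab & a).sum())  # ~50: the binding overlaps component a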
In contrast to known binding procedures, Context-Dependent Thinning
preserves the same low density (or sparseness) of the bound codevector
for a varying number of component codevectors. Moreover, a bound
codevector is not only similar to other bound codevectors with similar
components (as in other schemes), but also to the component codevectors
themselves. This allows the similarity of structures to be estimated
simply from the overlap of their codevectors, without retrieving the
component codevectors. It also makes the component codevectors easy to
retrieve.
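
Continuing the sketch above (the item memory and the cleanup helper are
hypothetical illustrations, not the paper's constructions), a plain bit
overlap both measures structural similarity and recovers components:

    # Item memory: the base codevectors we may want to recover later.
    items = {name: random_codevector() for name in ("a", "b", "c", "d")}

    bound_ab = cdt_bind([items["a"], items["b"]])
    bound_ac = cdt_bind([items["a"], items["c"]])
    bound_cd = cdt_bind([items["c"], items["d"]])

    # Structures sharing a component overlap far more than unrelated
    # ones, so similarity can be read off the codevectors directly.
    print((bound_ab & bound_ac).sum())  # ~25: both contain "a"
    print((bound_ab & bound_cd).sum())  # ~1: chance-level overlap

    def cleanup(query):
        # Recover the stored item most similar to the query codevector.
        return max(items, key=lambda n: (query & items[n]).sum())

    print(cleanup(bound_ab))  # "a" or "b": components are retrievable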
Examples of algorithmic and neural-network implementations of the
thinning procedures are considered. We also present representation
examples for various types of nested structured data (propositions
using role-filler and predicate-argument representation schemes, trees,
directed acyclic graphs), all using sparse codevectors of fixed
dimension. Such representations may provide a fruitful alternative to
the symbolic representations of traditional AI, as well as to localist
and microfeature-based connectionist representations.
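
As a final illustration, continuing the same sketch, a role-filler
encoding of a proposition; the predicate, roles, and fillers below are
invented for the example, and the paper's schemes are more elaborate.
Nesting is just repeated binding, and every level remains a sparse
codevector of the same dimension N:

    # Hypothetical encoding of the proposition bite(Spot, Jane):
    # bind each role to its filler, then bind the predicate with the
    # resulting role-filler pairs.
    bite, agent, patient = (random_codevector() for _ in range(3))
    spot, jane, fred = (random_codevector() for _ in range(3))

    proposition = cdt_bind([bite,
                            cdt_bind([agent, spot]),
                            cdt_bind([patient, jane])])

    # A proposition differing in one filler still overlaps well above
    # chance, so structural similarity is visible without unpacking
    # the parts.
    other = cdt_bind([bite,
                      cdt_bind([agent, spot]),
                      cdt_bind([patient, fred])])
    print((proposition & other).sum())  # well above the ~1 chance level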






