Building large-scale hierarchical models of the world...

Dmitri Rachkovskij ml_conn at infrm.kiev.ua
Wed Feb 7 16:00:18 EST 2001


Keywords: analogy, analogical mapping, analogical retrieval, APNN,
associative-projective neural networks, binary coding, binding,
categories, chunking, compositional distributed representations,
concepts, concept hierarchy, connectionist symbol processing,
context-dependent thinning, distributed memory, distributed
representations, Hebb, long-term memory, nested representations,
neural assemblies, part-whole hierarchy, representation of structure,
sparse coding, taxonomy hierarchy, thinning, working memory, world model

Dear Colleagues,

The following paper draft (abstract enclosed):

Dmitri Rachkovskij & Ernst Kussul:
"Building large-scale hierarchical models of the world
with binary sparse distributed representations"

is available at
http://cogprints.soton.ac.uk/documents/disk0/00/00/12/87/index.html
or by the ID code: cog00001287 at http://cogprints.soton.ac.uk/

Comments are welcome!

Thank you and best regards,
Dmitri Rachkovskij
*************************************************************************
Dmitri A. Rachkovskij, Ph.D.                       Net: dar at infrm.kiev.ua
Senior Researcher,
V.M.Glushkov Cybernetics Center,                   Tel: 380 (44) 266-4119
Pr. Acad. Glushkova 40,
Kiev 03680, UKRAINE                                Fax: 380 (44) 266-1570
*************************************************************************
Encl: Abstract

Many researchers agree on the basic architecture of the
"world model" in which the knowledge about the world required
to organize an agent's intelligent behavior is represented.
However, most proposals for implementing such a model are far
from plausible, both from the computational and the
neurobiological point of view.
    Implementation ideas based on distributed connectionist
representations offer a huge information capacity and flexible
representation of similarity. They also allow the use of a
distributed neural network memory that provides excellent
storage capacity for sparse patterns and naturally forms a
generalization (or concept, or taxonomy) hierarchy under the
Hebbian learning rule. However, for a long time distributed
representations suffered from the "superposition catastrophe",
which prevented nested part-whole (or compositional) hierarchies
from being handled. Moreover, the statistical nature of
distributed representations demands high dimensionality and a
great deal of memory, even for small tasks.
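The superposition catastrophe mentioned above can be seen in a toy sketch (not from the paper; the dimensionality, sparsity, and item names are invented for illustration): superimposing unbound sparse binary codes by bitwise OR discards all grouping information, so differently grouped scenes become indistinguishable.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 10_000, 100  # illustrative dimensionality and number of active bits

def random_sparse():
    """Random binary vector with K of N bits set."""
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, size=K, replace=False)] = True
    return v

red, blue, circle, square = (random_sparse() for _ in range(4))

# Plain superposition (bitwise OR) is commutative and associative,
# so it cannot record which feature went with which object:
scene1 = (red | circle) | (blue | square)   # "red circle, blue square"
scene2 = (blue | circle) | (red | square)   # "blue circle, red square"

print(np.array_equal(scene1, scene2))  # True -- the two scenes collapse
```

This is exactly the failure that a binding operation (discussed later in the abstract) is meant to repair.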
    Local representations are vivid, pictorial, and easily
interpretable; they allow easy manual construction of both
types of hierarchy and economical computer simulation of toy
tasks. The problems of local representations show up when
scaling to real-world models. Such models include an enormous
number of associated items that are met in various contexts
and situations, comprise parts of other items, form a multitude
of intersecting multilevel part-whole hierarchies, and belong
to various category-based hierarchies with fuzzy boundaries
formed naturally through interaction with the environment. It
appears that using local representations in such models becomes
less economical than using distributed ones, and it is unclear
how to solve their inherent problems under reasonable
requirements on memory size and speed (e.g., at the level of
the mammalian brain).
    We discuss the architecture of Associative-Projective
Neural Networks (APNNs), which is based on binary sparse
distributed representations of fixed dimensionality for items
of various complexity and generality. Such representations are
rather neurobiologically plausible; however, we consider the
main biologically relevant feature of APNNs to be their promise
of scaling up to a full-sized, adequate model of the real world,
a feature lacking in implementations of other schemes.
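To give a rough flavor of why fixed dimensionality matters (a hypothetical sketch; the parameters and item names are ours, not the paper's): binary sparse vectors of one fixed size let items of any complexity or generality be compared by a single cheap overlap count, and composites built by superposition automatically resemble their parts.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 10_000, 100  # fixed dimensionality; K active bits (sparse)

def random_sparse():
    """Random binary code with K of N bits set."""
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, size=K, replace=False)] = True
    return v

def similarity(a, b):
    """Overlap of active bits: the same cheap comparison works for
    items of any complexity because the dimensionality never grows."""
    return int((a & b).sum())

dog, cat, stone, animal = (random_sparse() for _ in range(4))

# Composite items built by superposition share bits with their parts,
# so "dog" and "cat" codes that both include "animal" are similar:
dog_item = dog | animal
cat_item = cat | animal

print(similarity(dog_item, cat_item) > similarity(dog_item, stone))
```

The overlap between `dog_item` and `cat_item` is dominated by the shared `animal` bits, while the overlap with the unrelated `stone` code is only chance-level.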
    As in other schemes of compositional distributed
representations, such as Plate's HRRs and Kanerva's BSCs, an
on-the-fly binding procedure is proposed for APNNs. It
overcomes the superposition catastrophe, permitting the order
and grouping of component items in structures to be represented.
The APNN representations allow a simple estimation of the
similarity of structures (such as analogical episodes), as well
as the finding of various kinds of associations based on the
context-dependent similarity of these representations. A
structured, distributed, auto-associative neural network of the
feedback type is used as long-term memory, wherein
representations of models organized into both types of hierarchy
are built. Examples of schematic APNN architectures and
processes for recognition, prediction, reaction, analogical
reasoning, and other tasks required for the functioning of an
intelligent system, as well as APNN implementations, are
considered.
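A minimal sketch of the flavor of such binding by thinning (a loose simplification with invented parameters, not the authors' exact context-dependent thinning algorithm): superimpose the components, then keep only those active bits that also appear in permuted copies of the superposition, so the surviving bits depend on all components jointly.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 10_000, 100  # illustrative dimensionality; K active bits per item

def random_sparse():
    """Random binary code with K of N bits set."""
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, size=K, replace=False)] = True
    return v

# Fixed random permutations stand in for the thinning wiring.
perms = [rng.permutation(N) for _ in range(5)]

def bind(*components):
    """Simplified thinning-style binding: superimpose the components,
    then keep only active bits also present in a permuted copy of the
    superposition.  The surviving subset depends on *all* components,
    i.e. on the context, so the same item is coded differently in
    different bindings."""
    z = np.logical_or.reduce(components)
    mask = np.zeros(N, dtype=bool)
    for p in perms:
        mask |= z[p]
    return z & mask

red, blue, circle, square = (random_sparse() for _ in range(4))

# "circle" is represented differently depending on its binding partner:
red_circle = bind(red, circle)
blue_circle = bind(blue, circle)

# Grouping now survives superposition -- the two scenes differ:
scene1 = bind(red, circle) | bind(blue, square)
scene2 = bind(blue, circle) | bind(red, square)
```

Because the bound code is a thinned subset of the plain superposition, it stays sparse and still overlaps its components, which is what makes the similarity-based retrieval described above possible.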
---------------------------------------------------------------------