Generativity and systematicity, learning: Report announcement
Olivier Brousse
olivier at dendrite.cs.colorado.edu
Thu Oct 14 15:06:56 EDT 1993
The following report is now available via anonymous
ftp from the archive.cis.ohio-state.edu server, in directory
pub/neuroprose, file brousse.sysgen.ps.Z.
Pages: 180; size: 893720 bytes.
Title: Generativity and Systematicity
in Neural Network Combinatorial Learning
It is also available via surface mail as
Technical Report CU-CS-676-93, for a small fee ($5, I believe),
from:
Attn: Vicki Emken
Department of Computer Science, Box 430
University of Colorado at Boulder
Boulder, CO 80309-0430, U.S.A.
Abstract:
This thesis addresses a set of problems faced by connectionist
learning that originate from the observation that connectionist
cognitive models lack two fundamental properties of the mind:
generativity, stemming from the apparently unbounded cognitive
competence one can exhibit, and systematicity, stemming from the
symmetries that exist within that competence. These properties have
seldom been seen in neural network models, which have typically
suffered from inadequate generalization, as exemplified both by the
small number of correct generalizations relative to training-set size
and by heavy interference between newly learned items and previously
learned information.
Symbolic theories, which argue that mental representations have
syntactic and semantic structure built from structured combinations
of symbolic constituents, can in principle account for these
properties: both arise from pairing structured semantic content with
a generative and systematic syntax. This thesis studies whether
connectionism, which argues that symbolic theories provide only
approximate cognitive descriptions that can be made precise solely at
a sub-symbolic level, can also account for them.
Taking a cue from the domains in which human learning most
dramatically displays generativity and systematicity, the answer is
hypothesized to be positive for domains with combinatorial structure.
Such domains are studied, and a measure of combinatorial complexity
defined in terms of information entropy is used. Experiments are then
designed to test the hypothesis. It is found that a basic
connectionist model, trained on a very small fraction of a simple
combinatorial domain (recognizing letter sequences), can correctly
generalize to large numbers of novel sequences. These numbers grow
exponentially as the combinatorial complexity of the domain grows.
The effect is even more dramatic for virtual generalizations: novel
items that, although not correctly generalized at first, can be
learned in a few presentations while leaving performance on
previously learned items intact.
The experiments are repeated with fully distributed representations,
and the results show that performance is not degraded. When weight
elimination is added, perfect systematicity is obtained. A formal
analysis is then attempted in a simpler case; the more general case
is treated with contribution analysis.
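The report's exact entropy measure is not reproduced in this
announcement. As a hedged illustration only, the sketch below assumes
the standard information-theoretic measure for a domain built from
independent slots: the entropy is the sum, over slots, of log2 of the
number of possible fillers, and the domain size is 2 to that power.
Function and variable names are hypothetical, not taken from the
report.

import math

def combinatorial_entropy(slot_sizes):
    # Entropy in bits of a domain whose items are formed by choosing
    # one filler independently for each slot; slot_sizes[i] is the
    # number of possible fillers for slot i.
    return sum(math.log2(n) for n in slot_sizes)

def domain_size(slot_sizes):
    # Total number of distinct items expressible in the domain.
    size = 1
    for n in slot_sizes:
        size *= n
    return size

# Example: sequences of four letters over a 26-letter alphabet.
slots = [26, 26, 26, 26]
H = combinatorial_entropy(slots)   # 4 * log2(26), about 18.8 bits
print("entropy: %.1f bits, items: %d" % (H, domain_size(slots)))

Since domain_size equals 2**H, the number of expressible items grows
exponentially as slots are added, which is the regime in which the
abstract reports generalization counts growing exponentially with
combinatorial complexity.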
To retrieve and print:
unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: (your e-mail address)
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
ftp> binary
ftp> get brousse.sysgen.ps.Z
200 PORT command successful.
ftp> quit
unix> zcat brousse.sysgen.ps.Z | lpr
or, since lpr's -s flag spools a symbolic link and therefore needs a
real file rather than a pipe:
unix> zcat brousse.sysgen.ps.Z > brousse.sysgen.ps
unix> lpr -s brousse.sysgen.ps
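To preview on screen rather than print (this assumes an X display and
the ghostview PostScript previewer; substitute your local viewer):
unix> zcat brousse.sysgen.ps.Z > brousse.sysgen.ps
unix> ghostview brousse.sysgen.ps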
- Olivier Brousse
olivier at cs.colorado.edu