PhD Thesis Available

Randall O'Reilly oreilly at flies.mit.edu
Tue Sep 17 17:20:47 EDT 1996


My PhD thesis is available for anonymous FTP download:

ftp://hydra.psy.cmu.edu/pub/user/oreilly/oreilly_thesis.tar.gz

It is 1,085,460 bytes and un-tars into roughly 6 MB of PostScript
files, each under 1 MB.

--------------------------

The LEABRA Model of Neural Interactions and Learning in the Neocortex

			 Randall C. O'Reilly

	       Center for the Neural Basis of Cognition
		       Department of Psychology
		      Carnegie Mellon University

There is evidence that the specialized neural processing systems in
the neocortex, which are responsible for much of human cognition,
arise from the action of a relatively general-purpose learning
mechanism.  I propose that such a neocortical learning mechanism can
be best understood as the combination of error-driven and
self-organizing (Hebbian associative) learning.  This model of
neocortical learning, called LEABRA (local, error-driven and
associative, biologically realistic algorithm), is computationally
powerful, has important implications for psychological models, and is
biologically feasible.  The thesis begins with an evaluation of the
strengths and limitations of current neural network learning
algorithms as models of a neocortical learning mechanism according to
psychological, biological, and computational criteria.  I argue that
error-driven (e.g., backpropagation) learning is a reasonable
computational and psychological model, but it is biologically
implausible.  I show that backpropagation can be implemented in a
biologically plausible fashion by using interactive (bi-directional,
recurrent) activation flow, which is known to exist in the neocortex,
and which has proven important in accounting for psychological data.
However, the interactivity required for biological and psychological
plausibility significantly impairs the network's ability to respond
systematically to novel stimuli, leaving it an inadequate psychological
model (e.g., of nonword reading).  I propose that the neocortex
solves this problem by using inhibitory activity regulation and
Hebbian associative learning, the computational properties of which
have been explored in the context of self-organizing learning models.
I show that by introducing these properties into an interactive
(biologically plausible) error-driven network, one obtains a model of
neocortical learning that: 1) provides a clear computational role for
a number of biological features of the neocortex; 2) behaves
systematically on novel stimuli, and exhibits transfer to novel tasks;
3) learns rapidly in networks with many hidden layers; 4) provides
flexible access to learned knowledge; 5) shows promise in accounting
for psychological phenomena such as the U-shaped curve in
over-regularization of the past-tense inflection; 6) has a number of
other desirable properties.
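The central idea above, a single weight update combining an error-driven
term with a Hebbian associative term, can be sketched roughly as follows.
This is an illustrative sketch only, not the actual LEABRA update rule;
the function name, the delta-rule form of the error term, the Hebbian
form with weight decay, and the mixing parameter `k_hebb` are all
assumptions made for the example.

```python
# Illustrative sketch: mixing an error-driven (delta-rule) term with a
# Hebbian (associative) term in one weight update.  NOT the actual
# LEABRA rule; `k_hebb` and all names are assumptions for illustration.

def combined_update(w, x, y, t, lr=0.1, k_hebb=0.01):
    """Return an updated weight given presynaptic activity x,
    postsynaptic activity y, and target activity t."""
    err_term = (t - y) * x     # error-driven component (delta rule)
    hebb_term = y * (x - w)    # simple Hebbian component with decay toward x
    return w + lr * (err_term + k_hebb * hebb_term)

# One update step for a single synapse:
w = 0.5
w = combined_update(w, x=1.0, y=0.2, t=1.0)
```

Setting `k_hebb` to zero recovers pure error-driven learning, while a
small positive value adds the self-organizing bias the abstract argues
for.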

---------------------------------------------
Note that I am now doing a postdoc at MIT:

Center for Biological and Computational Learning
Department of Brain and Cognitive Sciences
E25-210, MIT
Cambridge, MA 02139
oreilly at ai.mit.edu


