Papers and PC demo available

Jaap Murre jaap.murre at mrc-apu.cam.ac.uk
Fri Dec 16 08:46:00 EST 1994


The following three files have recently (15-12-1994) been added to our
ftp site (ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre):

File 1:
nnga1.ps   Happel, B.L.M., & J.M.J. Murre (1994). Design and
           evolution of modular neural network architectures. Neural
           Networks, 7, 985-1004. (About 0.5 Mb; ps.Z version is
           recommended.)

Abstract:  To investigate the relations between structure and function in
both artificial and natural neural networks, we present a series of
simulations and analyses with modular neural networks. We suggest a
number of design principles in the form of explicit ways in which neural
modules can cooperate in recognition tasks. These results may
supplement recent accounts of the relation between structure and
function in the brain. The networks used consist of several modules,
standard subnetworks that serve as higher-order units with a distinct
structure and function. The simulations rely on a particular network
module called CALM (Murre, Phaf, and Wolters, 1989, 1992). This
module, developed mainly for unsupervised categorization and learning,
is able to adjust its local learning dynamics. The way in which modules
are interconnected is an important determinant of the learning and
categorization behaviour of the network as a whole. Based on arguments
derived from neuroscience, psychology, computational learning theory,
and hardware implementation, a framework for the design of such
modular networks is laid out. A number of small-scale simulation
studies show how intermodule connectivity patterns implement 'neural
assemblies' (Hebb, 1949) that induce a particular category structure in
the network. Learning and categorization improve as the induced
categories are more compatible with the structure of the task domain. In
addition to structural compatibility, two other principles of design are
proposed that underlie information processing in interactive activation
networks: replication and recurrence. 
      Because a general theory for relating network architectures to
specific neural functions does not exist, we extend the biological
metaphor of neural networks by applying genetic algorithms (a
biocomputing method for search and optimization based on natural
selection and evolution) to search for optimal modular network
architectures for learning a visual categorization task. The best
performing network architectures appeared to reproduce some of
the overall characteristics of the natural visual system, such as the
organization of coarse and fine processing of stimuli in separate
pathways. A potentially important result is that a genetically defined
initial architecture can not only enhance learning and recognition
performance, but it can also induce a system to better generalize its
learned behaviour to instances never encountered before. This may
explain why for many vital learning tasks in organisms only a minimal
exposure to relevant stimuli is necessary.
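The genetic-algorithm search over modular architectures described above
can be sketched in miniature. The following is an illustrative toy only:
it evolves a binary inter-module connectivity matrix with one-point
crossover and bit-flip mutation, using a made-up target-pattern fitness
in place of the paper's actual fitness (performance of the decoded CALM
network on a visual categorization task). All sizes, parameters, and the
fitness function are assumptions for demonstration.

```python
import random

random.seed(0)

N_MODULES = 4                        # toy number of modules
GENOME_LEN = N_MODULES * N_MODULES   # one bit per directed inter-module link

def fitness(genome):
    # Hypothetical stand-in fitness: reward matching a fixed target
    # connectivity pattern. In the paper, fitness would instead come from
    # training the decoded modular network on a categorization task.
    target = [1, 0, 0, 1,
              0, 1, 1, 0,
              0, 1, 1, 0,
              1, 0, 0, 1]
    return sum(g == t for g, t in zip(genome, target))

def evolve(pop_size=20, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]               # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fittest genomes survive unchanged each generation, the best
fitness is monotonically non-decreasing over the run.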

File 2:
chaos1.ps  Happel, B.L.M., & J.M.J. Murre (submitted). Evolving
           complex dynamics in modular interactive neural networks.
           Submitted to Neural Networks. (This is a large file: 1.5 Mb!
           Retrieve the ps.Z version if possible.)

Abstract:  Computational simulation studies, carried out within a
general framework of modular neural network design, demonstrate that
networks consisting of many interacting modules provide a variety of
different neural processing principles. The dynamics underlying these
behaviors range from simple linear separation of input vectors in
individual modules, to oscillations, evoking chaotic regimes in the
activity evolution of a network. As opposed to static representations in
conventional neural network models, information in oscillatory networks
is represented as space-time patterns of activity. Chaos in a neural
network can serve as: (i) a novelty filter, (ii) explorative deterministic
noise, (iii) a fundamental form of neural activity that provides
continuous, sequential access to memory patterns, and (iv) a mechanism
that underlies the formation of complex categories. An experiment in the
artificial evolution of modular neural architectures demonstrates that by
manipulating modular topology and parameters governing local learning
and activation processes, "genetic algorithms" can effectively explore
complex interactive dynamics to construct efficient, modular neural
architectures for pattern categorization tasks. A particularly striking
result is that coupled, oscillatory circuits were installed by the genetic
algorithm, inducing the formation of fractal category boundaries.
Dynamic representations in these networks can significantly reduce
sequential interference due to overlapping static representations in
learning neural networks.
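A minimal sketch of the oscillatory regimes mentioned above: this is not
the CALM module or the networks in the paper, just a generic two-unit
coupled circuit (an assumed rotate-and-squash update) whose state never
settles into a static point but cycles, so "information" lives in the
space-time pattern of activity rather than in a fixed attractor.

```python
import math

def step(x, y, gain=1.5, theta=math.pi / 3):
    # The linear part rotates the 2-D state; tanh bounds the activity.
    # With gain > 1 the origin is unstable, so trajectories settle on a
    # bounded limit cycle instead of a static fixed point.
    u = gain * (math.cos(theta) * x - math.sin(theta) * y)
    v = gain * (math.sin(theta) * x + math.cos(theta) * y)
    return math.tanh(u), math.tanh(v)

def simulate(steps=100, x=0.1, y=0.0):
    trace = []
    for _ in range(steps):
        x, y = step(x, y)
        trace.append(x)       # record one unit's activity over time
    return trace

trace = simulate()
# Count zero crossings: sustained sign changes indicate oscillation,
# not convergence to a static representation.
sign_changes = sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)
```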

File 3:
The above two papers, among others, describe a digit recognition
network. A demonstration version of this can be retrieved for PCs (486
DX recommended):

digidemo.zip           (unzip with PKUNZIP; contains several files)

Some documentation is included with the program. One of its features is
retraining on digits that are misclassified. Only the misclassified
digits are retrained, without catastrophic interference with the other
digits.
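The selective-retraining loop can be sketched generically. This is a
plain multiclass perceptron on made-up bit patterns, not the CALM-based
demo network, and all patterns, sizes, and learning-rate values here are
illustrative assumptions; it only shows the control flow of "find the
wrong digits, retrain on those alone, repeat until all are correct".

```python
import random

random.seed(1)

# Hypothetical toy "digit" bit patterns (3x2 glyphs), one per class.
PATTERNS = {
    0: (1, 1, 1, 0, 1, 1),
    1: (0, 1, 0, 1, 0, 1),
    2: (1, 0, 1, 1, 1, 0),
}
N_INPUTS = 6

# One weight vector per class, small random initialization.
weights = [[random.uniform(-0.5, 0.5) for _ in range(N_INPUTS)]
           for _ in PATTERNS]

def predict(x):
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]
    return scores.index(max(scores))

def retrain_errors(lr=0.1, max_rounds=50):
    """Perceptron-style updates applied ONLY to misclassified patterns."""
    for _ in range(max_rounds):
        wrong = [(label, x) for label, x in PATTERNS.items()
                 if predict(x) != label]
        if not wrong:
            return True                            # all digits recognized
        for label, x in wrong:
            pred = predict(x)
            for i in range(N_INPUTS):
                weights[label][i] += lr * x[i]     # strengthen correct class
                weights[pred][i] -= lr * x[i]      # weaken the wrong winner
    return False

converged = retrain_errors()
```

Correctly classified digits receive no weight updates in this loop; the
interference-avoidance claimed for the demo itself rests on the modular
CALM architecture rather than on this generic update rule.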

With questions and remarks, contact either Bart Happel at Leiden
University (happel at rulfsw.leiden.univ) or Jaap Murre at the MRC
Applied Psychology Unit (jaap.murre at mrc-apu.cam.ac.uk).




