catastrophic interference of BP

Dr Neil Burgess - Anatomy UCL London ucganlb at ucl.ac.uk
Tue Dec 20 12:54:01 EST 1994


>> Studies of catastrophic interference in BP networks are
>> interesting when considering such a network as a model of some human
>> (or animal) memory system. 
>> Is there any reason for doing that?
>> Neil

> So far Jay McClelland has replied: his recently advertised 
> tech. report provides an example of useful consideration of cat. int. with respect to
> the possible existence of complementary learning systems in the brain, and attempts 
> to distinguish between the specific properties of BP and the more general question.
> Jaap Murre also replied that he partially addresses the problem in [1].

    Generally, I think that caution should be exercised in generalising
across artificial learning mechanisms. It may be that learning within a
system with fixed parameters, mediated by iterative minimisation of some
global attribute (e.g. sum-squared error), will tend to show interference
with `catastrophic' characteristics, although how this manifests itself
will depend on the details of the algorithm.
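To make this concrete, here is a toy construction of my own (not taken from any of the reports cited; the pattern sets, network size and learning rate are arbitrary choices): a small two-layer network trained by back-propagation to convergence on one pattern set A, then trained only on a second set B. Re-testing on A typically shows a sharp rise in error - the `catastrophic' signature.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyBP:
    """Minimal 2-layer sigmoid network trained by batch back-propagation."""
    def __init__(self, n_in, n_hid, n_out):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out))

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)
        return sigmoid(self.h @ self.W2)

    def train(self, X, T, epochs=2000, lr=0.5):
        for _ in range(epochs):
            Y = self.forward(X)
            d2 = (Y - T) * Y * (1 - Y)                   # output-layer delta
            d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)  # hidden-layer delta
            self.W2 -= lr * self.h.T @ d2
            self.W1 -= lr * X.T @ d1

    def sse(self, X, T):
        return float(np.sum((self.forward(X) - T) ** 2))

# Two disjoint pattern sets over 4 binary inputs (arbitrary examples).
XA = np.array([[0,0,0,1],[0,0,1,0],[0,1,0,0],[1,0,0,0]], float)
TA = np.array([[0],[1],[1],[0]], float)
XB = np.array([[1,1,0,0],[0,0,1,1],[1,0,1,0],[0,1,0,1]], float)
TB = np.array([[1],[0],[0],[1]], float)

net = TinyBP(4, 8, 1)
net.train(XA, TA)
err_A_before = net.sse(XA, TA)   # small after training on A
net.train(XB, TB)                # sequential training on B alone, no interleaving
err_A_after = net.sse(XA, TA)    # error on A typically jumps
print(err_A_before, err_A_after)
```

The point of the sketch is only that minimising a global error on B freely moves the same shared weights that encoded A.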
    But I would be surprised if other algorithms, e.g. learning by
piecemeal constructive algorithms (where extra units are added for a
specific local task, as in [2]), behaved like that - so, for example, we
might not expect song-learning in birds (associated with neural growth)
necessarily to show the same type of interference.
    I also suspect that the characteristics of interference differ between
iterative algorithms in which the `training set' must be presented many
times (e.g. BP) and `one-shot' learning algorithms (e.g. the Hopfield model).
In the former case there is obviously a problem of how to interleave new
data into the training set; in the latter there is no such problem,
and slight variations of the `Hebbian' learning rule can produce
imprinting, primacy, recency or combinations of the above [3].
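To sketch the one-shot case (again a toy of my own, not the model of [3]; the size N, list length P and decay factor are arbitrary): a Hopfield-style network whose Hebbian rule decays the old weight matrix by a factor lambda each time a new pattern is stored in a single shot. With lambda < 1 the network acts as a palimpsest: after a long list the most recent items are stable attractors while the earliest are forgotten - a graceful recency effect, with no interleaving of old items required.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 100          # number of units
P = 30           # list length, well above the ~0.14*N Hopfield capacity
patterns = rng.choice([-1.0, 1.0], size=(P, N))

lam = 0.9        # decay factor: old traces fade, giving recency
W = np.zeros((N, N))
for p in patterns:                     # one-shot: each pattern seen once
    W = lam * W + np.outer(p, p) / N   # decaying Hebbian outer-product rule
np.fill_diagonal(W, 0.0)               # no self-connections

def recall(p, steps=20):
    """Synchronous sign-threshold dynamics from starting state p."""
    s = p.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                # break ties arbitrarily
    return s

def overlap(a, b):
    return float(a @ b) / N            # 1.0 means perfect recall

recent = overlap(recall(patterns[-1]), patterns[-1])  # last item: stable
oldest = overlap(recall(patterns[0]), patterns[0])    # first item: forgotten
print(oldest, recent)
```

Setting lam = 1 recovers the standard rule, which instead fails abruptly once the list exceeds capacity; tuning how the trace decays is what trades imprinting against primacy and recency.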
    Given the availability of alternatives, it is not clear that BP should
always be the canonical choice for modelling learning and memory. It is
certainly not the easiest to motivate biologically.

Merry Christmas,

Neil

[1] report:  ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/hyper1.ps
[2] M. R. Frean, `The Upstart algorithm: A method for
  constructing and training feedforward neural networks',
  {\it Neural Computation,} {\bf 2}, 198-209 (1990).
[3] `List Learning in Neural Networks', Neil Burgess, J. L. Shapiro
  and M. A. Moore, {\it Network} {\bf 2} 399-422, 1991.
