About sequential learning (or interference)

DeLiang Wang dwang at cis.ohio-state.edu
Mon Dec 12 10:50:23 EST 1994


It was reported some time ago that multilayer perceptrons suffer from the
problem of so-called "catastrophic interference", meaning that later training
destroys previously acquired "knowledge" (see McCloskey & Cohen, Psychol. of
Learning and Motivat. 24, 1990; Ratcliff, Psychol. Rev. 97, 1990). This seems
to be a serious problem if we want to use neural networks both as a stable
knowledge store and as a long-term problem solver.
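To make the effect concrete, here is a rough sketch in Python/NumPy of
sequential training on two sets of associations with no rehearsal of the
first set (the toy network sizes, learning rate, and random patterns are my
own illustrative choices, not taken from the cited studies); the error on
the first set typically climbs back up after the second set is learned.

# Minimal sketch of catastrophic interference in a small MLP (NumPy).
# Network sizes, learning rate, and toy patterns are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(W1, W2, X, T, epochs=2000, lr=0.5):
    # Plain batch gradient descent on squared error.
    for _ in range(epochs):
        H = sigmoid(X @ W1)            # hidden activations
        Y = sigmoid(H @ W2)            # outputs
        dY = (Y - T) * Y * (1 - Y)     # output deltas
        dH = (dY @ W2.T) * H * (1 - H) # hidden deltas
        W2 -= lr * H.T @ dY
        W1 -= lr * X.T @ dH
    return W1, W2

def error(W1, W2, X, T):
    Y = sigmoid(sigmoid(X @ W1) @ W2)
    return np.mean((Y - T) ** 2)

n_in, n_hid, n_out = 8, 6, 4
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))

# Two disjoint sets of input->target associations ("task A", "task B").
XA = rng.integers(0, 2, (5, n_in)).astype(float)
TA = rng.integers(0, 2, (5, n_out)).astype(float)
XB = rng.integers(0, 2, (5, n_in)).astype(float)
TB = rng.integers(0, 2, (5, n_out)).astype(float)

W1, W2 = train(W1, W2, XA, TA)
print("error on A after training on A:", error(W1, W2, XA, TA))
W1, W2 = train(W1, W2, XB, TB)     # sequential training, no rehearsal of A
print("error on A after training on B:", error(W1, W2, XA, TA))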

The problem seems to exist in associative memory models as well, even though
the simple Hebbian (outer-product) prescription of the Hopfield net can easily
incorporate more patterns as long as they are within the capacity limit. But
we all know that
the original Hopfield net does not work well for correlated patterns, which
are the most interesting ones for real applications. Kanter and Sompolinsky
proposed a prescription to tackle the problem (Phys. Rev. A, 1987). 
Their prescription, however, requires nonlocal learning. 
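For concreteness, here is how I would write the two storage prescriptions
for +/-1 patterns as a NumPy sketch (my reading of the pseudoinverse idea;
generating the correlated patterns as noisy copies of a single prototype is
my own illustrative choice). The Hebb rule is purely local, whereas the
pseudoinverse-style rule needs the inverse of the pattern overlap matrix,
i.e. global information about all stored patterns at once.

# Two storage prescriptions for +/-1 patterns; xi has shape (p, N).
import numpy as np

def hebbian_weights(xi):
    # Local Hebb rule: W_ij = (1/N) sum_mu xi_i^mu xi_j^mu, zero diagonal.
    p, N = xi.shape
    W = xi.T @ xi / N
    np.fill_diagonal(W, 0.0)
    return W

def pseudoinverse_weights(xi):
    # Pseudoinverse-style rule (my paraphrase of the prescription):
    # W = (1/N) xi^T C^{-1} xi, with overlaps C_{mu,nu} = (1/N) xi^mu . xi^nu.
    # Inverting C couples all patterns -- this is the nonlocal part.
    p, N = xi.shape
    C = xi @ xi.T / N
    W = xi.T @ np.linalg.inv(C) @ xi / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=20):
    # Synchronous +/-1 updates: sign of the local field.
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1.0, -1.0)
    return state

rng = np.random.default_rng(1)
N, p = 100, 10
prototype = rng.choice([-1.0, 1.0], N)
flips = rng.random((p, N)) < 0.15    # correlated: noisy copies of a prototype
xi = np.where(flips, -prototype, prototype)

for name, W in [("Hebb", hebbian_weights(xi)),
                ("pseudoinverse", pseudoinverse_weights(xi))]:
    stable = sum(np.array_equal(recall(W, x.copy()), x) for x in xi)
    print(name, "patterns recalled exactly:", stable, "/", p)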

Diederich and Opper (1987, Phys. Rev. Lett.) later proposed a local,
iterative learning rule (similar to perceptron learning) which they show
converges to that prescription (essentially the old idea of
orthogonalization). According to their paper, to learn a new pattern one
needs to bring in all previously acquired patterns during iterative training
in order to make all of the patterns converge to the desired prescription.
Because of this learning scheme, I suspect that the algorithm of Diederich
and Opper also suffers from "catastrophic forgetting".
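To make the point concrete, here is a sketch of such a perceptron-like,
local, iterative storage rule as I read it (the stability threshold kappa = 1
and the sweep schedule are my own illustrative assumptions, not details taken
from Diederich and Opper's paper). Note that the training loop sweeps over
every stored pattern, old and new alike, which is precisely the rehearsal
requirement that worries me.

# Perceptron-like local iterative storage rule (sketch; kappa and the
# sweep schedule are illustrative assumptions).
import numpy as np

def iterative_store(patterns, N, W=None, max_sweeps=200, kappa=1.0):
    # patterns: list of +/-1 vectors of length N.
    W = np.zeros((N, N)) if W is None else W.copy()
    for _ in range(max_sweeps):
        changed = False
        for xi in patterns:              # every stored pattern is revisited
            h = W @ xi                   # local fields h_i = sum_j W_ij xi_j
            unstable = xi * h < kappa    # units whose stability is too low
            if np.any(unstable):
                # Hebb-like increment restricted to the unstable units (j != i).
                dW = np.outer(unstable * xi, xi) / N
                np.fill_diagonal(dW, 0.0)
                W += dW
                changed = True
        if not changed:                  # a full sweep with no updates: done
            break
    return W

rng = np.random.default_rng(2)
N = 100
stored = [rng.choice([-1.0, 1.0], N) for _ in range(8)]
W = iterative_store(stored, N)

# To add a 9th pattern, the sweep again runs over *all* nine patterns,
# i.e. the previously acquired patterns must be rehearsed during training.
stored.append(rng.choice([-1.0, 1.0], N))
W = iterative_store(stored, N, W)
print("all stored patterns stable:",
      all(np.all(xi * (W @ xi) >= 1.0) for xi in stored))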

Is that a fair assessment of the problem?  Has any major effort been made to
address it?

DeLiang Wang


