Why does the error rise in an SRN?

Pankaj Mehra p-mehra at uiuc.edu
Fri Apr 3 14:45:54 EST 1992


In response to:
Simon Dennis <mav at cs.uq.oz.au>
                                      Something which seems to happen
 with surprising frequency is that the error will decrease for a period
 and then will start to increase again.
 
 Questions:
 (3) Why does it happen?
 ---------
Ray Watrous <watrous at cortex.siemens.com>
 An increase in error can occur with fixed-step-size algorithms
 ... a well-known property of such algorithms, but seems to be
 encountered in practice more frequently with recurrent networks.
 ... small changes in some regions of weight
 space can have large effects on the error because of the nonlinear
 feedback in the recurrent network.
 ---------
Minh.Tue.Vo at cs.cmu.edu
 I was able to reduce the effect somewhat by tweaking the learning rate
 and the momentum, but I couldn't eliminate it completely.  The TDNN
 doesn't seem to have that problem.
 ---------

Pineda (1988) explains this sensitivity to the learning rate (step size)
very well. On pages 223 and 231 of that paper, he shows that "adiabatic"
weight modification, i.e., a learning rate slow enough that the weights
change little relative to the fluctuations at the input, is important for
learning to converge.
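
For concreteness, here is a minimal sketch of the effect (not from the
original discussion; the one-unit network, the toy target sequence, and
all names are my own illustrative choices). It trains a single-unit SRN,
h[t] = tanh(w*h[t-1] + v*x[t]), with fixed-step gradient descent at two
step sizes. With the small, "adiabatic" step the error typically falls
steadily; with the large step the same network's error often decreases
for a while and then rises, the behavior Dennis describes:

  import numpy as np

  def forward(w, v, x):
      # One-unit SRN: h[t] = tanh(w*h[t-1] + v*x[t]), with h[-1] = 0.
      h = np.zeros(len(x))
      prev = 0.0
      for t in range(len(x)):
          h[t] = np.tanh(w * prev + v * x[t])
          prev = h[t]
      return h

  def loss_and_grads(w, v, x, y):
      # Sum-of-squares error and its exact gradient via backprop
      # through time (scalar weights, so the recursion is short).
      h = forward(w, v, x)
      L = np.sum((h - y) ** 2)
      dw, dv, dh = 0.0, 0.0, 0.0   # dh carries dL/dh[t] from later steps
      for t in reversed(range(len(x))):
          dh = dh + 2.0 * (h[t] - y[t])     # local error term at step t
          da = dh * (1.0 - h[t] ** 2)       # through the tanh
          dw += da * (h[t - 1] if t > 0 else 0.0)
          dv += da * x[t]
          dh = da * w                       # passed back to h[t-1]
      return L, dw, dv

  rng = np.random.default_rng(0)
  x = rng.uniform(-1.0, 1.0, 20)
  y = 0.5 * np.tanh(np.cumsum(x) / 5.0)  # an arbitrary smooth target

  for eta in (0.5, 0.01):                # large vs. small ("adiabatic") step
      w, v = 1.5, 0.5
      print("step size", eta)
      for epoch in range(201):
          L, dw, dv = loss_and_grads(w, v, x, y)
          if epoch % 50 == 0:
              print("  epoch %3d   error %.4f" % (epoch, L))
          w, v = w - eta * dw, v - eta * dv

Because the recurrent feedback through w amplifies small weight changes
in some regions of weight space (Watrous's point above), the fixed large
step overshoots there even though it was safe elsewhere.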

TDNNs avoid this problem because they are feedforward networks with tapped
delay lines: they do not exhibit the feedback dynamics of the recurrent
networks of Jordan and Elman.
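
The contrast is easy to see in code. In a TDNN unit the output at time t
depends only on a fixed window of past inputs; nothing is fed back from
one step to the next, so there is no loop for a weight perturbation to
circulate through. A sketch under that reading (the tap weights and
window length below are illustrative, not from the original post):

  import numpy as np

  def tdnn_unit(taps, x):
      # One TDNN unit: out[t] = tanh(sum_i taps[i] * x[t-i]).
      # The output never re-enters the computation, so each out[t]
      # depends on the weights only through a short feedforward path.
      T, d = len(x), len(taps)
      out = np.zeros(T)
      for t in range(T):
          window = [x[t - i] if t - i >= 0 else 0.0 for i in range(d)]
          out[t] = np.tanh(np.dot(taps, window))
      return out

  x = np.sin(np.arange(30) / 3.0)
  print(tdnn_unit(np.array([0.5, 0.3, 0.2]), x)[:5])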

Pineda, Fernando J. (1988). Dynamics and Architectures for Neural
Computation. Journal of Complexity, 4, 216-245.

-Pankaj
