Restrictions on recurrent learning

Daniel Glaser danielg at cogs.sussex.ac.uk
Wed Oct 9 07:07:27 EDT 1991


I have been working on some simple recurrent networks as defined by
Jordan (1986) and Elman (1990), and am interested in the class of temporal
regularities that they can learn. In particular, how do they compare
with the more general backpropagation through time defined by the PDP
group (1986) and Werbos (1990)?

In the Jordan/Elman nets, activation flows forward in time via
`copies' of unit activations saved from previous cycles, so during
learning the error signal propagates backwards only locally in time:
it stops at the copied context rather than flowing through earlier
time steps.
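
To make the distinction concrete, here is a minimal sketch (my own
construction, not taken from any of the papers cited below; the toy
next-item task, the weight names, and the layer sizes are invented for
illustration) of Elman-style training in Python/numpy. The context is
a frozen copy of the previous hidden state, so the hidden delta is
applied to the current step's weights but never pushed back through
earlier steps.

import numpy as np

# Rough sketch of Elman-style learning: the context layer is a saved
# copy of the previous hidden state, so error is backpropagated through
# one time step only.  (All names and sizes here are illustrative.)

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 4                 # arbitrary toy sizes
W_xh = rng.normal(0, 0.1, (n_hid, n_in))     # input   -> hidden
W_ch = rng.normal(0, 0.1, (n_hid, n_hid))    # context -> hidden
W_hy = rng.normal(0, 0.1, (n_out, n_hid))    # hidden  -> output
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented toy task: predict the next item of a random one-hot sequence.
seq = np.eye(n_in)[rng.integers(0, n_in, size=20)]
context = np.zeros(n_hid)                    # initial context is empty

for t in range(len(seq) - 1):
    x, target = seq[t], seq[t + 1]

    # Forward pass: the context enters exactly like an extra input.
    h = sigmoid(W_xh @ x + W_ch @ context)
    y = sigmoid(W_hy @ h)

    # Backward pass, local in time: the deltas stop at the context copy.
    dy = (y - target) * y * (1 - y)          # output delta (squared error)
    dh = (W_hy.T @ dy) * h * (1 - h)         # hidden delta; NOT sent back
                                             # into the previous time step
    W_hy -= lr * np.outer(dy, h)
    W_ch -= lr * np.outer(dh, context)
    W_xh -= lr * np.outer(dh, x)

    context = h.copy()                       # copy activations forward

Full backpropagation through time would instead keep the whole history
of hidden states and accumulate gradients through the context term at
every preceding step, which is exactly the part the copy mechanism
truncates.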

Does anyone know of any theoretical or empirical work on what these
different types of network can learn?

If replies are addressed to me personally, I will post a summary in
due course.

Thanks
 
Daniel.

References:

Elman, J.~L. (1990).
Finding structure in time.
{\em Cognitive Science}, {\bf 14}:179--211.

Jordan, M.~I. (1986).
Attractor dynamics and parallelism in a connectionist sequential
 machine.
In {\em Proceedings of the Eighth Annual Meeting of the Cognitive
Science Society}, Hillsdale, NJ: Erlbaum.

Rumelhart, D.~E., Hinton, G.~E., \& Williams, R.~J. (1986).
Learning internal representations by error propagation.
In D.~E. Rumelhart \& J.~L. McClelland (Eds.), {\em Parallel
  Distributed Processing: Explorations in the Microstructure of Cognition},
  volume~1, chapter~8. Cambridge, MA: MIT Press/Bradford Books.

Werbos, P.~J. (1990).
Backpropagation through time: What it does and how to do it.
{\em Proceedings of the IEEE}, {\bf 78}(10):1550--1560.


