Restrictions on recurrent learning

Gary Cottrell gary at cs.UCSD.EDU
Thu Oct 10 13:26:04 EDT 1991


Fu-Sheng Tsung and I showed that there are problems a
hidden-recurrent (Elman-style) net can learn but an output-recurrent
Jordan net cannot, in our 1989 paper at IJCNN:

Tsung, Fu-Sheng and Cottrell, G. (1989) A sequential adder using recurrent
networks. In \fIProceedings of the International Joint Conference on
Neural Networks\fP, Washington, D.C.

A similar paper with some state space analysis is in:
Cottrell, G. and Fu-Sheng Tsung (1991). Learning simple arithmetic procedures.
In J.A. Barnden & J.B. Pollack (Eds),
\fIAdvances in connectionist and neural computation theory, Vol 1:
High-level connectionist models\fP, Norwood: Ablex.

There are simple logical arguments showing that hidden-recurrent
nets are more powerful than output-recurrent nets. The bottom line:
since a Jordan-style net's only memory is its own previous output, if
the teaching signal forces the output to "forget" the input, the
network has no way to respond later to anything that requires
remembering it. An Elman net's hidden-layer recurrence, by contrast,
can retain input history independently of what the output is trained
to produce.
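The argument above can be sketched in a few lines of numpy. This is my illustrative toy, not code from either paper: the layer sizes, random weights, and tanh nonlinearity are all assumptions. The point is only the wiring difference: the Jordan step's state is the previous (teacher-forced) output, while the Elman step's state is the previous hidden vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 2, 4, 1

W_ih = rng.normal(size=(n_hid, n_in))   # input  -> hidden
W_hh = rng.normal(size=(n_hid, n_hid))  # hidden -> hidden (Elman context)
W_oh = rng.normal(size=(n_hid, n_out))  # output -> hidden (Jordan context)
W_ho = rng.normal(size=(n_out, n_hid))  # hidden -> output

def elman_step(x, h_prev):
    # Elman net: the hidden layer feeds back into itself, so it can
    # carry input history regardless of what the output is forced to be.
    h = np.tanh(W_ih @ x + W_hh @ h_prev)
    return W_ho @ h, h

def jordan_step(x, y_prev):
    # Jordan net: the ONLY recurrence is from the previous output; under
    # teacher forcing that is the teaching signal itself.
    h = np.tanh(W_ih @ x + W_oh @ y_prev)
    return W_ho @ h

# Two different first inputs, but the teaching signal at t=1 is the
# same for both -- it "forgets" which input occurred.
xa, xb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
y_teach = np.zeros(n_out)
x2 = np.ones(n_in)

# Jordan: the next step sees only (x2, y_teach), identical for both
# histories, so its responses are necessarily identical.
ja = jordan_step(x2, y_teach)
jb = jordan_step(x2, y_teach)

# Elman: the hidden states after xa and xb differ, so the net can still
# respond differently at the next step.
_, ha = elman_step(xa, np.zeros(n_hid))
_, hb = elman_step(xb, np.zeros(n_hid))
ea, _ = elman_step(x2, ha)
eb, _ = elman_step(x2, hb)
```

With these (arbitrary) weights, `ja` and `jb` are exactly equal, while `ea` and `eb` differ, which is the forgetting argument in miniature.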

Hal White also believes Elman nets are strictly more powerful than Jordan
nets, but I'm not sure he has a proof.

gary cottrell 619-534-6640 Sec'y: 619-534-5288 FAX: 619-534-7029
Computer Science and Engineering C-014
UCSD, 
La Jolla, Ca. 92093
gary at cs.ucsd.edu (INTERNET)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!gary (USENET)
gcottrell at ucsd.edu (BITNET)

