meta-learning/fixed-weight learning without LSTM?

Prokhorov, Danil (D.V.) dprokhor at ford.com
Mon Sep 3 11:12:59 EDT 2001


Dear Connectionists,

Following the two recent messages from Juergen Schmidhuber and A. Steven
Younger drawing your attention to meta-learning and fixed-weight
learning, I thought I would provide you with yet another reference and
the abstract of our recent paper on fixed-weight learning, published in
the Proceedings of the 11th Yale Workshop on Adaptive and Learning
Systems (June 4-6, 2001).

L. A. Feldkamp, D. V. Prokhorov, and T. M. Feldkamp, Conditioned
Adaptive Behavior from a Fixed Neural Network, Proceedings of the
Eleventh Yale Workshop on Adaptive and Learning Systems, New Haven,
CT, pp. 78-83, 2001.

Abstract (apologies for its brevity):

We demonstrate that a fixed-weight recurrent neural network
(RMLP-style) can be trained to exhibit input-output behavior that
depends on which of two conditioning tasks was performed a
substantial number of time steps in the past.  This behavior can also
be made to survive an intervening interference task.
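
For readers less familiar with the fixed-weight setting, the essential
point is that all adaptation is carried by the network's internal state
rather than by any weight change at run time.  The short Python sketch
below is my own illustration of that information flow, not code from
the paper; the dimensions, the random weights (which in the paper would
be set by prior training), and the cue encoding are all assumptions:

    # Minimal sketch: a fixed-weight recurrent network whose only
    # "memory" of an early conditioning cue is its hidden state.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 3, 16, 1

    # Weights are fixed at run time; random here purely for illustration.
    W_in = rng.normal(scale=0.5, size=(n_hid, n_in))
    W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))
    W_out = rng.normal(scale=0.5, size=(n_out, n_hid))

    def run(inputs):
        """Roll the fixed-weight network over a sequence of inputs."""
        h = np.zeros(n_hid)
        outputs = []
        for x in inputs:
            h = np.tanh(W_in @ x + W_rec @ h)  # hidden state is the only memory
            outputs.append(W_out @ h)
        return np.array(outputs)

    # A conditioning cue at t=0 (one of two "tasks"), then many neutral
    # steps: any dependence of the late outputs on the cue must be
    # carried forward by the hidden state h, since no weight changes.
    cue_a = [np.array([1.0, 0.0, 0.0])] + [np.zeros(n_in)] * 20
    cue_b = [np.array([0.0, 1.0, 0.0])] + [np.zeros(n_in)] * 20
    print(run(cue_a)[-1], run(cue_b)[-1])  # differ only via remembered state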

____________________________________________________________________________
These Proceedings may be difficult to obtain, however.  Please contact
Danil Prokhorov (dprokhor at ford.com) or Lee Feldkamp
(lfeldkam at ford.com) to receive an electronic copy of this paper.

Sincerely,
Danil Prokhorov

Artificial Neural Systems and Fuzzy Logic Group
Ford Research Laboratory
2101 Village Rd., MD 2036
Dearborn, MI 48121-2053, U.S.A.



