Technical report available
Stefano Nolfi
IP%IRMKANT.BITNET at VMA.CC.CMU.EDU
Wed Feb 6 12:24:05 EST 1991
The following technical report is now available. The paper
has been submitted to ICGA-91.
Send requests to stiva at irmkant.Bitnet.
E-mail comments and related references are appreciated.
AUTO-TEACHING:
NETWORKS THAT DEVELOP THEIR OWN TEACHING INPUT
Stefano Nolfi and Domenico Parisi
Institute of Psychology
CNR - Rome
E-mail: stiva at irmkant.Bitnet
ABSTRACT
Back-propagation learning (Rumelhart, Hinton and
Williams, 1986) is a useful research tool, but it has a
number of undesirable features, such as requiring the
experimenter to decide from outside what should be
learned. We describe a number of simulations of
neural networks that internally generate their own
teaching input. The networks generate the teaching
input by transforming the network input through
connection weights that are evolved using a form of
genetic algorithm. What results is an innate (evolved)
capacity not to behave efficiently in an environment
but to learn to behave efficiently in it. The analysis of
what these networks evolve to learn shows some
interesting results.
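The mechanism described above can be sketched in a few lines of code. What follows is a minimal illustration, not the authors' actual model: the one-layer architecture, the delta-rule learning step, the environmental task, and all genetic-algorithm parameters are assumptions made for the sketch. The key idea it tries to capture is that each network carries two weight sets: W, modified by learning within a lifetime, and T, which turns the input into a self-generated teaching input and is the only thing inherited (with mutation) across generations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AutoTeachingNet:
    """A one-layer network whose teaching input is produced internally
    by a second weight matrix T, rather than supplied from outside."""

    def __init__(self, n_in, n_out, lr=0.5):
        self.W = rng.normal(0.0, 1.0, (n_out, n_in))  # learned within a lifetime
        self.T = rng.normal(0.0, 1.0, (n_out, n_in))  # evolved teaching weights
        self.lr = lr

    def forward(self, x):
        return sigmoid(self.W @ x)

    def self_teach(self, x):
        """One learning step: the target is generated from the input itself."""
        target = sigmoid(self.T @ x)            # self-generated teaching input
        y = self.forward(x)
        delta = (target - y) * y * (1.0 - y)    # delta rule for a sigmoid layer
        self.W += self.lr * np.outer(delta, x)
        return float(np.mean((target - y) ** 2))

def evolve(pop_size=20, gens=20, lifetime=100, n_in=4, n_out=2):
    """Mutation-only GA over the teaching weights T (hypothetical task)."""
    def task(x):                                # assumed environmental mapping
        return sigmoid(np.array([x.sum(), -x.sum()]))

    xs = [rng.random(n_in) for _ in range(10)]
    population = [AutoTeachingNet(n_in, n_out) for _ in range(pop_size)]
    for _ in range(gens):
        scored = []
        for net in population:
            for t in range(lifetime):           # a lifetime of self-teaching
                net.self_teach(xs[t % len(xs)])
            err = float(np.mean([np.mean((task(x) - net.forward(x)) ** 2)
                                 for x in xs]))
            scored.append((err, net))
        scored.sort(key=lambda pair: pair[0])
        parents = [net for _, net in scored[:pop_size // 4]]
        # Only T is inherited (with mutation); W is reinitialised each
        # generation, so what evolves is a capacity to learn, not the
        # efficient behaviour itself.
        population = []
        for parent in parents:
            for _ in range(pop_size // len(parents)):
                child = AutoTeachingNet(n_in, n_out)
                child.T = parent.T + rng.normal(0.0, 0.05, parent.T.shape)
                population.append(child)
    return scored[0][0]
```

Note the design point the sketch is meant to make: selection pressure acts on T only through its effect on what the network learns during its lifetime, so fitness rewards good teachers rather than good innate behaviour.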
References
Rumelhart, D.E., Hinton G.E., and Williams, R.J. (1986).
Learning internal representations by error propagation. In
D.E. Rumelhart, and J.L. McClelland, (eds.), Parallel
Distributed Processing. Vol.1: Foundations. Cambridge,
Mass.: MIT Press.