Optimizing a learning rule

Yoshua Bengio yoshua at psyche.mit.edu
Thu Jan 23 12:55:40 EST 1992


Hello,

Recently, Henrik Klagges broadcast on the connectionists list
results he obtained on optimizing synaptic learning rules,
citing our technical report from last year [1] on this subject.
This report [1] did not contain any simulation results.
However, since then we have performed
several series of experiments, with interesting
results. Early results were presented last year
at Snowbird, and more recent results will be presented
at the Conference on Optimality in Biological and Artificial
Networks, to be held in Dallas, TX, Feb. 6-9.
A preprint can be obtained by anonymous ftp from iros1.umontreal.ca,
file pub/IRO/its/bengio.optim.ps.Z (compressed PostScript).

The title of the paper to be presented at Dallas is:

      On the optimization of a synaptic learning rule

by Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei.

Abstract:

This paper presents an original approach to neural modeling
based on the idea of tuning
synaptic learning rules with optimization methods.
The approach treats the synaptic modification
rule as a parametric function which has {\it local} inputs and is the
same in many neurons. Because the space
of learning algorithms is very large, we propose to use
biological knowledge about synaptic mechanisms in order to design
the form of such rules. The optimization methods used for this search do
not have to be biologically plausible, although the net result of the
search may be a biologically plausible learning rule.
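
To make this concrete, here is a minimal Python sketch of such a
parametric rule, assuming a simple polynomial form over the local
signals; the exact functional form used in the paper may differ:

import numpy as np

# Illustrative parametric synaptic rule (an assumption of this sketch,
# not necessarily the form used in the paper): the update for a synapse
# depends only on locally available signals -- presynaptic activity,
# postsynaptic activity, and the current weight -- and the parameter
# vector theta is shared by every synapse in the network.
def synaptic_rule(theta, pre, post, w):
    """Return the weight change for one synapse from local signals."""
    t0, t1, t2, t3, t4 = theta
    return t0 + t1 * pre + t2 * post + t3 * pre * post + t4 * w

# A plain Hebbian rule is the special case theta = (0, 0, 0, eta, 0):
theta_hebb = np.array([0.0, 0.0, 0.0, 0.1, 0.0])
print(synaptic_rule(theta_hebb, pre=1.0, post=0.5, w=0.2))  # prints 0.05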

In the experiments described in this paper, a local optimization method
(gradient descent) as well as a global optimization method (simulated annealing)
were used to search for new learning rules.
Estimating the parameters of a synaptic
modification rule amounts to a joint global
optimization: of the rule itself, and of the multiple networks that
learn to perform some tasks with this rule.
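
As an illustration of what this joint optimization might look like,
here is a toy Python sketch: an outer simulated-annealing loop searches
over the rule parameters theta, while an inner loop trains fresh
networks on a set of tasks with each candidate rule. The tiny one-layer
network, the modulatory signal, and all names here are assumptions of
the sketch, not the setup of the paper:

import numpy as np

rng = np.random.default_rng(0)

def synaptic_rule(theta, pre, post, mod, w):
    """Local rule as in the sketch above, extended with a modulatory
    signal `mod` (e.g. a locally available teaching signal) so the
    rule can solve supervised tasks -- an assumption of this sketch."""
    t0, t1, t2, t3, t4, t5 = theta
    return (t0 + t1 * pre + t2 * post + t3 * pre * post
            + t4 * w + t5 * mod * pre)

def train_with_rule(theta, task, n_steps=50):
    """Train a tiny one-layer network on one task with the candidate
    rule and return its final squared error (a toy stand-in for the
    tasks used in the paper)."""
    X, y = task
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(n_steps):
        for x, target in zip(X, y):
            post = np.tanh(w @ x)
            mod = target - post
            # Apply the same shared local rule at every synapse.
            w += np.array([synaptic_rule(theta, xi, post, mod, wi)
                           for xi, wi in zip(x, w)])
    return float(np.mean((np.tanh(X @ w) - y) ** 2))

def fitness(theta, tasks):
    """Joint objective: average final error of fresh networks that
    each learn a task with the same rule parameters theta."""
    return np.mean([train_with_rule(theta, t) for t in tasks])

def anneal(tasks, n_iters=100, temp0=1.0):
    """Simulated annealing over the rule parameters theta."""
    theta = rng.normal(scale=0.1, size=6)
    f = fitness(theta, tasks)
    best, best_f = theta.copy(), f
    for i in range(n_iters):
        temp = temp0 / (1.0 + i)
        cand = theta + rng.normal(scale=0.05, size=theta.shape)
        f_cand = fitness(cand, tasks)
        # Accept downhill moves always, uphill moves with a probability
        # that shrinks as the temperature decreases.
        if f_cand < f or rng.random() < np.exp(-(f_cand - f) / temp):
            theta, f = cand, f_cand
            if f < best_f:
                best, best_f = theta.copy(), f
    return best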

Experiments are described in order to assess the feasibility
of the proposed method for very simple tasks.
Experiments on classical conditioning in {\it Aplysia} yielded a rule
that allowed a network to reproduce five basic conditioning phenomena.
Experiments with two-dimensional categorization problems yielded a rule for a
network with a hidden layer that could be used to learn some simple
but non-linearly separable classification tasks.
The rule parameters were optimized for a set of classification tasks
and generalization was tested successfully on a different set of tasks.
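
Continuing the sketch above (it relies on anneal and fitness from the
previous listing), testing generalization across tasks might look as
follows; make_task is a hypothetical toy task generator, not the
benchmark used in the paper:

import numpy as np

def make_task(seed, n=20, d=2):
    """Hypothetical toy two-dimensional categorization task: random
    points labeled by a random linear boundary (a stand-in for the
    classification problems in the paper)."""
    r = np.random.default_rng(seed)
    X = r.normal(size=(n, d))
    y = np.sign(X @ r.normal(size=d))
    return X, y

# Optimize the rule on one task set, then evaluate the frozen rule on
# tasks never seen during the optimization.
train_tasks = [make_task(s) for s in range(4)]
test_tasks = [make_task(s) for s in range(100, 104)]
theta_star = anneal(train_tasks)
print("error on held-out tasks:", fitness(theta_star, test_tasks))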


Previous work:

[1] Bengio Y. and Bengio S. (1990). Learning a synaptic learning rule.
Technical Report #751, Computer Science Department, Université de Montréal.

[2] Bengio Y., Bengio S., and Cloutier J. (1991). Learning a synaptic learning
rule. IJCNN-91-Seattle.

Related work:

[3] Chalmers D.J. (1990). The evolution of learning: an experiment
in genetic connectionism. In: Connectionist Models: Proceedings
of the 1990 Summer School, pp. 81-90.



