PREPRINT: Contrastive Hebbian

Javier Movellan jm2z+ at ANDREW.CMU.EDU
Thu Apr 26 19:05:26 EDT 1990


This preprint has been placed in the neuroprose account kindly provided by Ohio State.

CONTRASTIVE HEBBIAN LEARNING IN INTERACTIVE NETWORKS

Javier R. Movellan
Department of Psychology
Carnegie Mellon University
Pittsburgh, PA 15213
email: jm2z+ at andrew.cmu.edu
Submitted to Neural Computation

Interactive networks, as defined by Hopfield (1984), Grossberg (1978),
and McClelland & Rumelhart (1981), may have an advantage over
feed-forward architectures because of their completion properties and
their flexibility in treating units as inputs or outputs. Ackley,
Hinton, and Sejnowski (1985) derived a learning rule to train Boltzmann
machines, which are discrete, interactive networks. Unfortunately,
because of the discrete stochasticity of their units, Boltzmann learning
is intolerably slow.

Peterson and Anderson (1987) showed that Boltzmann machines with large
numbers of units can be approximated by deterministic networks whose
logistic activations represent the average activations of the discrete
Boltzmann units (the Mean Field Approximation). Under these conditions,
a learning rule that I call Contrastive Hebbian Learning (CHL) was shown
to be a good approximation to the Boltzmann weight update rule and to
achieve learning speeds comparable to backpropagation. Hinton (1989)
showed that for Mean Field networks, CHL is at least a first-order
approximation to gradient descent on an error function.

The purpose of this paper is to show that CHL works with any interactive
network with bounded, continuous activation functions and symmetric
weights. The approach taken does not presume the existence of Boltzmann
machines whose behavior is approximated by Mean Field networks. It is
also shown that CHL performs gradient descent on a contrastive function
of the same form investigated by Hinton (1989). The paper is divided
into two sections and one appendix. In Section 1 I study the dynamics of
the activations in interactive networks. Section 2 shows how to modify
the weights so that the stable states of the network reproduce desired
patterns of activations. The appendix contains mathematical details and
some comments on how to implement Contrastive Hebbian Learning in
interactive networks.
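For readers who want a concrete picture of the algorithm before fetching
the file, here is a minimal sketch of CHL in Python (modern notation,
not from the paper; the network size, learning rate, and settling
schedule below are all illustrative). The network settles twice, once
with only the input units clamped (the minus phase) and once with both
the input and the desired output units clamped (the plus phase); each
weight then changes in proportion to the difference of the two Hebbian
products, which keeps the weight matrix symmetric.

import numpy as np

def settle(W, bias, clamp_idx, clamp_val, steps=60, dt=0.2):
    """Relax activations to a stable state with some units clamped."""
    a = np.full(W.shape[0], 0.5)
    a[clamp_idx] = clamp_val
    for _ in range(steps):
        net = W @ a + bias
        a += dt * (1.0 / (1.0 + np.exp(-net)) - a)  # logistic dynamics
        a[clamp_idx] = clamp_val                    # hold clamped units
    return a

def chl_step(W, bias, in_idx, x, out_idx, y, lr=0.1):
    """One CHL update: minus phase, plus phase, Hebbian contrast."""
    a_minus = settle(W, bias, in_idx, x)
    a_plus = settle(W, bias, np.concatenate([in_idx, out_idx]),
                    np.concatenate([x, y]))
    dW = lr * (np.outer(a_plus, a_plus) - np.outer(a_minus, a_minus))
    np.fill_diagonal(dW, 0.0)  # no self-connections
    W += dW                    # symmetric, since outer(a, a) is symmetric
    bias += lr * (a_plus - a_minus)

# Illustrative usage: a 5-unit network, units 0-1 as inputs, unit 4 as
# the output; W must start symmetric with a zero diagonal.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (5, 5)); W = 0.5 * (W + W.T)
np.fill_diagonal(W, 0.0)
bias = np.zeros(5)
chl_step(W, bias, np.array([0, 1]), np.array([1.0, 0.0]),
         np.array([4]), np.array([1.0]))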

The format is LaTeX. Here are the instructions to get the file:
unix> ftp cheops.cis.ohio-state.edu
Name:anonymous
Password:neuron
ftp> cd pub/neuroprose
ftp> get Movellan.CHL.LateX




