Convergence

David Horn HORN%TAUNIVM.BITNET at VMA.CC.CMU.EDU
Wed Feb 28 17:03:30 EST 1990


In-reply-to: Dave Scortese and Wayne Mesard

We have demonstrated how a convergent neural network of the
Hopfield type can be turned into a system that displays open-
ended motion in pattern space (the space of all its input memories).
Its dynamics converge on a short time scale, moving toward
an attractor, but then escape it, leading to non-convergent
motion on a long time scale.  By adding pointers connecting
different memories, we obtain a process that bears some
resemblance to associative thinking.

It is interesting to note that such non-convergent behavior
does not require random neural activity. We made it work
by introducing dynamical thresholds as new degrees of freedom.
Each threshold changes as a function of the firing history of
the neuron to which it belongs (e.g. mimicking fatigue). This can
destabilize the attractors of the neural network,
turning them into transients of its motion.
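The mechanism can be illustrated with a minimal sketch in Python/NumPy. The update rules and all parameter values below (the fatigue trace, its decay constant c, and the threshold strength b) are illustrative assumptions, not the exact equations of the cited papers: each neuron accumulates a trace of its recent firing, and a threshold proportional to that trace is subtracted from its input field.

```python
import numpy as np

# Sketch of a Hopfield network with dynamical (fatigue) thresholds.
# Parameters and update rules are illustrative assumptions.

rng = np.random.default_rng(0)
N, P = 100, 5                          # neurons, stored patterns
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Standard Hebbian weights with zero diagonal
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def run(b, c=5.0, steps=60):
    """Deterministic updates; b scales the fatigue threshold (b=0: fixed)."""
    S = patterns[0].copy()             # start in one stored memory
    R = np.zeros(N)                    # firing-history trace per neuron
    overlaps = []
    for _ in range(steps):
        R = (1.0 - 1.0 / c) * R + S    # accumulate recent firing (fatigue)
        theta = b * R                  # dynamical threshold
        S = np.sign(W @ S - theta)     # update against the raised threshold
        S[S == 0] = 1.0
        overlaps.append(patterns[0] @ S / N)   # overlap with initial memory
    return overlaps

m_static = run(b=0.0)                  # fixed threshold: stays in the attractor
m_dynamic = run(b=0.3)                 # fatigue: the attractor becomes a transient
print(min(m_static), min(m_dynamic))
```

With b = 0 the overlap with the initial memory stays at 1: the network converges and remains in the attractor. With the fatigue term switched on, the growing threshold eventually exceeds the input field, the state is pushed out of the memory, and the overlap drops, i.e. the attractor has become a transient of the motion.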

References:
D. Horn and M. Usher, Neural Networks with Dynamical Thresholds,
Phys. Rev. A 40 (1989) 1036-1044.
D. Horn and M. Usher, Motion in the Space of Memory Patterns,
Proc. IJCNN (Washington meeting, June 1989) I-61-66.
D. Horn and M. Usher, Excitatory-Inhibitory Networks with
Dynamical Thresholds, preprint.

