preprints available

Sebastian Seung seung at physics.bell-labs.com
Mon Jan 12 15:52:43 EST 1998


The following preprints are now available at 
http://www.bell-labs.com/user/seung

----------------------------------------------------------------------

	 Learning continuous attractors in recurrent networks
			     H. S. Seung

One approach to invariant object recognition employs a recurrent
neural network as an associative memory.  In the standard depiction of
the network's state space, memories of objects are stored as
attractive fixed points of the dynamics.  I argue for a modification
of this picture: if an object has a continuous family of
instantiations, it should be represented by a continuous attractor.
This idea is illustrated with a network that learns to complete
patterns.  To perform the task of filling in missing information, the
network develops a continuous attractor that models the manifold from
which the patterns are drawn.  From a statistical viewpoint, the
pattern completion task allows a formulation of unsupervised learning
in terms of regression rather than density estimation.
http://www.bell-labs.com/user/seung/papers/continuous.ps.gz
[To appear in Adv. Neural Info. Proc. Syst. 10 (1998)]
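
As a rough illustration of the pattern-completion setting (not the
paper's learning algorithm), the NumPy sketch below builds a continuous
attractor by hand -- a linear model of a manifold of shifted activity
bumps -- and completes a pattern by clamping the observed units and
relaxing the rest toward the attractor.  The bump manifold, the
projection-based weights, and all parameter values are illustrative
assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 40                                    # number of visible units

# Hypothetical pattern manifold: circularly shifted bumps of activity.
def bump(center, width=4.0):
    i = np.arange(n)
    d = np.minimum(np.abs(i - center), n - np.abs(i - center))
    return np.exp(-0.5 * (d / width) ** 2)

data = np.stack([bump(c) for c in np.linspace(0, n, 200, endpoint=False)])
mean = data.mean(axis=0)

# Crude linear model of the manifold: the recurrent weights project onto
# the leading principal subspace of the data.  (The paper learns the
# attractor; here it is constructed so the fill-in dynamics can be seen.)
_, _, Vt = np.linalg.svd(data - mean, full_matrices=False)
k = 10
P = Vt[:k].T @ Vt[:k]

# Pattern completion: clamp a random half of the units to a novel pattern
# and let the remaining units relax toward the attractor.
target = bump(18.5)
observed = rng.permutation(n) < n // 2
x = np.where(observed, target, mean)

for _ in range(500):
    x_next = mean + P @ (x - mean)           # step toward the attractor
    x = np.where(observed, target, x_next)   # keep observed units clamped

print("max completion error on unobserved units:",
      float(np.abs(x - target)[~observed].max()))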

----------------------------------------------------------------------

  Minimax and Hamiltonian dynamics of excitatory-inhibitory networks
  H. S. Seung, T. J. Richardson, J. C. Lagarias, and J. J. Hopfield

A Lyapunov function for excitatory-inhibitory networks is constructed.
The construction assumes symmetric interactions within excitatory and
inhibitory populations of neurons, and antisymmetric interactions
between populations.  The Lyapunov function yields sufficient
conditions for the global asymptotic stability of fixed points.  If
these conditions are violated, limit cycles may be stable.  The
relations of the Lyapunov function to optimization theory and
classical mechanics are revealed by minimax and dissipative
Hamiltonian forms of the network dynamics.
http://www.bell-labs.com/user/seung/papers/minimax.ps.gz
[To appear in Adv. Neural Info. Proc. Syst. 10 (1998)]
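
The simplest member of this network class -- one excitatory and one
inhibitory unit with antisymmetric coupling between them -- already
shows the two regimes mentioned above.  The rate equations and
parameter values in this sketch are illustrative assumptions, not
taken from the paper.

import numpy as np

def simulate(w_ee, w_ei, tau_e=1.0, tau_i=2.0, T=200.0, dt=0.01):
    # Two-unit E-I network: the E->I weight is +w_ei and the I->E weight
    # is -w_ei (antisymmetric between populations); w_ee is the symmetric
    # excitatory self-interaction.  No inhibitory self-coupling here.
    relu = lambda u: np.maximum(u, 0.0)
    v_e, v_i = 0.1, 0.0
    trace = []
    for _ in range(int(T / dt)):
        dv_e = (-v_e + relu(w_ee * v_e - w_ei * v_i + 1.0)) / tau_e
        dv_i = (-v_i + relu(w_ei * v_e)) / tau_i
        v_e, v_i = v_e + dt * dv_e, v_i + dt * dv_i
        trace.append(v_e)
    return np.array(trace)

quiet = simulate(w_ee=0.5, w_ei=1.0)  # weak excitation: stable fixed point
osc = simulate(w_ee=2.5, w_ei=2.0)    # strong excitation, slow inhibition

tail = slice(-5000, None)
print("late-time swing, stable case     :",
      float(quiet[tail].max() - quiet[tail].min()))
print("late-time swing, oscillating case:",
      float(osc[tail].max() - osc[tail].min()))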

----------------------------------------------------------------------

     Learning generative models with the up-propagation algorithm
		       J.-H. Oh and H. S. Seung

Up-propagation is an algorithm for inverting and learning neural
network generative models.  Sensory input is processed by
inverting a model that generates patterns from hidden variables using
top-down connections.  The inversion process is iterative, utilizing a
negative feedback loop that depends on an error signal propagated by
bottom-up connections.  The error signal is also used to learn the
generative model from examples.  The algorithm is benchmarked against
principal component analysis in experiments on images of handwritten
digits.
http://www.bell-labs.com/user/seung/papers/up-prop.ps.gz
[To appear in Adv. Neural Info. Proc. Syst. 10 (1998)]
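
A minimal sketch of the kind of generate-and-feed-back loop described
above: a single-layer generative model x_hat = f(W h), inversion of the
hidden variables by gradient steps on the reconstruction error carried
bottom-up through W.T, and a weight update driven by the same error.
The nonlinearity, step sizes, and single-layer setup are assumptions
for illustration, not the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(0)
f, df = np.tanh, lambda u: 1.0 - np.tanh(u) ** 2

n_visible, n_hidden = 16, 4
W = 0.1 * rng.standard_normal((n_visible, n_hidden))   # top-down weights

def invert(x, W, steps=100, eta=0.02):
    # Infer hidden variables for input x: generate x_hat = f(W h) top-down,
    # propagate the error (x - x_hat) bottom-up through W.T, and nudge h so
    # that the reconstruction error shrinks (negative feedback).
    h = np.zeros(n_hidden)
    for _ in range(steps):
        u = W @ h
        err = x - f(u)
        h = h + eta * (W.T @ (df(u) * err))
    return h

# Toy data drawn from a random generative process of the same form.
true_W = rng.standard_normal((n_visible, n_hidden))
data = f(rng.standard_normal((200, n_hidden)) @ true_W.T)

lr = 0.02
for epoch in range(20):
    total = 0.0
    for x in data:
        h = invert(x, W)
        u = W @ h
        err = x - f(u)                    # same error signal drives learning
        W = W + lr * np.outer(df(u) * err, h)
        total += float(err @ err)
print("mean squared reconstruction error after training:", total / len(data))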

----------------------------------------------------------------------

		 The rectified Gaussian distribution
	       N. D. Socci, D. D. Lee, and H. S. Seung

A simple but powerful modification of the standard Gaussian
distribution is studied.  The variables of the rectified Gaussian are
constrained to be nonnegative, enabling the use of nonconvex energy
functions.  Two multimodal examples, the competitive and cooperative
distributions, illustrate the representational power of the rectified
Gaussian.  Since the cooperative distribution can represent the
translations of a pattern, it demonstrates the potential of the
rectified Gaussian for modeling pattern manifolds.
http://www.bell-labs.com/user/seung/papers/rg.ps.gz
[To appear in Adv. Neural Info. Proc. Syst. 10 (1998)]
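
The central point is easy to see in two dimensions.  With the variables
constrained to the nonnegative orthant, the quadratic energy
E(x) = x'Ax/2 - b'x may use an indefinite A, and the distribution
proportional to exp(-E) can then have several modes.  The particular
matrix below (two units in competition) is an illustrative choice, not
one of the paper's examples.

import numpy as np

A = np.array([[1.0, 2.0],    # indefinite: eigenvalues 3 and -1
              [2.0, 1.0]])
b = np.array([1.0, 1.0])

def energy(x):
    return 0.5 * x @ A @ x - b @ x

def mode(x0, steps=2000, eta=0.05):
    # Find a local energy minimum by projected gradient descent on x >= 0.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = np.maximum(x - eta * (A @ x - b), 0.0)
    return x

# Different initializations fall into different modes, at roughly (1, 0)
# and (0, 1): one unit wins the competition and the other is silenced.
for x0 in ([1.0, 0.1], [0.1, 1.0]):
    m = mode(x0)
    print(x0, "->", np.round(m, 3), "energy", round(energy(m), 3))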

----------------------------------------------------------------------
       
     Pattern analysis and synthesis in attractor neural networks
			     H. S. Seung

The representation of hidden variable models by attractor neural
networks is studied.  Memories are stored in a dynamical attractor
that is a continuous manifold of fixed points, as illustrated by
linear and nonlinear networks with hidden neurons.  Pattern analysis
and synthesis are forms of pattern completion by recall of a stored
memory.  Analysis and synthesis in the linear network are performed by
bottom-up and top-down connections.  In the nonlinear network, the
analysis computation additionally requires a rectification nonlinearity
and inner-product inhibition between hidden neurons.
http://www.bell-labs.com/user/seung/papers/pattern.ps.gz
[In Theoretical Aspects of Neural Computation: A Multidisciplinary
Perspective, Proceedings of TANC'97.  Springer-Verlag (1997)]
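
The nonlinear analysis/synthesis circuit can be sketched as follows.
The exact update used here is an assumption for illustration rather
than the paper's equations: hidden neurons receive bottom-up drive,
inhibit one another in proportion to the inner products of their
weight vectors, and are rectified to stay nonnegative; the top-down
connections then resynthesize the pattern from the hidden code.

import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 20, 5

W = np.abs(rng.standard_normal((n_visible, n_hidden)))  # top-down weights
W = W / np.linalg.norm(W, axis=0)                       # unit-norm columns
L = W.T @ W - np.eye(n_hidden)   # inner-product inhibition between hidden units

def analyze(x, steps=2000, dt=0.1):
    # Analysis: relax the hidden activities under bottom-up drive W.T x,
    # lateral inhibition L h, and a rectification keeping them nonnegative.
    h = np.zeros(n_hidden)
    for _ in range(steps):
        h = h + dt * (-h + np.maximum(W.T @ x - L @ h, 0.0))
    return h

def synthesize(h):
    # Synthesis: top-down connections regenerate the pattern from the code.
    return W @ h

x = synthesize(np.array([1.0, 0.0, 0.5, 0.0, 0.0]))  # pattern on the manifold
h = analyze(x)
print("hidden code:", np.round(h, 2))
print("reconstruction error:", float(np.linalg.norm(synthesize(h) - x)))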

