paper announcement

Gert Cauwenberghs gert at cco.caltech.edu
Fri Jan 15 17:03:06 EST 1993


A Fast Stochastic Error-Descent Algorithm for Supervised Learning 
and Optimization

To appear in the NIPS 5 proceedings (Morgan Kaufmann, 1993).

Gert Cauwenberghs
California Institute of Technology
Mail-Code 128-95
Pasadena, CA 91125
E-mail: gert at cco.caltech.edu

Abstract

A parallel stochastic algorithm is investigated for error-descent learning and
optimization in deterministic networks of arbitrary topology.  No *explicit*
information about internal network structure is needed.  The method
is based on the model-free distributed learning mechanism of Dembo and
Kailath.  A modified parameter update rule is proposed, under which each
individual perturbation of the parameter vector contributes a decrease in
error, allowing substantially faster learning.  Furthermore, the
modified algorithm supports learning time-varying features in dynamical
networks.  We analyze the convergence and scaling properties of the algorithm,
and present simulation results for dynamic trajectory learning in recurrent
networks.
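
For readers who want a feel for the update before fetching the paper, the
sketch below illustrates one plausible form of parallel perturbative error
descent in Python with NumPy.  This is a minimal sketch only: the names
(stochastic_error_descent, error_fn, lr, sigma, steps) are illustrative
assumptions, not the paper's notation, and it does not reproduce the paper's
modified rule or analysis.

  import numpy as np

  def stochastic_error_descent(theta, error_fn, lr=0.01, sigma=0.01,
                               steps=1000):
      # Model-free error descent by parallel random perturbation: only a
      # global error measurement is needed, no gradients and no explicit
      # knowledge of the internal network structure.
      for _ in range(steps):
          # Perturb every parameter in parallel by a random +/- sigma step.
          pi = sigma * np.random.choice([-1.0, 1.0], size=theta.shape)
          # Observe the change in error caused by the perturbation alone.
          delta_e = error_fn(theta + pi) - error_fn(theta)
          # Move against the perturbation when the error rose, along it
          # when the error fell; averaged over perturbations this follows
          # the negative error gradient.
          theta = theta - lr * delta_e * pi / sigma**2
      return theta

  # Example: minimize a simple quadratic error surface.
  theta = stochastic_error_descent(np.array([2.0, -3.0]),
                                   lambda t: float(np.sum(t**2)))

Because the update is sign-correlated with the measured error change, each
step decreases the error to first order, which is the spirit of the modified
rule described in the abstract.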

Now available in the neuroprose archive:
  archive.cis.ohio-state.edu (128.146.8.52)
   pub/neuroprose directory
under the file name
  cauwenberghs.nips92.ps.Z
(compressed PostScript).


