TR announcement

Javier R. Movellan movellan at ergo.ucsd.edu
Mon Apr 14 17:08:45 EDT 1997



The following technical report is available online at 
http://cogsci.ucsd.edu  (follow links to Tech Reports & Software).
Physical copies are also available (see the site for information).



               A Learning Theorem for Networks 
              at Detailed Stochastic Equilibrium. 
            
                    Javier R. Movellan
               Department of Cognitive Science
              University of California San Diego


  The paper studies a stochastic extension of continuous recurrent
  neural networks and analyzes gradient descent learning rules for
  training their equilibrium solutions.  A theorem is given that
  specifies sufficient conditions under which the gradient descent
  learning rules become local covariance statistics between two random
  variables: 1) an evaluator, which is the same for all the network
  parameters, and 2) a system variable, which is independent of the
  learning objective.  The generality of the theorem suggests that,
  instead of suppressing the noise present in physical devices, a
  natural alternative is to use it to simplify the credit assignment
  problem.  In deterministic networks, credit assignment requires an
  evaluation signal that is different for each node in the network.
  Surprisingly, when noise is not suppressed, all that is needed is an
  evaluator that is the same for the entire network and a local
  Hebbian signal.  This modularization of signals greatly simplifies
  hardware and software implementations.  The paper shows how the
  theorem applies to four learning objectives that span supervised,
  reinforcement, and unsupervised problems: 1) regression, 2) density
  estimation, 3) risk minimization, and 4) information maximization.
  Simulations, implementation issues, and implications for
  computational neuroscience are discussed.
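
  The announcement does not spell out the exact form of the theorem,
  but the covariance idea can be illustrated with a small sketch.  The
  Python snippet below is an assumption for illustration only, not the
  report's actual rule or conditions: it updates each weight of a
  noisy linear unit in proportion to the sample covariance between a
  single global evaluator and that weight's local Hebbian signal.  The
  function name covariance_update, the Gaussian output noise, and the
  squared-error evaluator are hypothetical choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def covariance_update(w, x, target, sigma=0.5, n_samples=32, lr=0.05):
        """Hypothetical covariance-style rule for one input pattern:
        draw several noisy responses for the same input, then set
            delta w_i  proportional to  Cov(r, x_i * y),
        where r is a single global evaluator (shared by every weight)
        and x_i * y is the local Hebbian signal for weight w_i."""
        eps = sigma * rng.standard_normal(n_samples)
        y = x @ w + eps                      # noisy outputs for the same input
        r = -(y - target) ** 2               # global evaluator, identical for all weights
        hebb = np.outer(y, x)                # local Hebbian signals x_i * y
        # Sample covariance between the evaluator and each Hebbian signal
        delta = ((r - r.mean())[:, None] * (hebb - hebb.mean(axis=0))).mean(axis=0)
        return w + lr * delta / sigma**2     # covariance scales with sigma^2, so rescale

    # Toy regression demo (hypothetical setup, not from the report)
    w_true = np.array([1.5, -0.7])
    w = np.zeros(2)
    for step in range(3000):
        x = rng.standard_normal(2)
        w = covariance_update(w, x, target=x @ w_true)
    print("learned:", np.round(w, 2), "true:", w_true)

  In this toy regression setting the covariance is proportional to the
  gradient of the expected evaluator, so no per-weight error signal is
  ever computed: only the shared evaluator r and the local products
  x_i * y are needed, which is the modularization of signals the
  abstract refers to.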



