Seminar abstract: The Sanguine Algorithm

Gary Cottrell gary at cs.UCSD.EDU
Fri Oct 25 21:59:28 EDT 1991





                                       SEMINAR

                New approaches to learning in Connectionist Networks

                                Garrison W. Cottrell
                                  Richard K. Belew
                          Institute for Neural Declamation
                Condominium Community College of Southern California


               Previous approaches to learning in recurrent networks  often
          involve  batch  learning: A large amount of effort is expended in
          deciding which way to move in weight space, and then a little step
          is taken.  We propose a new algorithm for learning in large networks
          which is orders of magnitude more efficient than batch  learning.
          Based  on the realization that many nearby points in weight space
          are worse  than  where  we  are  now,  we  propose  the  sanguine
          algorithm.  The basic idea is to become happier with where we
          are, rather than going to all the  work  of  moving.   Hence  the
          approach  is  quite  simple:  Randomly  sample  a nearby point in
          weight space.  Compute the error functional based on that  point.
          If it is better than the current point, sample again; repeat until
          we find a nearby point that is worse.  Now, here's the real trick:
          Once we
          find  a  point  worse off than where we are now, we stay where we
          are and increment a "happiness function".   That  is,  we  search
          until  we  find  a  place  that  we  can "look down on" in weight
          space[1].
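
               For concreteness, the sampling loop just described can be
          sketched as follows (Python; the error functional, the notion of
          "nearby", and the shape of the weights are stand-ins of our own
          choosing, since the abstract fixes none of them):

          import numpy as np

          def sanguine_step(w, happiness, error, radius=0.1,
                            rng=np.random.default_rng()):
              # One pass of the sanguine algorithm: sample nearby points in
              # weight space; as long as they are better than w, keep
              # looking.  The moment we find one that is worse, stay where
              # we are and increment the happiness function.
              e_here = error(w)
              while True:
                  w_nearby = w + rng.normal(scale=radius, size=w.shape)
                  if error(w_nearby) < e_here:
                      continue             # better than us -- keep searching
                  return w, happiness + 1  # note: w never actually moves

          # Hypothetical usage: a quadratic error and random initial weights.
          w = np.random.default_rng(0).normal(size=5)
          happiness = 0
          for _ in range(100):
              w, happiness = sanguine_step(w, happiness,
                                           lambda v: float(v @ v))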

               Now, remaining happy with where we are may involve
          a certain amount of minor work to keep this point in weight space
          looking good.  For example, we could change the error  functional
          until  this  point  looks  better than most other points we find.
          Towards this end,  we  can  apply  recent  techniques  (Nowlan  &
          Hinton, 1991) to make the error functional soft and flabby.  Then
          we can stretch the error any way we like.  This approach can also
          be extended to replace computationally expensive "weight-sharing"
          techniques.  If we make the weights soft and flabby, then lifting
          them  becomes much easier since part of the weight always remains
          on the ground, and sharing the burden of  large  weights  becomes
          unnecessary.  Note that this can be done completely locally.
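
               One possible reading of "changing the error functional until
          this point looks better than most other points we find" (our own
          interpretation; the abstract gives no actual procedure) is to keep
          subtracting slack at the current point until it compares favorably
          with a sample of its neighbors:

          import numpy as np

          def stretch_error(error, w_here, radius=0.1, step=0.1,
                            samples=20, rng=np.random.default_rng()):
              # Add slack at w_here until it looks better than most of a
              # fixed sample of nearby points, then return the "softened"
              # error functional.  This is only one way to stretch the error.
              nearby = [w_here + rng.normal(scale=radius, size=w_here.shape)
                        for _ in range(samples)]
              slack = 0.0
              while sum(error(p) > error(w_here) - slack
                        for p in nearby) <= samples // 2:
                  slack += step            # softer and flabbier every round
              return lambda w: (error(w) - slack
                                if np.array_equal(w, w_here) else error(w))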

               We have applied this novel learning procedure to the problem
          of time series prediction.  Using the Mackey-Glass equations with
          dimension 3.5, we give the network values at 0,  6,  12,  and  18
          time units back in time to predict the value of the time series 6
          time units into the  future.  Using  the  Sanguine  Algorithm,  a
          network  with  only  two hidden units rapidly converges to a soft
          error functional.  Of course, the network has  no  idea  of  what
          value will come next; however, the happiness function shows it is
          quite blissful in its ignorance.  We propose that this  technique
          will   have   wide   application   in  Republican  approaches  to
          government.
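
               For reference, the prediction task just described can be set
          up as below: values of the series at 0, 6, 12, and 18 time units
          back predict the value 6 time units ahead.  The Mackey-Glass
          integration uses the customary parameters (beta = 0.2, gamma =
          0.1, n = 10) with a crude Euler step; the delay tau = 30 is an
          assumption on our part, since the abstract quotes only the
          attractor dimension.

          import numpy as np

          def mackey_glass(n_steps, tau=30, beta=0.2, gamma=0.1,
                           n=10.0, dt=1.0, x0=1.2):
              # Euler integration of the Mackey-Glass delay equation
              #   dx/dt = beta*x(t-tau) / (1 + x(t-tau)**n) - gamma*x(t).
              # tau = 30 is an assumption; the abstract states only that
              # the attractor dimension is 3.5.
              x = np.full(n_steps + tau, x0)
              for t in range(tau, n_steps + tau - 1):
                  x_del = x[t - tau]
                  x[t + 1] = x[t] + dt * (beta * x_del / (1 + x_del ** n)
                                          - gamma * x[t])
              return x[tau:]

          def make_dataset(series, lags=(0, 6, 12, 18), horizon=6):
              # Inputs: x(t), x(t-6), x(t-12), x(t-18); target: x(t+6).
              start, stop = max(lags), len(series) - horizon
              X = np.array([[series[t - lag] for lag in lags]
                            for t in range(start, stop)])
              y = series[start + horizon:stop + horizon]
              return X, y

          series = mackey_glass(2000)
          X, y = make_dataset(series)   # inputs for the two-hidden-unit net
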
          ____________________
             [1]Thus the pet name for our algorithm is the "Nyah Nyah Algo-
          rithm".

