No subject

Ray White (white at teetot.acusd.edu)
Fri Nov 8 13:02:34 EST 1991


In reply to:

> Manoel Fernando Tenorio <tenorio at ecn.purdue.edu>
> --------
> How is that different from Sanger's principal component algorithm? (NIPS, 90).
> --ft.
> Pls send answer to the net.

(Where "that" refers to my 'Competitive Hebbian Learning', to be published
in Neural Networks, 1992, in response to Yoshio Yamamoto.)

The Sanger paper that I think of in this connection is the Neural Networks
paper: T. Sanger (1989), 'Optimal unsupervised learning...', Neural Networks, 2,
459-473. There is certainly some relation, in that each is a modification
of Hebbian learning. And I would think that one could also apply Sanger's
algorithm to Yoshio Yamamoto's problem - training hidden units to ignore
input components which are uncorrelated with the desired output.

As I understand it, Sanger's 'Generalized Hebbian learning' trains units
to find, successively, the principal components of the input, starting with
the most important and working down, for as many units as you use.
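
(For concreteness, here is a rough numpy sketch of Sanger's rule as I
read it in the 1989 paper; the toy data, learning rate, and variable
names are my own, so treat it as an illustration, not Sanger's code.)

    import numpy as np

    def sanger_step(W, x, eta=0.01):
        # One update of Sanger's rule.  W is (m, n), one unit per row;
        # x is a zero-mean input vector of length n.
        y = W @ x                                   # unit outputs
        # The lower-triangular term makes unit i subtract what units
        # 1..i have already extracted, so the rows converge, in order,
        # to the leading principal components of the input.
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    # Toy run: correlated 2-D Gaussian input, two units.
    rng = np.random.default_rng(0)
    C = np.array([[3.0, 1.0], [1.0, 1.0]])
    W = rng.normal(scale=0.1, size=(2, 2))
    for x in rng.multivariate_normal([0.0, 0.0], C, size=5000):
        W = sanger_step(W, x, eta=0.005)
    # Rows of W now approximate the eigenvectors of C, largest first.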

Competitive Hebbian Learning, on the other hand, is a simpler
algorithm which trains units to learn, simultaneously, (approximately)
orthogonal linear combinations of the components of the input.  With this
algorithm one does not get the principal components nicely separated out,
but one does get trained units of roughly equal importance.
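
(The preprint mentioned below has the exact update rule; here I will
only sketch the general flavor, Hebbian learning plus competition among
the units.  This winner-take-all version is illustrative only, not the
rule from the paper.)

    import numpy as np

    def wta_hebb_step(W, x, eta=0.05):
        # Only the most active unit adapts (the competition), and it
        # moves toward the input with an Oja-style Hebbian step that
        # keeps its weight vector bounded.  Illustrative only.
        y = W @ x
        i = np.argmax(np.abs(y))
        W[i] += eta * y[i] * (x - y[i] * W[i])
        return W

Because different units win on different inputs, no single unit
dominates, which is what gives trained units of roughly equal
importance.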

For those interested, there is a shorter preliminary version of the
paper in Jordan Pollack's neuroprose archive, where it is
called white.comp-hebb.ps.Z.  Unfortunately that version does not include
the Boolean application which Yoshio Yamamoto's query suggested.


   Ray White	(white at teetot.acusd.edu)
   Depts. of Physics & Computer Science
   University of San Diego
   

