Principal Components algorithms

Rick Granger granger at ics.uci.edu
Fri Nov 19 14:32:38 EST 1993


Terry, you point out that correlational or "Hebbian" algorithms identify the
first principal component, and that "sequential deflation" (i.e., successive
removal of the prior component) will then iteratively discover the subsequent
components.  Based on our bottom-up simulation studies some years ago of the
combined olfactory bulb - olfactory paleocortex system, we arrived at an
algorithm that we identified as performing this class of function.  In brief,
cortex processes feedforward inputs from bulb, cortical feedback to bulb
inhibits a portion of the input, the feedforward remainder from bulb is then
processed by cortex, and the process iterates, successively removing portions
of the input and processing what remains.  We described our findings in a 1990
article (Ambros-Ingerson, Granger and Lynch, Science, 247: 1344-1348, 1990).
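
(For readers who want to see the deflation idea concretely, here is a minimal
sketch in Python/NumPy -- purely illustrative, not our bulb-cortex simulation;
the function names, learning rate and epoch counts are my own choices.  A
correlational (Oja-type Hebbian) rule pulls a weight vector toward the leading
principal component; the learned component is then "inhibited" out of the
input, and the next pass finds the next component:

    import numpy as np

    def first_component_hebbian(X, lr=0.01, epochs=200, seed=0):
        # Oja's Hebbian rule: the weight vector converges (approximately)
        # to the leading principal component of the centered data X.
        rng = np.random.default_rng(seed)
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in X:
                y = w @ x                    # the cell's response to this input
                w += lr * y * (x - y * w)    # Hebbian growth with Oja's decay term
        return w / np.linalg.norm(w)

    def sequential_deflation(X, n_components=3):
        # Find components one at a time; after each pass, remove ("inhibit")
        # the part of the input already accounted for, then repeat.
        X = X - X.mean(axis=0)
        components = []
        for _ in range(n_components):
            w = first_component_hebbian(X)
            components.append(w)
            X = X - np.outer(X @ w, w)       # subtract the projection onto w
        return np.array(components)

On toy data the recovered vectors should agree, approximately and up to sign,
with the leading right singular vectors from an ordinary SVD.)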

Moreover, in that paper we identified a superset of this class of functions,
which includes families of algorithms both for PCA and for the disparate
statistical function of hierarchical clustering.  Intuitively, in a network
that computes principal components, all the target cells (or any single target
cell) will respond to all inputs, and with correlational learning the weight
vectors converge to the first principal component.  Consider instead the same
network but with a lateral inhibition (or "competitive" or "winner-take-all")
performance rule (see, e.g., Coultrip et al., Neural Networks, 5: 47-54, 1992).
In this version, instead of all the cells acting in concert to compute the
principal component, each individual "winning" cell will move to the mean of
just that subset of inputs that it wins on.  The response then corresponds to
the statistical operation of clustering (as in the work of von der Malsburg
'73, Grossberg '76, Zipser and Rumelhart '86, and many others).  An
operation of "sequential deflation" (in this case, successive removal of the
cluster via inhibitory feedback from cortex to bulb) identifies sub-clusters
(and then sub-sub-clusters, etc.), iteratively descending a hierarchy that
successively approximates the inputs, performing the operation of hierarchical
clustering.
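
(Again as a purely illustrative sketch -- not the published algorithm, and with
hypothetical names and parameters -- the same deflation scheme with a
winner-take-all performance rule looks roughly like this in Python/NumPy: each
"cell" wins the inputs nearest to its weight vector and moves to their mean;
each learned cluster is then subtracted out and the residuals are clustered
again, descending one level of the hierarchy per pass:

    import numpy as np

    def winner_take_all(X, n_cells=2, epochs=20, seed=0):
        # Competitive layer: each winning cell moves to the mean of the
        # subset of inputs that it wins on.
        rng = np.random.default_rng(seed)
        W = X[rng.choice(len(X), size=n_cells, replace=False)].copy()
        for _ in range(epochs):
            wins = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
            for k in range(n_cells):
                if np.any(wins == k):
                    W[k] = X[wins == k].mean(axis=0)
        return W, wins

    def hierarchical_deflation(X, depth=2, n_cells=2, prefix=()):
        # Cluster, subtract each cluster's center ("remove" it via feedback),
        # then descend on the residuals to expose sub-cluster structure.
        tree = {}
        if depth == 0 or len(X) < n_cells:
            return tree
        W, wins = winner_take_all(X, n_cells)
        for k in range(n_cells):
            members = X[wins == k]
            if len(members) == 0:
                continue
            tree[prefix + (k,)] = W[k]
            residual = members - W[k]        # successive removal of the cluster
            tree.update(hierarchical_deflation(residual, depth - 1, n_cells,
                                               prefix + (k,)))
        return tree

The returned dictionary maps a path such as (0,), (0, 1), ... to the center
learned at that level: a cluster, one of its sub-clusters, and so on.)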

Thus these two operations, hierarchical clustering and principal components
analysis, fall out as special cases of the same general class of "successive
removal" (or "sequential deflation") algorithms.  A formal characterization of
this finding appears in the 1990 Science paper.

(We have hypothesized that an operation of this kind may be performed by the
olfactory bulb-cortex system, and have tested some physiological and behavioral
predictions from the model: e.g., McCollum et al., J.Cog.Neurosci., 3: 293-299,
1991; Granger et al., Psychol.Sci., 2: 116-118, 1991.  This and related work is
reviewed in Gluck and Granger, Annual Review of Neurosci., 16: 667-706, 1993.
Anyone interested in reprints is welcome to send me a mail or email request.)

- Rick Granger
  Center for the Neurobiology of Learning and Memory
  University of California
  Irvine, California 92717

  granger at ics.uci.edu

