PCA bibliography

Gary Cottrell gary at cs.ucsd.edu
Sat Nov 20 15:07:23 EST 1993


RE:
>P.S.: Several people have commented to me that the way I phrased the
>hypothesis seems to imply the use of time-sequential deflation.  In other
>words, it sounds as if the first eigenvector must be found and removed from
>the data, before the second is found.  Most algorithms do not do this, and
>instead deflate the first learned component while it is being learned.  Thus
>learning of all components continues simultaneously "in parallel". 

In fact, that's also how it works for the "Category 2" systems
mentioned in your travelogue of methods. Straight LMS with hidden
units will learn the principal components at different rates, with the
highest rate on the first, then the second, and so on, up to the number
of hidden units. Of course, these systems only span the principal
subspace rather than learning the PCs directly, but I find that an
advantage. (See Baldi & Hornik 1989, and the Cottrell & Munro SPIE 1988
paper.)
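The "straight LMS" claim is easy to check numerically. Here is a minimal
sketch (NumPy, synthetic Gaussian data; the sizes, rates, and variable names
are my own illustrative choices, not anything from the papers cited): a
two-layer linear autoencoder trained by plain gradient descent on squared
reconstruction error ends up matching the k-component PCA reconstruction,
with no explicit deflation anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data with two dominant directions of variance.
n, d, k = 2000, 5, 2
scales = np.array([3.0, 2.0, 0.3, 0.2, 0.1])
X = rng.standard_normal((n, d)) * scales
X -= X.mean(axis=0)

# Linear autoencoder x -> W2 @ W1 @ x with k hidden units, trained by
# gradient descent on squared error (the "straight LMS" setup).
W1 = 0.1 * rng.standard_normal((k, d))
W2 = 0.1 * rng.standard_normal((d, k))
lr = 0.02
for _ in range(3000):
    H = X @ W1.T                 # hidden activations, shape (n, k)
    E = H @ W2.T - X             # reconstruction errors, shape (n, d)
    gW2 = E.T @ H / n            # gradient w.r.t. decoder weights
    gW1 = (E @ W2).T @ X / n     # gradient w.r.t. encoder weights
    W2 -= lr * gW2
    W1 -= lr * gW1

# Compare against exact PCA: the trained net should reach (essentially)
# the same reconstruction error as the top-k principal components.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = X @ Vt[:k].T @ Vt[:k]
err_ae = np.mean(np.sum((X @ W1.T @ W2.T - X) ** 2, axis=1))
err_pca = np.mean(np.sum((X_pca - X) ** 2, axis=1))
print(err_ae, err_pca)  # nearly equal once training has converged
```

Note that the hidden weights need not equal the eigenvectors themselves:
any invertible mixture of them gives the same reconstruction, which is
exactly the "spans the principal subspace" point above.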

Also, if you add more hidden layers to a nonlinear system, as in DeMers
& Cottrell (1993), you can learn better representations, in the sense that
you can find the actual dimensionality of the system you are modeling,
with respect to your error criterion. So, for example, a helix in 3-space
will be found to be one-dimensional instead of three, and data from 3.5-D
Mackey-Glass will be found to be either 3- or 4-dimensional, depending on
the reconstruction fidelity required. We don't know how to do half a
hidden unit yet, though!
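A quick way to see why the helix needs the nonlinear machinery (this is
just a NumPy illustration of the point, not the DeMers & Cottrell
procedure): linear PCA cannot compress a helix below three dimensions,
since all three singular values of the data are substantial, yet the single
parameter t regenerates the curve exactly, so its intrinsic dimensionality
is one.

```python
import numpy as np

# A helix in 3-space: intrinsically 1-D, but curled through all 3 axes.
t = np.linspace(0.0, 4.0 * np.pi, 1000)
helix = np.column_stack([np.cos(t), np.sin(t), t / (4.0 * np.pi)])

# Linear PCA sees three non-negligible directions of variance...
centered = helix - helix.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
print(s / s[0])  # all three ratios are well above zero

# ...but the one scalar t regenerates the data perfectly: the manifold
# is 1-D, which is what a deep nonlinear autoencoder can discover.
rebuilt = np.column_stack([np.cos(t), np.sin(t), t / (4.0 * np.pi)])
assert np.allclose(rebuilt, helix)
```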

Gary Cottrell 619-534-6640
Reception: 619-534-6005 FAX: 619-534-7029 	"Only connect"
Computer Science and Engineering 0114
University of California San Diego			-E.M. Forster
La Jolla, Ca. 92093
gary at cs.ucsd.edu (INTERNET)
gcottrell at ucsd.edu (BITNET, almost anything)
..!uunet!ucsd!gcottrell (UUCP)


References:

Baldi, P. and Hornik, K. (1989) Neural Networks and
Principal Component Analysis: Learning from Examples
without Local Minima. Neural Networks 2, 53--58.

Cottrell, G.W. and Munro, P. (1988) Principal components analysis
of images via back propagation. Invited paper in Proceedings of
the Society of Photo-Optical Instrumentation Engineers,
Cambridge, MA.
Available from erica at cs.ucsd.edu

DeMers, D. & Cottrell, G.W. (1993)
Nonlinear dimensionality reduction.
In Hanson, Cowan & Giles (Eds.),
Advances in neural information processing systems 5, pp. 580-587,
San Mateo, CA: Morgan Kaufmann.
Available on neuroprose as demers.nips92-nldr.ps.Z.