PCA bibliography

Terence D. Sanger tds at ai.mit.edu
Fri Nov 19 14:59:30 EST 1993


Dear Connectionists,

Since I sent out the offer to supply a PCA bibliography, I have received so
many requests that I now realize I should have included it with the
original mailing!  My mistake.  A file called "pca.bib" is now available
via anonymous ftp from the same site (instructions below).  This file is in
a not-very-clean BibTeX format.  I won't even pretend that it is a complete
bibliography, since many people are currently working in this field and I
don't yet have all the most recent reprints.  If I've missed anyone or
there are mistakes, please send me some email and I'll update the
bibliography for everyone.

Thanks,
			Terry Sanger


Instructions for retrieving bibliography database:

ftp ftp.ai.mit.edu
login: anonymous
password: yourname at yoursite
cd pub/sanger-papers
get pca.bib
quit



P.S.: Several people have commented to me that the way I phrased the
hypothesis seems to imply the use of time-sequential deflation.  In other
words, it sounds as if the first eigenvector must be found and removed from
the data, before the second is found.  Most algorithms do not do this, and
instead deflate the first learned component while it is being learned.  Thus
learning of all components continues simultaneously "in parallel".  I meant
to include this case, but I could not think of any succinct way to say it!
Technically, it is not very different, since most convergence proofs assume
sequential learning of the outputs.  But in practice, algorithms which
learn all outputs in parallel seem to perform faster than those that learn
one output at a time.  I have certainly found this to be true for GHA, and
people have mentioned that it holds true for other algorithms as well.
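For anyone who wants to see what the parallel version looks like, here is a
rough NumPy sketch of a GHA update; the lower-triangular term builds the
deflation into the rule itself, so every output is updated on every sample
rather than one eigenvector at a time.  The learning rate, toy data, and
number of components are only illustrative assumptions, not part of the
bibliography or the original algorithm descriptions.

    # Minimal sketch of Sanger's Generalized Hebbian Algorithm (GHA).
    # Learning rate, data, and component count are illustrative only.
    import numpy as np

    def gha_step(W, x, lr):
        """One GHA update.  W is (m, n): row i estimates the i-th principal
        component of the n-dimensional, zero-mean input x."""
        y = W @ x                                  # all m outputs at once
        # tril(y y^T) W subtracts, from each row i, the parts of x already
        # explained by rows 1..i, so deflation happens while every
        # component is still being learned ("in parallel").
        W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    # Illustrative use: recover the top two principal directions.
    rng = np.random.default_rng(0)
    C = np.array([[3.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, 0.5]])
    X = rng.multivariate_normal(np.zeros(3), C, size=20000)
    W = rng.normal(scale=0.1, size=(2, 3))
    for x in X:
        W = gha_step(W, x, lr=0.005)
    # Rows of W should approximate the top eigenvectors of C.
    print(W / np.linalg.norm(W, axis=1, keepdims=True))

A sequential-deflation scheme would instead train row 1 to convergence,
subtract its projection from the data, and only then start on row 2; the
sketch above updates both rows on every sample.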

