nonlinear principal components analysis

ecm@skew2.kellogg.nwu.edu
Tue Oct 8 15:16:16 EDT 1996


Technical Report Available

Some Theoretical Results on Nonlinear Principal Components Analysis

Edward C. Malthouse
Northwestern University
ecm@nwu.edu

Postscript file available via ftp from mkt2715.kellogg.nwu.edu in
pub/ecm/nlpca.ps


		      A B S T R A C T

Nonlinear principal components analysis (NLPCA) neural networks
are feedforward autoassociative networks with five layers.  The
third layer has fewer nodes than the input or output layers.  NLPCA
has been shown to give better solutions to several feature extraction
problems than existing methods, but very little is known about the
theoretical properties of this method or its estimates.  This paper
studies NLPCA and proposes a geometric interpretation, showing that
NLPCA fits a lower-dimensional curve or surface through the training
data.  The first three layers project observations onto the curve or
surface giving scores.  The last three layers define the curve or
surface.  The first three layers are a continuous function, which
I show has several implications:  NLPCA ``projections'' are
suboptimal, producing larger approximation error than optimal
projections; NLPCA is unable to model curves and surfaces that
intersect themselves; and NLPCA cannot parameterize curves whose
parameterizations have discontinuous jumps.  I establish results on
the identification of score values and discuss their implications for
interpreting them.  I also discuss the relationship between NLPCA and
principal curves and surfaces, another nonlinear feature extraction
method.
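For readers who want a concrete picture of the architecture, below is
a minimal sketch (not from the report) of the five-layer
autoassociative network, using scikit-learn's MLPRegressor as a
stand-in.  The layer sizes, the toy data, and the scores() helper are
illustrative assumptions, not the paper's implementation; note that
sklearn applies the same tanh activation to the bottleneck, whereas
NLPCA formulations often use a linear bottleneck layer.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Toy data: points near a one-dimensional curve (a noisy arc) in R^2.
    rng = np.random.default_rng(0)
    t = rng.uniform(-1.0, 1.0, size=(500, 1))
    X = np.hstack([t, t ** 2]) + rng.normal(scale=0.02, size=(500, 2))

    # Five-layer autoassociative net: input(2) -> mapping(10) ->
    # bottleneck(1) -> demapping(10) -> output(2), trained to
    # reproduce its own input.
    net = MLPRegressor(hidden_layer_sizes=(10, 1, 10), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(X, X)

    # The first three layers (input -> mapping -> bottleneck) form a
    # continuous projection onto the fitted curve; the bottleneck
    # activations are the scores.
    def scores(model, X):
        a = X
        for W, b in zip(model.coefs_[:2], model.intercepts_[:2]):
            a = np.tanh(a @ W + b)
        return a  # shape (n, 1): one score per observation

    s = scores(net, X)
    print("reconstruction MSE:", np.mean((net.predict(X) - X) ** 2))

In this sketch, scores() runs the first three layers (the continuous
projection discussed above) while net.predict() runs all five, so the
reconstruction error measures how well the fitted curve approximates
the training data.
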

Keywords: nonlinear principal components analysis, feature
extraction, data compression, principal curves, principal surfaces.
