some papers

Nikos Vlassis vlassis at science.uva.nl
Mon May 7 10:42:40 EDT 2001


Dear Connectionists,

The following three papers have been accepted for publication and might
be of interest.

N. Vlassis, Y. Motomura, and B. Krose
Supervised Dimension Reduction of Intrinsically Low-dimensional Data
Neural Computation (to appear)
ftp://ftp.science.uva.nl/pub/computer-systems/aut-sys/reports/Vlassis01nc.ps.gz

Abstract: High-dimensional data generated by a system with limited
degrees of freedom are often constrained to low-dimensional manifolds
in the original space. In this paper we investigate dimension
reduction methods for such intrinsically low-dimensional data through
linear projections that preserve the manifold structure of the data.
For intrinsically one-dimensional data this implies projecting to a
curve in the plane with as few self-intersections as possible. We
propose a supervised projection pursuit method which can be regarded
as an extension of the single-index model for nonparametric
regression. We show results from a toy example and two robotic
applications.

Keywords: dimension reduction, feature extraction, intrinsic
dimensionality, projection pursuit, simple curve, single-index model,
multiple-index model, appearance modeling, mobile robot localization.
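
As a rough illustration of the single-index idea behind the method,
here is a minimal sketch in Python (my own simplification, not the
paper's algorithm: the random search over the projection direction
and the fixed smoother bandwidth are assumptions for illustration):

import numpy as np

def kernel_smooth(t, y, h=0.3):
    # Nadaraya-Watson estimate of g at the sample points themselves.
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def fit_single_index(X, y, n_iter=200, h=0.3, seed=0):
    # Fit y ~ g(w'x): alternate a nonparametric smoother for g with
    # a crude random search over the projection direction w.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    best = np.mean((y - kernel_smooth(X @ w, y, h)) ** 2)
    for _ in range(n_iter):
        cand = w + 0.1 * rng.normal(size=w.size)
        cand /= np.linalg.norm(cand)
        err = np.mean((y - kernel_smooth(X @ cand, y, h)) ** 2)
        if err < best:
            w, best = cand, err
    return w

A supervised criterion of this form drives the projection toward
directions along which the response varies smoothly, which is the
spirit of the projection pursuit extension described in the paper.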
----

N. Vlassis and Y. Motomura
Efficient Source Adaptivity in Independent Component Analysis
IEEE Trans. Neural Networks (to appear)
ftp://ftp.science.uva.nl/pub/computer-systems/aut-sys/reports/Vlassis01tnn.ps.gz

Abstract: A basic element in most ICA algorithms is the choice of a
model for the score functions of the unknown sources. While this
choice is usually based on approximations, for large data sets it is
possible to achieve 'source adaptivity' by estimating the 'true'
score functions of the sources directly from the data. In this paper
we describe an efficient scheme for achieving this by extending the
fast density estimation method of Silverman (1982). We show with a
real and a synthetic experiment that our method can provide more
accurate solutions than state-of-the-art methods when optimization is
carried out in the vicinity of the global minimum of the contrast
function.

Keywords: Independent component analysis, blind signal separation,
source adaptivity, score function estimation.
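
To see what source adaptivity means in practice, here is a minimal
sketch (an assumption-laden simplification: a direct O(n^2) Gaussian
kernel density estimate stands in for the fast Silverman-style
estimator the paper extends, plugged into a standard natural-gradient
ICA update):

import numpy as np

def kde_score(s, h=None):
    # Score -p'(s)/p(s) of a 1-D sample at its own points, from a
    # Gaussian kernel density estimate.
    n = s.size
    if h is None:
        h = 1.06 * s.std() * n ** (-0.2)  # Silverman's rule of thumb
    d = (s[:, None] - s[None, :]) / h
    k = np.exp(-0.5 * d ** 2)
    p = k.sum(axis=1)                     # density, up to a constant
    dp = -(d * k).sum(axis=1) / h         # derivative, same constant
    return -dp / p

def ica_step(W, X, lr=0.1):
    # One natural-gradient update W <- W + lr (I - E[phi(y) y']) W,
    # with the score function phi of each source estimated from data.
    Y = W @ X
    phi = np.vstack([kde_score(y) for y in Y])
    n = X.shape[1]
    return W + lr * (np.eye(W.shape[0]) - (phi @ Y.T) / n) @ W

Iterating ica_step on whitened mixtures X adapts the nonlinearity to
each source's actual distribution rather than relying on a fixed
sub-/super-Gaussian model.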
----

N. Vlassis, A. Likas
A greedy EM algorithm for Gaussian mixture learning
Neural Processing Letters (to appear)
ftp://ftp.science.uva.nl/pub/computer-systems/aut-sys/reports/Vlassis01npl.ps.gz

Abstract: Learning a Gaussian mixture with a local algorithm like EM
can be difficult because (i) the true number of mixing components is
usually unknown, (ii) there is no generally accepted method for
parameter initialization, and (iii) the algorithm can get stuck in
one of the many local maxima of the likelihood function. In this
paper we propose a greedy algorithm for learning a Gaussian mixture
which tries to overcome these limitations. In particular, by starting
with a single component and adding components sequentially up to a
maximum number k, the algorithm can achieve solutions superior to
those of EM with k components in terms of the likelihood of a test
set. The algorithm is based on recent theoretical results on
incremental mixture density estimation, and uses a combination of
global and local search each time a new component is added to the
mixture.
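A minimal sketch of the greedy strategy using scikit-learn's EM (an
assumption for illustration: inserting the new component at a random
data point stands in for the paper's global search, and the exact
allocation scheme differs):

import numpy as np
from sklearn.mixture import GaussianMixture

def greedy_gmm(X, k_max, seed=0):
    # Start with one component; at each step insert a new component
    # (small weight, mean at a random data point) and rerun EM on
    # all parameters of the enlarged mixture.
    rng = np.random.default_rng(seed)
    gmm = GaussianMixture(n_components=1).fit(X)
    models = [gmm]
    for k in range(2, k_max + 1):
        w = np.concatenate([gmm.weights_ * (1 - 1 / k), [1 / k]])
        mu = np.vstack([gmm.means_, X[rng.integers(len(X))]])
        gmm = GaussianMixture(n_components=k, weights_init=w,
                              means_init=mu).fit(X)
        models.append(gmm)
    return models

The returned models can then be compared by held-out log-likelihood
(gmm.score on a test set) to choose the number of components, in the
spirit of the test-set criterion mentioned in the abstract.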


-- 
http://www.science.uva.nl/~vlassis



