Papers Now Available

Mark Girolami giro-ci0 at wpmail.paisley.ac.uk
Thu May 31 04:15:10 EDT 2001


Dear Connectionists,

The following papers are now available for download from
http://cis.paisley.ac.uk/giro-ci0/

1) Orthogonal Series Density Estimation and the Kernel Eigenvalue
Problem

Mark Girolami

To Appear : Neural Computation

Abstract
Kernel principal component analysis has been introduced as a method of
extracting a set of orthonormal nonlinear features from multivariate
data, and many impressive applications have been reported in the
literature. This paper presents the view that the eigenvalue
decomposition of a kernel matrix can also provide the discrete
expansion coefficients required for a non-parametric orthogonal series
density estimator. In addition to offering novel insights into
non-parametric density estimation, this paper gives an intuitively
appealing interpretation of the nonlinear features extracted from
data by kernel principal component analysis.
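To make the connection concrete, here is a minimal sketch (my reading
of the abstract, not the paper's exact estimator) of how a kernel
eigendecomposition can drive an orthogonal series density estimate:
the kernel matrix eigenvectors approximate orthogonal basis functions
via the Nystrom extension, and the series coefficients are the sample
means of those basis functions. The bandwidth h, the number of series
terms n_terms, and all function names are illustrative assumptions.

import numpy as np

def gaussian_kernel(X, Y, h):
    # pairwise Gaussian kernel evaluations between rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h ** 2))

def kpca_density(X, x_query, h=0.5, n_terms=10):
    N = len(X)
    K = gaussian_kernel(X, X, h)
    lam, U = np.linalg.eigh(K)           # eigenvalues in ascending order
    lam, U = lam[::-1], U[:, ::-1]       # reorder: largest first
    # Nystrom approximation of the leading eigenfunctions at x_query
    Kq = gaussian_kernel(x_query, X, h)
    phi = np.sqrt(N) * Kq @ U[:, :n_terms] / lam[:n_terms]
    # series coefficients: sample means of each basis function
    phi_train = np.sqrt(N) * K @ U[:, :n_terms] / lam[:n_terms]
    c = phi_train.mean(axis=0)
    return phi @ c                       # unnormalised density values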


2) A Variational Method for Learning Sparse and Overcomplete
Representations.

Mark Girolami

To Appear : Neural Computation

Abstract
An expectation-maximisation algorithm for learning sparse and
overcomplete data representations is presented. The proposed algorithm
exploits a variational approximation to a range of heavy-tailed
distributions whose limiting case is the Laplacian. A rigorous lower
bound on the sparse prior distribution is derived, which enables the
analytic marginalisation of a lower bound on the data likelihood. This
lower bound in turn enables the development of an expectation-maximisation
algorithm for learning the overcomplete basis vectors and inferring
the most probable basis coefficients.
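A rough sketch of the flavour of algorithm the abstract describes,
under the standard variational treatment of the Laplacian prior:
exp(-|s|) is lower-bounded by the Gaussian form exp(-s^2/(2*xi) - xi/2),
so each E-step reduces to a tractable Gaussian posterior over the
coefficients, the variational parameters xi are tightened from the
posterior moments, and the M-step updates the overcomplete basis. This
is a sketch under those assumptions, not the paper's exact derivation;
names and parameters (noise_var, n_iters) are illustrative.

import numpy as np

def variational_em(X, n_basis, n_iters=50, noise_var=0.1, seed=0):
    rng = np.random.default_rng(seed)
    D, N = X.shape
    A = rng.standard_normal((D, n_basis))    # overcomplete basis
    Xi = np.ones((n_basis, N))               # variational parameters
    for _ in range(n_iters):
        AtA = A.T @ A
        S = np.empty((n_basis, N))
        for n in range(N):
            # E-step: Gaussian posterior under the bounded prior
            prec = AtA / noise_var + np.diag(1.0 / Xi[:, n])
            cov = np.linalg.inv(prec)
            S[:, n] = cov @ A.T @ X[:, n] / noise_var
            # tighten the bound: xi_j <- sqrt(E[s_j^2])
            Xi[:, n] = np.sqrt(S[:, n] ** 2 + np.diag(cov))
        # M-step: regularised least-squares update of the basis
        A = X @ S.T @ np.linalg.inv(S @ S.T + 1e-6 * np.eye(n_basis))
    return A, S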

3) Mercer Kernel Based Clustering in Feature Space

Mark Girolami

To Appear : IEEE Transactions on Neural Networks

Abstract
This paper presents a method for both the unsupervised partitioning of
a sample of data and the estimation of the possible number of inherent
clusters which generate the data. This work exploits the notion that
performing a nonlinear data transformation into some high-dimensional
feature space increases the probability of the linear separability of
the patterns within the transformed space and therefore simplifies the
associated data structure. It is shown that the eigenvectors of a
kernel matrix which defines the implicit mapping provide a means to
estimate the number of clusters inherent within the data, and a
computationally simple iterative procedure is presented for the
subsequent feature-space partitioning of the data.
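A minimal sketch of this idea (my reading of the abstract, not the
paper's exact procedure): the dominant eigenvalues of the kernel
matrix suggest the number of clusters, and a simple kernel k-means,
computing squared feature-space distances to cluster means entirely
through kernel evaluations, then partitions the data. The threshold
parameter and function names are illustrative assumptions.

import numpy as np

def estimate_k(K, threshold=0.95):
    # number of leading eigenvalues capturing the chosen energy fraction
    lam = np.sort(np.linalg.eigvalsh(K))[::-1]
    return int(np.searchsorted(np.cumsum(lam) / lam.sum(), threshold) + 1)

def kernel_kmeans(K, k, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    N = K.shape[0]
    labels = rng.integers(k, size=N)
    for _ in range(n_iters):
        dist = np.empty((N, k))
        for c in range(k):
            idx = labels == c
            if not idx.any():
                dist[:, c] = np.inf
                continue
            # squared feature-space distance to the mean of cluster c,
            # expressed purely in kernel evaluations
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        labels = dist.argmin(axis=1)
    return labels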


