new papers (Natural gradient learning, Blind source separation, etc)

Shunichi Amari amari at zoo.riken.go.jp
Sun Apr 13 23:52:07 EDT 1997


The following three papers are now available from my home page:
http://www.bip.riken.go.jp/irl/amari/amari.html

There are many other recent papers to be posted on the same home page.
Some other joint papers are available on the home page of Prof. Cichocki.
I am not good at maintaining my home page, but I have now renewed it.

**********************
1. Natural Gradient Works Efficiently in Learning
     ------submitted to Neural Computation for possible publication

abstract
When a parameter space has a certain underlying structure, the ordinary
gradient of a function does not represent its steepest direction, but the
natural gradient does. Information geometry is used for calculating the
natural gradients in the parameter space of perceptrons, the space of
matrices (for blind source separation), and the space of linear dynamical
systems (for blind source deconvolution). The dynamical behavior of
natural gradient on-line learning is analyzed and proved to be Fisher
efficient, implying that it has asymptotically the same performance as
the optimal batch estimation of parameters. This suggests that the
plateau phenomenon which appears in the backpropagation learning
algorithm of multilayer perceptrons might disappear, or at least become
less serious, when the natural gradient is used.
An adaptive method of updating the learning rate is proposed and analyzed.
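
As a rough illustration of the idea (not code from the paper), a minimal
natural gradient step is sketched below in Python/NumPy: the parameter is
updated as theta <- theta - eta * G^{-1} grad, where G is the Riemannian
metric of the parameter space (the Fisher information matrix for a
statistical model). The Gaussian toy model and all names below are
illustrative assumptions only.

    import numpy as np

    def natural_gradient_step(theta, grad, metric, lr=0.01):
        # theta <- theta - lr * G(theta)^{-1} grad
        return theta - lr * np.linalg.solve(metric, grad)

    # Toy example: on-line maximum likelihood for a 1-D Gaussian with
    # parameters theta = (mu, log sigma).  The Fisher information of
    # N(mu, sigma^2) in these coordinates is diag(1/sigma^2, 2).
    rng = np.random.default_rng(0)
    theta = np.array([0.0, 0.0])                 # initial (mu, log sigma)
    for x in rng.normal(2.0, 3.0, size=5000):
        mu, log_sigma = theta
        sigma2 = np.exp(2.0 * log_sigma)
        # gradient of the negative log-likelihood of a single sample
        grad = np.array([-(x - mu) / sigma2,
                         1.0 - (x - mu) ** 2 / sigma2])
        fisher = np.diag([1.0 / sigma2, 2.0])
        theta = natural_gradient_step(theta, grad, fisher, lr=0.01)
    print(theta[0], np.exp(theta[1]))            # should approach roughly 2.0 and 3.0

In this toy case the ordinary gradient would scale the update of mu by
1/sigma^2, so its effective step size depends on the current parameter
value; the natural gradient removes that dependence.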

**********************
2. Stability Analysis of Adaptive Blind Source Separation
        -------accepted for publication in Neural Networks

abstract
Recently a number of adaptive learning algorithms have been proposed
for blind source separation.  Although the underlying principles and
approaches are different, most of them have very similar forms.  Two
important issues remain to be elucidated further: the statistical
efficiency and the stability of the learning algorithms.  The present
letter analyzes a general form of statistically efficient algorithm and
gives a necessary and sufficient condition for the separating solution
to be a stable equilibrium of a general learning algorithm.  Moreover,
when the separating solution is unstable, a simple method is given for
stabilizing it by modifying the algorithm.
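
As a rough sketch (again, not the paper's code), one common form of the
adaptive rules in question is the equivariant update
W <- W + eta * (I - phi(y) y^T) W with y = W x, where the choice of the
nonlinearity phi determines whether the separating solution is a stable
equilibrium. The tanh nonlinearity, the Laplacian toy sources, and all
names below are illustrative assumptions, not the general form analyzed
in the letter.

    import numpy as np

    def separate_online(x_stream, n, eta=0.005, phi=np.tanh):
        # Adaptive separation: W <- W + eta * (I - phi(y) y^T) W,  y = W x.
        W = np.eye(n)
        for x in x_stream:
            y = W @ x
            W = W + eta * (np.eye(n) - np.outer(phi(y), y)) @ W
        return W

    # Toy usage: two super-Gaussian (Laplacian) sources, random mixing.
    rng = np.random.default_rng(1)
    s = rng.laplace(size=(20000, 2))
    A = rng.normal(size=(2, 2))
    W = separate_online(s @ A.T, n=2)
    print(np.round(W @ A, 2))   # roughly a scaled permutation if separation succeeded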

*************************
3. Superefficiency in Blind Source Separation
        ----------submitted to IEEE Tr. on Signal Processing
abstract
Blind source separation extracts independent component signals from
their mixtures without knowing the mixing coefficients or the probability
distributions of the source signals. It is known that some algorithms
work surprisingly well. The present paper elucidates the superefficiency
of such algorithms based on statistical analysis. It is generally known
from asymptotic statistical theory that the covariance of any two
extracted independent signals converges to $0$ in the order of $1/t$ in
the case of statistical estimation using $t$ examples.
In the case of on-line learning, the theory of on-line dynamics
shows that the covariances converge to $0$ in the order of $\eta$
when the learning rate $\eta$ is fixed to a small constant.

In contrast with the above general properties, the surprising
superefficiency holds in blind source separation under certain
conditions. The superefficiency implies that the covariance decreases in
the order of $1/t^2$ or of $\eta^2$. The present paper uses the natural
gradient learning algorithm and the method of estimating functions to
obtain superefficient procedures for both estimation and on-line
learning. The superefficiency does not imply that the error variances
of the extracted signals decrease in the order of $1/t^2$ or $\eta^2$,
but only that their covariances do.
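
To make the measured quantity concrete: the covariance in question is the
off-diagonal (cross) covariance between pairs of extracted signals, not
the error variance of each individual signal. A small helper of the
following kind (an illustration only, not the paper's definition or
analysis) can be used to track it for the outputs y = W x of whatever
separation procedure is being studied, as the sample size t or the
learning rate eta varies.

    import numpy as np

    def cross_covariances(y):
        # y: array of shape (t, n) holding t samples of n extracted signals.
        # Returns the off-diagonal entries of their sample covariance matrix.
        c = np.cov(y, rowvar=False)
        return c[np.triu_indices_from(c, k=1)]

    # Tiny runnable demo on synthetic, truly independent signals; in practice
    # y would be the output of an estimated demixing matrix, e.g. from the
    # separate_online sketch above.
    rng = np.random.default_rng(2)
    print(cross_covariances(rng.laplace(size=(10000, 3))))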
