Preprints Available

Zoubin Ghahramani Zoubin at gatsby.ucl.ac.uk
Thu Jan 20 13:28:17 EST 2000


The following 8 preprints from the Gatsby Computational Neuroscience
Unit are now available on the web. These papers will appear in the
Proceedings of NIPS 99 (Advances in Neural Information Processing
Systems 12, edited by S. A. Solla, T. K. Leen, and K.-R. Müller, MIT
Press).
				Zoubin Ghahramani
				Gatsby Computational Neuroscience Unit
				http://www.gatsby.ucl.ac.uk
				University College London

----------------------------------------------------------------------
Author:	Hagai Attias
Title:	A Variational Bayesian Framework for Graphical Models
URL:	http://www.gatsby.ucl.ac.uk/~hagai/nips99vb.ps

----------------------------------------------------------------------
Author: Hagai Attias
Title:	Independent Factor Analysis with Temporally Structured Sources
URL:	http://www.gatsby.ucl.ac.uk/~hagai/nips99dfa.ps

----------------------------------------------------------------------
Authors: Zoubin Ghahramani and Matthew J Beal
Title:	Variational Inference for Bayesian Mixtures of Factor Analysers
URL:	http://www.gatsby.ucl.ac.uk/~zoubin/papers/nips99.ps.gz
	http://www.gatsby.ucl.ac.uk/~zoubin/papers/nips99.pdf

----------------------------------------------------------------------
Author: Geoffrey E. Hinton and Andrew D. Brown
Title:	Spiking Boltzmann Machines
URL:	http://www.gatsby.ucl.ac.uk/~andy/papers/nips99_sbm.ps.gz

----------------------------------------------------------------------
Authors: Geoffrey E. Hinton, Zoubin Ghahramani and Yee Whye Teh
Title:	Learning to Parse Images
URL:	http://www.gatsby.ucl.ac.uk/~ywteh/crednets

----------------------------------------------------------------------
Author: Zhaoping Li
Title:	Can V1 mechanisms account for figure-ground and medial axis effects?
URL:	http://www.gatsby.ucl.ac.uk/~zhaoping/prints/nips99abstract.html

----------------------------------------------------------------------
Author: Sam Roweis
Title:  Constrained Hidden Markov Models
URL:	http://www.gatsby.ucl.ac.uk/~roweis/papers/sohmm.ps.gz

----------------------------------------------------------------------
Author:	Brian Sallans
Title:  Learning Factored Representations for Partially Observable
        Markov Decision Processes
URL:	PS:        http://www.gatsby.ucl.ac.uk/~sallans/papers/nips99.ps
	gzip'd PS: http://www.gatsby.ucl.ac.uk/~sallans/papers/nips99.ps.gz
	PDF:       http://www.gatsby.ucl.ac.uk/~sallans/papers/nips99.pdf

======================================================================
ABSTRACTS:
======================================================================
Author: Hagai Attias

Title:	A Variational Bayesian Framework for Graphical Models

URL:	http://www.gatsby.ucl.ac.uk/~hagai/nips99vb.ps
----------------------------------------------------------------------
Author: Hagai Attias

Title:	Independent Factor Analysis with Temporally Structured Sources

URL:	http://www.gatsby.ucl.ac.uk/~hagai/nips99dfa.ps

----------------------------------------------------------------------
Authors: Zoubin Ghahramani and Matthew J Beal

Title:	Variational Inference for Bayesian Mixtures of Factor Analysers

Abstract:

We present an algorithm that infers the model structure of a mixture
of factor analysers using an efficient and deterministic variational
approximation to full Bayesian integration over model parameters. This
procedure can automatically determine the optimal number of components
and the local dimensionality of each component (i.e., the number of
factors in each factor analyser). Alternatively, it can be used to
infer posterior distributions over the number of components and
dimensionalities.  Since all parameters are integrated out, the method
is not prone to overfitting.  Using a stochastic procedure for
adding components, it is possible to perform the variational
optimisation incrementally and to avoid local maxima. Results show
that the method works very well in practice and correctly infers the
number and dimensionality of nontrivial synthetic examples.

By importance sampling from the variational approximation we show how
to obtain unbiased estimates of the true evidence, the exact
predictive density, and the KL divergence between the variational
posterior and the true posterior, not only in this model but for
variational approximations in general.

URL:	http://www.gatsby.ucl.ac.uk/~zoubin/papers/nips99.ps.gz
	http://www.gatsby.ucl.ac.uk/~zoubin/papers/nips99.pdf
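
The importance-sampling evidence estimator mentioned in the abstract can be
sketched on a toy conjugate model where the exact evidence is known in
closed form (the model, the data point, and the variational parameters below
are all illustrative assumptions, not the paper's mixture model):

```python
import numpy as np

def normal_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# Toy model: prior theta ~ N(0, 1), likelihood x | theta ~ N(theta, 1),
# so the exact evidence is p(x) = N(x; 0, 2) and the estimator can be checked.
rng = np.random.default_rng(0)
x = 1.5
exact_evidence = normal_pdf(x, 0.0, np.sqrt(2.0))

# A deliberately imperfect "variational" posterior q(theta) = N(m, s^2).
m, s = 0.7, 0.9
theta = rng.normal(m, s, size=200_000)

# Importance sampling from q gives an unbiased estimate of the evidence:
#   p(x) = E_q[ p(x | theta) p(theta) / q(theta) ]
weights = (normal_pdf(x, theta, 1.0) * normal_pdf(theta, 0.0, 1.0)
           / normal_pdf(theta, m, s))
estimate = weights.mean()

print(exact_evidence, estimate)
```

The estimate is unbiased for any q with sufficient support; the closer q is
to the true posterior, the lower the variance of the weights.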
	
----------------------------------------------------------------------
Authors: Geoffrey E. Hinton and Andrew D. Brown

Title: Spiking Boltzmann Machines

Abstract:

We first show how to represent sharp posterior probability
distributions using real-valued coefficients on broadly-tuned basis
functions.  Then we show how the precise times of spikes can be used
to convey the real-valued coefficients on the basis functions quickly
and accurately.  Finally, we describe a simple simulation in which
spiking neurons learn to model an image sequence by fitting a dynamic
generative model.

URL:	http://www.gatsby.ucl.ac.uk/~andy/papers/nips99_sbm.ps.gz
----------------------------------------------------------------------
Authors:  Geoffrey E. Hinton, Zoubin Ghahramani and Yee Whye Teh

Title:	  Learning to Parse Images

Abstract:

We describe a class of probabilistic models that we call credibility
networks.  Using parse trees as internal representations of images,
credibility networks are able to perform segmentation and recognition
simultaneously, removing the need for ad hoc segmentation heuristics. 
We obtained promising results on the problem of segmenting
handwritten digits.

URL:	http://www.gatsby.ucl.ac.uk/~ywteh/crednets

----------------------------------------------------------------------
Author: Zhaoping Li

Title: Can V1 mechanisms account for figure-ground and medial axis
effects?

Abstract:

When a visual image consists of a figure against a background, V1
cells are physiologically observed to give higher responses to image
regions corresponding to the figure than to the background. The
medial axis of the figure also induces relatively higher responses
compared to responses to other locations in the figure (except for
the boundary between the figure and the background). Since the
receptive fields of V1 cells are very small compared with the global
scale of the figure-ground and medial axis effects, it has been
suggested that these effects may be caused by feedback from higher
visual areas. I show how these effects can be accounted for by V1
mechanisms when the size of the figure is small or is of a certain
scale. They are a manifestation of the processes of pre-attentive
segmentation which detect and highlight the boundaries between
homogeneous image regions.

URL:	http://www.gatsby.ucl.ac.uk/~zhaoping/prints/nips99abstract.html

----------------------------------------------------------------------
Author: Sam Roweis

Title:  Constrained Hidden Markov Models

Abstract:

By thinking of each state in a hidden Markov model as corresponding to
some spatial region of a fictitious _topology space_, it is possible
to naturally define neighbouring states as those which are connected
in that space. The transition matrix can then be constrained to
allow transitions only between neighbours; this means that all valid state
sequences correspond to connected paths in the topology space.
I show how such _constrained HMMs_ can learn to discover underlying
structure in complex sequences of high dimensional data, and apply them
to the problem of recovering mouth movements from acoustics
in continuous speech.

URL:	http://www.gatsby.ucl.ac.uk/~roweis/papers/sohmm.ps.gz
----------------------------------------------------------------------
Author:  Brian Sallans
         University of Toronto and Gatsby Unit, UCL
         sallans at cs.toronto.edu

Title:  Learning Factored Representations for Partially Observable
        Markov Decision Processes

Abstract:

The problem of reinforcement learning in a non-Markov
environment is explored using a dynamic Bayesian network, where
conditional independence assumptions between random variables are compactly
represented by network parameters.  The parameters are learned on-line, and
approximations are used to perform inference and to compute the optimal value
function.  The relative effects of inference and value function approximations
on the quality of the final policy are investigated, by learning to solve a
moderately difficult driving task.  The two value function approximations,
linear and quadratic, were found to perform similarly, but the quadratic
model was more sensitive to initialization.  Both performed below the level
of human performance on the task.  The dynamic Bayesian network performed
comparably to a model using a localist hidden state representation, while
requiring exponentially fewer parameters.

URL:  PS:        http://www.gatsby.ucl.ac.uk/~sallans/papers/nips99.ps
      gzip'd PS: http://www.gatsby.ucl.ac.uk/~sallans/papers/nips99.ps.gz
      PDF:       http://www.gatsby.ucl.ac.uk/~sallans/papers/nips99.pdf
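
The "exponentially fewer parameters" claim in the abstract can be made
concrete with a back-of-the-envelope count; the sizes n and k below are
assumptions for illustration:

```python
# A localist representation enumerates all 2**n joint states, while a
# factored DBN stores one small conditional table per state variable.
n = 10  # binary state variables
k = 2   # parents per variable in the DBN (assumed)

localist_params = 2**n * (2**n - 1)  # full joint transition table, rows normalised
factored_params = n * 2**k           # one CPT entry per parent configuration

print(localist_params, factored_params)
```

The localist count grows exponentially in n, while the factored count grows
only linearly in n (and exponentially in the much smaller fan-in k).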
----------------------------------------------------------------------

