workshop announcement

Michael Jordan jordan at psyche.mit.edu
Tue Oct 3 18:22:02 EDT 1995


This is an announcement of a post-NIPS workshop on 
Learning in Bayesian Belief Networks and Other Graphical
Models.

Bayesian belief networks are probabilistic graphs that
have interesting relationships to neural networks.
Undirected belief networks are closely related to
Boltzmann machines.  Directed belief networks (the
more popular variety) are related to feedforward neural
networks, but have a stronger probabilistic semantics.
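
As a quick sketch of the formalism (the notation here is mine, just
for illustration): a directed belief network over variables
x_1, ..., x_n factors the joint distribution into one local
conditional per node,

   P(x_1, \ldots, x_n) = \prod_{i=1}^{n} P(x_i | pa(x_i)),

where pa(x_i) denotes the parents of node x_i in the graph.  If,
for example, each conditional is a logistic function of a weighted
sum of the parent values, then each node behaves like a stochastic
sigmoidal unit, which is one way to see the connection to
feedforward networks.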

Many interesting probabilistic models, including HMMs,
Kalman filters, mixture models, factor analytic models,
etc., can be viewed as special cases of belief networks.
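
To take one of these examples (again in illustrative notation): an
HMM with hidden states s_1, ..., s_T and observations y_1, ..., y_T
is just the chain-structured directed graph with joint distribution

   P(s_1, \ldots, s_T, y_1, \ldots, y_T)
      = P(s_1) \prod_{t=2}^{T} P(s_t | s_{t-1})
               \prod_{t=1}^{T} P(y_t | s_t).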

In the area of inference (i.e., the calculation of posterior 
probabilities of certain nodes given that other nodes are 
clamped), the research on belief networks is quite mature.
The inference algorithms provide a clean probabilistic framework
for tasks such as inverting a network, calculating posterior
probabilities of hidden nodes, and finding most probable
configurations.
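
In symbols (notation only for illustration): if E denotes the
clamped evidence nodes with observed values e, and H the nodes
being queried, inference amounts to computing

   P(H | E = e) = P(H, E = e) / P(E = e),
   P(E = e) = \sum_{H} P(H, E = e),

where P(H, E = e) is obtained by summing the remaining nodes out of
the joint.  The inference algorithms cited in the bibliography are
about organizing these sums efficiently using the graph structure.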

In the area of learning, there have been interesting developments
in structural learning (deciding which links and which nodes to
include in the graph) and in learning in the presence of hidden
variables.
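
Schematically, learning with hidden variables H means adjusting the
parameters \theta to maximize the marginal likelihood of the
observed data D,

   \log P(D | \theta) = \sum_{c} \log \sum_{H} P(D_c, H | \theta),

where the sum over H becomes an integral for continuous hidden
variables (this formula is generic, not specific to any of the
workshop presentations).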

The organizing committee for the workshop includes:
Wray Buntine, Greg Cooper, Dan Geiger, David Heckerman,
Geoffrey Hinton, Mike Jordan, Steffen Lauritzen,
David Mackay, David Madigan, Radford Neal, Steve Omohundro,
Judea Pearl, Stuart Russell, Peter Spirtes, and Ross Shachter.
Many of these people will be giving presentations at the workshop.

A short bibliography follows for those who might like to read 
up on Bayesian belief networks in anticipation of the workshop.

Mike Jordan

------------------
Short Bibliography
------------------

The list below provides a few useful references, with an 
emphasis on recent review papers, tutorials, and textbooks.
The list is not meant to be comprehensive along any dimension...

Many additional pointers to the literature can be found on 
the Uncertainty in Artificial Intelligence homepage; see
http://www.auai.org.

If I had to pick the two papers I would most recommend to someone
wanting to get up to speed quickly on belief networks, they would
be the Spiegelhalter et al. paper and the Heckerman tutorial.

Mike

-----------------------
A good place to start to learn about the most popular 
algorithm for general inference in belief networks, as well
as some of the basics on learning:

   Spiegelhalter, D. J., Dawid, A. P., Lauritzen, 
   S. L., \& Cowell, R. G. (1993).  Bayesian analysis 
   in expert systems.  {\em Statistical Science, 8}, 
   219-283.

If you want more details on the inference algorithm:

   Lauritzen, S. L., \& Spiegelhalter, D. J. (1988).
   Local computations with probabilities on graphical
   structures and their application to expert systems
   (with discussion).  {\em Journal of the Royal Statistical 
   Society B, 50}, 157-224.

A tutorial on the recent work on learning in belief networks:

   Heckerman, D. (1995).  A tutorial on learning Bayesian networks.
   [available through http://www.auai.org].

If you want more on learning:

   Buntine, W. (1994).  Operations for learning with graphical 
   models.  {\em Journal of Artificial Intelligence Research,
   2}, 159-225.  [available through http://www.auai.org].

A very readable general textbook on belief networks from a statistical 
perspective (focusing on ML estimation and model selection):

   Whittaker, J. (1990).  {\em Graphical Models in Applied 
   Multivariate Statistics}.  New York: John Wiley.

An introductory textbook:

   Neapolitan, R. E. (1990).  {\em Probabilistic Reasoning 
   in Expert Systems}.  New York: John Wiley.

The classical text on belief networks; emphasizes inference 
and AI issues:

   Pearl, J. (1988). {\em Probabilistic Reasoning in 
   Intelligent Systems: Networks of Plausible Inference}.
   San Mateo, CA: Morgan Kaufmann.

A recent paper that unifies (almost) all of the extant
algorithms for inference in belief networks:

   Shachter, R. D., Andersen, S. K., \& Szolovits, P.
   (1994).  Global conditioning for probabilistic inference 
   in belief networks.  {\em Proceedings of the Uncertainty
   in Artificial Intelligence Conference}, 514-522.



