Summary: NIPS workshop on learning in graphical models

David Heckerman heckerma at microsoft.com
Tue Jan 16 22:34:47 EST 1996


Summary: NIPS 95 Workshop on 
Learning in Bayesian Networks and Other Graphical Models

We discussed the relationships between Bayesian networks, decomposable
models, Markov random fields, Boltzmann machines, hidden Markov
models, stochastic grammars, and feedforward neural networks, exposing
complementary strengths and weaknesses of the various formalisms.  For
example, Bayesian networks are particularly strong in their explicit
representation of probabilistic independencies (the arrows in a
Bayesian network carry a precise conditional-independence semantics),
their full use of Bayesian methods, and their focus on density
estimation.
Neural networks are particularly strong in their ties to approximation
theory, and in their focus on predictive modeling in non-linear
classification and regression contexts.
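
To make the first point concrete: the arrows of a Bayesian network
license a factorization of the joint distribution into one conditional
distribution per node.  Below is a minimal sketch in Python (my own
illustration, not material from the workshop; the rain/sprinkler
network and all numbers are invented) of that factorization for a
three-node network:

    # Illustrative three-node Bayesian network:
    # Rain -> WetGrass <- Sprinkler, so the joint factors as
    # P(r, s, w) = P(r) * P(s) * P(w | r, s), one factor per node.
    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: 0.1, False: 0.9}
    P_wet_given = {  # P(wet=True | rain, sprinkler)
        (True, True): 0.99,
        (True, False): 0.90,
        (False, True): 0.80,
        (False, False): 0.05,
    }

    def joint(rain, sprinkler, wet):
        # Multiply one conditional distribution per node.
        p_w = P_wet_given[(rain, sprinkler)]
        if not wet:
            p_w = 1.0 - p_w
        return P_rain[rain] * P_sprinkler[sprinkler] * p_w

    # Sanity check: the factored joint sums to one over all states.
    states = [(r, s, w) for r in (True, False)
                        for s in (True, False)
                        for w in (True, False)]
    assert abs(sum(joint(*st) for st in states) - 1.0) < 1e-12

Note that the marginal independence of Rain and Sprinkler is read
directly off the graph (no connecting arrow), not inferred from the
numbers.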

Topics discussed included optimization, including the use of
gradient-based methods and EM algorithms; approximation, including
mean-field algorithms and stochastic sampling; representation,
including the roles of "hidden" or "latent" variables in learning;
search methods for model selection and model averaging; and
engineering issues.
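
As a concrete illustration of the EM and latent-variable themes, here
is a short sketch (again my own, assuming a two-component,
one-dimensional Gaussian mixture with synthetic data; nothing here is
from the workshop).  The E-step computes the posterior responsibility
of each component for each point, and the M-step re-estimates the
mixture parameters from those responsibilities:

    import numpy as np

    # Hedged sketch: EM for a 1-D, two-component Gaussian mixture.
    # The hidden variable is the component that generated each point.
    def normal_pdf(x, mu, var):
        return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

    pi = np.array([0.5, 0.5])          # mixing weights
    mu = np.array([-1.0, 1.0])         # component means
    var = np.array([1.0, 1.0])         # component variances
    for _ in range(50):
        # E-step: posterior responsibility of each component per point.
        w = pi * np.stack([normal_pdf(x, mu[k], var[k]) for k in (0, 1)],
                          axis=1)
        r = w / w.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    print("weights", pi, "means", mu, "vars", var)

Each iteration provably does not decrease the data log-likelihood,
which is what makes EM attractive for fitting models with hidden
variables.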

A more detailed summary, as well as pointers to slides and related
papers, can be found at

http://www.research.microsoft.com/research/nips95bn/



