NIPS*97 workshop on NNs related to graphical models?

Karl Pfleger kpfleger at cs.stanford.edu
Wed Apr 16 05:49:49 EDT 1997


Would you like to see a NIPS*97 workshop on "Neural Network Models Related
to Graphical Probabilistic Models"?

Michael Jordan recently sent out the NIPS*97 Call for Post Conference
Workshop Proposals. I don't feel qualified to organize and run a workshop on
this topic myself, but I believe it is a topic of great interest to many
people. Thus, I'm suggesting it here in the hopes that someone who is
qualified might be inspired to organize such a workshop and draw up a
proposal.

Even if you would only be interested in attending or participating in such a
workshop, send me a quick note anyway (or even an abstract of a potential
submission) and I'll collect the responses to pass on to whoever does
volunteer to organize.

Evidently, Mike Jordan and David Heckerman held a similar and extremely
popular workshop two years ago at NIPS. However, opinions from a number of
people, including Jordan, suggest that there is probably still sufficient
interest for another such workshop.

----------------------------------------------------------------------------
NIPS Workshop idea:

  Neural Network Models Related to Graphical Probabilistic Models

Graphical probabilistic models are currently a hot topic, especially
Bayesian/Belief Networks, particularly in the AI communities. But there
are neural network models that are intimately related to some of these
models and can be used in similar ways. For example, Boltzmann machines are
essentially neural-net versions of Markov Networks, with properties closely
related to probabilistically explicit Markov Networks and to Bayes nets.
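
To make the Markov-network reading of a BM concrete, here is a minimal
sketch (a toy example of my own, with made-up weights, not drawn from any
particular paper) showing that a small Boltzmann machine defines an explicit
joint distribution over binary states, P(s) proportional to exp(-E(s)), just
as a pairwise Markov Network does:

import itertools
import numpy as np

# Toy 3-unit Boltzmann machine: symmetric weights W (zero diagonal) and
# biases b define the energy E(s) = -1/2 s'Ws - b's over binary states s.
W = np.array([[ 0.0,  1.5, -0.5],
              [ 1.5,  0.0,  2.0],
              [-0.5,  2.0,  0.0]])
b = np.array([0.2, -1.0, 0.5])

def energy(s):
    return -0.5 * s @ W @ s - b @ s

# Enumerate all 2^3 states and normalize exp(-E) to get the joint P(s),
# exactly as one would for a small pairwise Markov Network.
states = [np.array(s) for s in itertools.product([0, 1], repeat=3)]
unnorm = np.array([np.exp(-energy(s)) for s in states])
P = unnorm / unnorm.sum()        # divide by the partition function Z

for s, p in zip(states, P):
    print(s, round(float(p), 4))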

Extended example--Some parallels between BMs and BNs:
  - Both represent a joint prob. distribution over a set of random variables.
  - In both, network structure represents conditional independence assumptions
    among the variables, whether by d-separation or local Markov properties.
    Based on Pearl (1988), neither representation is uniformly superior.
  - In both, exact prob. inference is possible but intractable, in general.
  - In both one can do Monte Carlo approx. inference. In both one can use
    Gibbs sampling, and this is exactly what BM settling corresponds to
    (a small sketch of this appears below).
  - Both can be used to learn a joint distribution over visible units.
  - Both can use hidden variables.
  - In both, one can search over network structures to learn the structure as
    well, though this is almost completely unexplored for BMs.
  - Both can represent any joint distribution (the BM may need hidden units).
  - There are even mechanisms for converting from one to the other.
There isn't much literature that draws this parallel clearly; Neal (1991a)
comes closest. Probabilistically explicit Markov Network models, which are
used in the vision and graphics communities and in physics, also share the
above properties.
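
A minimal sketch of the Gibbs-sampling point above (again a toy example of
my own, not code from any of the papers mentioned): stochastically updating
one unit at a time from its conditional distribution, i.e. letting a
Boltzmann machine settle at temperature 1, is exactly Gibbs sampling from
the joint distribution defined by the weights and biases.

import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(s, W, b, rng):
    # Resample each unit from P(s_i = 1 | all other units), which for a BM
    # is sigmoid(W[i].s + b[i]) -- the standard stochastic update rule.
    for i in range(len(s)):
        net = W[i] @ s + b[i]
        p_on = 1.0 / (1.0 + np.exp(-net))
        s[i] = 1 if rng.random() < p_on else 0
    return s

n = 4
W = rng.normal(size=(n, n))
W = (W + W.T) / 2            # weights must be symmetric
np.fill_diagonal(W, 0.0)     # no self-connections
b = rng.normal(size=n)

s = rng.integers(0, 2, size=n)
samples = []
for sweep in range(6000):
    s = gibbs_sweep(s, W, b, rng)
    if sweep >= 1000:        # discard burn-in sweeps
        samples.append(s.copy())

# Empirical marginals from the chain approximate the true marginals of the
# joint distribution defined by (W, b).
print(np.mean(samples, axis=0))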

There are certainly many other such parallels. Particularly salient are
those involving variations on BMs and BNs, as investigated by people like
Radford Neal.

Integrative views like this help increase everyone's understanding.
(Obviously, the workshop is not meant to be a battleground for the
formalisms!) There are also plenty of research issues here and plenty of
suggestions that arise from understanding the parallels.
----------------------------------------------------------------------------

-Karl

----------------------------------------------------------------------------
Karl Pfleger   kpfleger at cs.stanford.edu   http://www.stanford.edu/~kpfleger/
----------------------------------------------------------------------------
