paper available: belief propagation in networks with loops

Yair Weiss yweiss at psyche.mit.edu
Thu Nov 13 13:12:16 EST 1997


The following paper on belief propagation in networks with loops is
available online via:

http://www-bcs.mit.edu/~yweiss/cbcl.ps.gz


This research will be presented at the NIPS*97 workshop on graphical
models. The workshop will also feature talks by P. Smyth and B. Frey
on a similar topic.

Comments are most welcome.

 Yair

--------------------------------------------------------------------------
Title: Belief Propagation and Revision in Networks with Loops 
Author: Yair Weiss
Reference: MIT AI Memo 1616, MIT CBCL Paper 155.

Abstract: 

Local belief propagation rules of the sort proposed by Pearl
(1988) are guaranteed to converge to the optimal beliefs for singly
connected networks. Recently, a number of researchers have empirically
demonstrated good performance of these same algorithms on networks
with loops, but a theoretical understanding of this performance has
yet to be achieved. Here we lay a foundation for an understanding of
belief propagation in networks with loops.  For networks with a single
loop, we derive an analytical relationship between the steady-state
beliefs in the loopy network and the true posterior probability. Using
this relationship, we identify a class of networks for which the MAP
estimate obtained by belief update and by belief revision can be
proven to be optimal (although the beliefs themselves will be
incorrect). We show how nodes can use local information in the
messages they receive to correct the steady-state beliefs.
Furthermore, we prove that
for all networks with a single loop, the MAP estimate obtained by
belief revision at convergence is guaranteed to give the globally
optimal sequence of states. The result is independent of the length of
the cycle and the size of the state space.  For networks with multiple
loops, we introduce the concept of a "balanced network" and show
simulation results comparing belief revision and belief update in
such networks. We show that the Turbo code structure is balanced, and
we present simulations on a toy Turbo code problem indicating that
the decoding obtained by belief revision at convergence is
significantly more likely to be correct.
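
As a quick, informal illustration of the setting (a sketch of my own,
not code from the paper), the snippet below runs Pearl-style
sum-product belief propagation on a small binary network with a
single loop and compares the steady-state beliefs with the exact
marginals obtained by brute-force enumeration. The network size,
potentials, and random seed are arbitrary choices for the example;
replacing the sum in the message update with a max would give belief
revision (max-product), whose MAP assignment the paper proves optimal
for single-loop networks.

    import itertools
    import numpy as np

    n = 4                                 # nodes 0..3 arranged in a single loop
    rng = np.random.default_rng(0)

    # local evidence phi[i][x_i] and pairwise potentials psi[(i, j)][x_i, x_j]
    phi = rng.uniform(0.5, 2.0, size=(n, 2))
    psi = {}
    for i in range(n):
        j = (i + 1) % n
        p = rng.uniform(0.5, 2.0, size=(2, 2))
        psi[(i, j)] = p
        psi[(j, i)] = p.T                 # same potential, reverse direction

    def neighbors(i):
        return [(i - 1) % n, (i + 1) % n]

    # m[(i, j)][x_j]: message from node i to neighbor j, initialized uniformly
    m = {(i, j): np.ones(2) for i in range(n) for j in neighbors(i)}

    for _ in range(100):                  # synchronous updates to steady state
        new = {}
        for (i, j) in m:
            # product of local evidence and all incoming messages except j's
            h = phi[i].copy()
            for k in neighbors(i):
                if k != j:
                    h *= m[(k, i)]
            msg = psi[(i, j)].T @ h       # sum-product: sum over x_i
            new[(i, j)] = msg / msg.sum() # normalize for numerical stability
        m = new

    # belief at node i: evidence times both incoming messages, normalized
    bel = np.array([phi[i] * m[((i - 1) % n, i)] * m[((i + 1) % n, i)]
                    for i in range(n)])
    bel /= bel.sum(axis=1, keepdims=True)

    # exact marginals by enumerating all 2^n joint configurations
    joint = np.zeros([2] * n)
    for x in itertools.product([0, 1], repeat=n):
        p = np.prod([phi[i][x[i]] for i in range(n)])
        for i in range(n):
            p *= psi[(i, (i + 1) % n)][x[i], x[(i + 1) % n]]
        joint[x] = p
    joint /= joint.sum()
    exact = np.array([joint.sum(axis=tuple(k for k in range(n) if k != i))
                      for i in range(n)])

    print("loopy BP beliefs:\n", bel)
    print("exact marginals:\n", exact)

On typical draws the loopy beliefs come out close to, but not equal
to, the exact marginals, which matches the abstract's point that the
beliefs can be incorrect even when the MAP estimate is right.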

