TR on HMMs and graphical models

Padhraic J. Smyth pjs at aig.jpl.nasa.gov
Wed Feb 28 14:18:41 EST 1996


The following technical report is available online at:
ftp://aig.jpl.nasa.gov/pub/smyth/papers/TR-96-03.ps.Z


                  PROBABILISTIC INDEPENDENCE NETWORKS FOR
                     HIDDEN MARKOV PROBABILITY MODELS

Padhraic Smyth [a],  David Heckerman [b], and Michael Jordan [c]
[a] Jet Propulsion Laboratory
 and Department of Information and Computer Science, UCI
[b] Microsoft Research
[c] Department of Brain and Cognitive Sciences, MIT


                              Abstract

Graphical techniques for modeling the dependencies of random
variables have been explored in a variety of areas, including
statistics, statistical physics, artificial intelligence, speech
recognition, image processing, and genetics. Formalisms for manipulating
these models have been developed relatively independently in these
research communities. In this paper we explore hidden Markov models
(HMMs) and related structures within the general framework of
probabilistic independence networks (PINs). The paper contains a
self-contained review of the basic principles of PINs. It is shown that
the well-known forward-backward (F-B) and Viterbi algorithms for HMMs
are special cases of more general inference algorithms for arbitrary PINs.
Furthermore, the existence of inference and estimation algorithms for
more general graphical models provides a set of analysis tools for HMM
practitioners who wish to explore a richer class of HMM structures.
Examples of relatively complex models to handle sensor fusion and
coarticulation in speech recognition are introduced and treated within
the graphical model framework to illustrate the advantages of the general
approach.
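
For readers who want a concrete point of reference for the abstract's
claim that the forward-backward algorithm is a special case of general
PIN inference, the short Python sketch below (not taken from the TR;
the function name, toy parameters, and observation sequence are all
illustrative) implements the standard F-B recursions for a discrete
HMM. The forward and backward passes are exactly the sum-product
messages passed along a chain-structured independence network.

    import numpy as np

    def forward_backward(pi, A, B, obs):
        """Posterior marginals P(state_t | observations) for a discrete HMM.

        pi  : (K,)   initial state distribution
        A   : (K, K) transitions, A[i, j] = P(next state j | state i)
        B   : (K, M) emissions,   B[i, o] = P(symbol o | state i)
        obs : (T,)   observed symbol indices
        """
        T, K = len(obs), len(pi)
        alpha = np.zeros((T, K))   # forward messages:  P(o_1..o_t, s_t)
        beta = np.zeros((T, K))    # backward messages: P(o_{t+1}..o_T | s_t)

        # Forward pass (left-to-right messages on the chain).
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

        # Backward pass (right-to-left messages on the chain).
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

        # Combine messages and normalize to get posterior marginals.
        gamma = alpha * beta
        return gamma / gamma.sum(axis=1, keepdims=True)

    # Toy two-state, two-symbol example with made-up parameters.
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = np.array([[0.9, 0.1], [0.2, 0.8]])
    print(forward_backward(pi, A, B, obs=np.array([0, 1, 0])))

Replacing the sums in the two passes with maximizations yields the
Viterbi recursions, the max-product counterpart mentioned in the
abstract.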

This TR is also available as
Microsoft Research Technical Report TR-96-03 (Microsoft Research, Redmond, WA)
and as
AI Lab Memo AIM-1565 (Massachusetts Institute of Technology, Cambridge, MA).


