paper available: Gibbs sampling for HME

Robert Jacobs jacobs at psych.stanford.edu
Tue Nov 7 12:58:05 EST 1995


The following paper is available via anonymous ftp from the neuroprose
archive.  The paper has been accepted for publication in the "Journal
of the American Statistical Association."  The manuscript is 26 pages.
(Unfortunately, hardcopies are not available.)

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/jacobs.hme_gibbs.ps.Z



       Bayesian Inference in Mixtures-of-Experts and Hierarchical
  Mixtures-of-Experts Models With an Application to Speech Recognition

          Fengchun Peng, Robert A. Jacobs, and Martin A. Tanner

 Machine classification of acoustic waveforms as speech events is
 often difficult due to context-dependencies.  A vowel recognition
 task with multiple speakers is studied in this paper using a class
 of modular and hierarchical systems referred to as
 mixtures-of-experts and hierarchical mixtures-of-experts models.
 The statistical model underlying the systems is a mixture model in
 which both the mixture coefficients and the mixture components are
 generalized linear models.  A full Bayesian approach is used as the
 basis for inference and prediction.  Computations are performed using
 Markov chain Monte Carlo methods.  A key benefit of this approach
 is the ability to obtain a sample from the posterior distribution
 of any functional of the parameters of the given model.  In this
 way, more information is obtained than is provided by a point estimate.
 Also avoided is the need to rely on a normal approximation to the
 posterior as the basis of inference.  This is particularly important
 in cases where the posterior is skewed or multimodal.  Comparisons
 between a hierarchical mixtures-of-experts model and other pattern
 classification systems on the vowel recognition task are reported.
 The results indicate that this model showed good classification
 performance and offered the additional benefit of allowing the degree
 of certainty in its classification predictions to be assessed.
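
For readers who want a concrete picture of the model structure and the
sampling-based inference described in the abstract, the sketch below (Python
with NumPy, not the authors' code) fits a two-expert mixture with
logistic-regression experts and a logistic gate to synthetic one-dimensional
binary data.  The paper performs Gibbs sampling; as a simple stand-in, this
sketch uses a random-walk Metropolis sampler and then evaluates a functional
of the parameters (the predictive probability at a new input) on each
posterior draw.  All names, priors, and settings here are illustrative
assumptions rather than the configuration used in the paper.

# Minimal sketch (assumptions noted above): two-expert mixture-of-experts
# for binary classification, logistic-regression experts, logistic gate.
# The paper uses Gibbs sampling; here a random-walk Metropolis sampler is
# used as a simple stand-in to draw posterior samples.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: 1-D inputs, binary labels (illustrative only).
n = 200
x = rng.uniform(-3, 3, size=n)
true_p = np.where(x < 0, sigmoid(2 + 2 * x), sigmoid(-1 + 1.5 * x))
y = rng.binomial(1, true_p)

# theta = (gate intercept, gate slope, a1, b1, a2, b2)
def log_likelihood(theta):
    c, d, a1, b1, a2, b2 = theta
    g = sigmoid(c + d * x)            # P(expert 1 | x), the gate
    p1 = sigmoid(a1 + b1 * x)         # expert 1: P(y=1 | x)
    p2 = sigmoid(a2 + b2 * x)         # expert 2: P(y=1 | x)
    p = np.clip(g * p1 + (1 - g) * p2, 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def log_prior(theta):
    return -0.5 * np.sum(theta ** 2) / 25.0   # N(0, 5^2) on each parameter

def log_posterior(theta):
    return log_likelihood(theta) + log_prior(theta)

# Random-walk Metropolis over all six parameters.
theta = np.zeros(6)
lp = log_posterior(theta)
samples = []
for it in range(20000):
    prop = theta + 0.1 * rng.standard_normal(6)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it >= 5000 and it % 10 == 0:           # burn-in, thinning
        samples.append(theta.copy())
samples = np.array(samples)

# Posterior distribution of a functional of the parameters:
# the predictive probability P(y=1 | x* = 0.5).
x_new = 0.5
c, d, a1, b1, a2, b2 = samples.T
g = sigmoid(c + d * x_new)
pred = g * sigmoid(a1 + b1 * x_new) + (1 - g) * sigmoid(a2 + b2 * x_new)
print("posterior mean of P(y=1 | x*=0.5):", pred.mean())
print("95% credible interval:", np.percentile(pred, [2.5, 97.5]))

Because every posterior draw of the parameters yields a draw of the
predictive probability, the sketch reports not just a point estimate but a
credible interval, which is the kind of certainty assessment the abstract
highlights.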


