old paper is now available via ftp ...

thanasis kehagias ST401843%BROWNVM.BITNET at vma.CC.CMU.EDU
Sun Sep 17 14:30:56 EDT 1989




The following OLD paper is now available by anonymous FTP.  To get a copy,
please "ftp" to cheops.cis.ohio-state.edu (128.146.8.62), "cd" to the
pub/neuroprose directory, and "get" the file kehagias.hmm0289.tex.
Please use your own version of LaTeX to print it out.

            OPTIMAL CONTROL FOR TRAINING:
            THE MISSING LINK BETWEEN
            HIDDEN MARKOV MODELS
            AND CONNECTIONIST NETWORKS

                  ABSTRACT


For every Hidden Markov Model there is a set of "forward"
probabilities that must be computed for both the recognition and the
training problems. These probabilities are computed recursively, and
hence the computation can be performed by a multistage, feedforward
network that we will call the Hidden Markov Model Net (HMMN).  This
network has exactly the same architecture as the standard
Connectionist Network (CN). Furthermore, training a Hidden Markov
Model is equivalent to optimizing a function of the HMMN, just as
training a CN is equivalent to optimizing a function of the CN.  Due
to the multistage architecture, both problems can be seen as Optimal
Control problems. By applying standard Optimal Control techniques we
find that in both problems certain backpropagated quantities (backward
probabilities for the HMMN, backpropagated errors for the CN) are of
crucial importance to the solution. Thus HMMs and CNs are similar in
both architecture and training.
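[Editor's note, not part of the paper:] The recursions referred to above are the
standard forward and backward probability computations for a discrete HMM. The
short Python sketch below illustrates them under assumed names (transition
matrix A, emission matrix B, initial distribution pi, observation sequence obs);
it is an illustrative sketch, not the paper's formulation.

    import numpy as np

    def forward(A, B, pi, obs):
        # alpha[t, i] = P(o_1..o_t, state_t = i).  Each stage depends only on
        # the previous one, so the computation maps onto a multistage,
        # feedforward network (the "HMMN" of the abstract).
        T, N = len(obs), len(pi)
        alpha = np.zeros((T, N))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        return alpha

    def backward(A, B, obs):
        # beta[t, i] = P(o_{t+1}..o_T | state_t = i).  These are the backward
        # probabilities, the quantities analogous to backpropagated errors in
        # a connectionist network.
        T, N = len(obs), A.shape[0]
        beta = np.ones((T, N))
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        return beta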

