New book: NEURAL NETWORKS FOR SPEECH AND SEQUENCE RECOGNITION
Yoshua Bengio
bengioy at IRO.UMontreal.CA
Fri Jan 12 13:15:16 EST 1996
Learning algorithms for sequential data are crucial
in many fields, such as speech recognition,
time-series prediction, control, and signal monitoring.
This book applies the techniques of artificial neural
networks, in particular recurrent networks, time-delay
networks, convolutional networks, and hidden Markov
models, to real-world examples. Highlights
include the basic elements needed for the practical
application of back-propagation and back-propagation
through time, the integration of domain knowledge with
learning from examples, and hybrids of neural networks
with hidden Markov models.
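As a flavor of the techniques covered, here is a minimal sketch of back-propagation through time (BPTT) for a single-unit recurrent network h_t = tanh(w*h_{t-1} + u*x_t) with a squared-error loss. This is an illustration only, not code from the book; the names (w, u, xs, ys) and the toy data are made up, and the analytical gradient is checked against a numerical one.

```python
import math

def forward(w, u, xs, h0=0.0):
    # Unroll the recurrence h_t = tanh(w*h_{t-1} + u*x_t) over the sequence.
    hs = [h0]
    for x in xs:
        hs.append(math.tanh(w * hs[-1] + u * x))
    return hs

def loss(hs, ys):
    # Squared error summed over time steps (hs[0] is the initial state).
    return sum((h - y) ** 2 for h, y in zip(hs[1:], ys))

def bptt(w, u, xs, ys):
    """Exact gradients of the loss w.r.t. w and u, accumulated backwards in time."""
    hs = forward(w, u, xs)
    dw = du = 0.0
    dh = 0.0  # gradient flowing back into h_{t+1} from future steps
    for t in reversed(range(len(xs))):
        dh += 2.0 * (hs[t + 1] - ys[t])   # direct loss term at step t
        da = dh * (1.0 - hs[t + 1] ** 2)  # back through tanh
        dw += da * hs[t]
        du += da * xs[t]
        dh = da * w                       # pass gradient on to h_t
    return dw, du

# Toy sequence; verify BPTT against a central-difference numerical gradient.
xs, ys = [0.5, -1.0, 0.25], [0.1, -0.2, 0.3]
w, u, eps = 0.7, -0.3, 1e-6
dw, du = bptt(w, u, xs, ys)
num_dw = (loss(forward(w + eps, u, xs), ys)
          - loss(forward(w - eps, u, xs), ys)) / (2 * eps)
print(abs(dw - num_dw) < 1e-4)  # True: analytical and numerical gradients agree
```

The backward loop is the whole point of BPTT: the unrolled network is treated as a deep feed-forward network sharing the same weights at every time step, so the weight gradients are summed over the unrolled copies.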
International Thomson Computer Press
ISBN 1-85032-170-1
This book is available at bookstores near you, or
from the publisher:
In the US: US$52.95
800-842-3636, fax 606-525-7778, or 800-865-5840, fax 606-647-5013
In Canada: CA$73.95
416-752-9100 ext 444, fax 416-752-9646
On the Internet:
http://www.thomson.com/itcp.html
http://www.thomson.com/orderinfo.html
americas-info at list.thomson.com (in the Americas)
row-info at list.thomson.com (rest of the world)
Contents
1 Introduction
1.1 Connectionist Models
1.2 Learning Theory
2 The Back-Propagation Algorithm
2.1 Introduction to Back-Propagation
2.2 Formal Description
2.3 Heuristics to Improve Convergence and Generalization
2.4 Extensions
3 Integrating Domain Knowledge and Learning from Examples
3.1 Automatic Speech Recognition
3.2 Importance of Pre-processing Input Data
3.3 Input Coding
3.4 Input Invariances
3.5 Importance of Architecture Constraints on the Network
3.6 Modularization
3.7 Output Coding
4 Sequence Analysis
4.1 Introduction
4.2 Time Delay Neural Networks
4.3 Recurrent Networks
4.4 BPS
4.5 Supervision of a Recurrent Network Does Not Need to Be Everywhere
4.6 Problems with Training of Recurrent Networks
4.7 Dynamic Programming Post-Processors
4.8 Hidden Markov Models
5 Integrating ANNs with Other Systems
5.1 Advantages and Disadvantages of Current Algorithms for ANNs
5.2 Modularization and Joint Optimization
6 Radial Basis Functions and Local Representation
6.1 Radial Basis Function Networks
6.2 Neurobiological Plausibility
6.3 Relation to Vector Quantization, Clustering, and Semi-Continuous HMMs
6.4 Methodology
6.5 Experiments on Phoneme Recognition with RBFs
7 Density Estimation with a Neural Network
7.1 Relation Between Input PDF and Output PDF
7.2 Density Estimation
7.3 Conclusion
8 Post-Processors Based on Dynamic Programming
8.1 ANN/DP Hybrids
8.2 ANN/HMM Hybrids
8.3 ANN/HMM Hybrid: Phoneme Recognition Experiments
8.4 ANN/HMM Hybrid: Online Handwriting Recognition Experiments
References
Index
--
Yoshua Bengio
Assistant Professor, Dept. of Computer Science and Operations Research (IRO)
Pavillon Andre-Aisenstadt #3339 , Universite de Montreal,
Dept. IRO, CP 6128, Succ. Centre-Ville,
2920 Chemin de la tour, Montreal, Quebec, Canada, H3C 3J7
E-mail: bengioy at iro.umontreal.ca Fax: (514) 343-5834
web: http://www.iro.umontreal.ca/htbin/userinfo/user?bengioy
or http://www.iro.umontreal.ca/labs/neuro/
Tel: (514) 343-6804. Residence: (514) 738-6206