CRG-TR-90-6 and 90-7 requests

Carol Plathan carol at ai.toronto.edu
Thu Nov 22 10:15:39 EST 1990


         PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS OR MAILING LISTS
	 **********************************************************
You may order the following two new technical reports by sending your physical
mailing address to carol at ai.toronto.edu. Please do not reply to the whole list.

1. CRG-TR-90-6: Speaker Normalization and Adaptation using Second-Order
   Connectionist Networks by Raymond L. Watrous 

2. CRG-TR-90-7:  Learning Stochastic Feedforward Networks by 
   Radford M. Neal

Abstracts follow:

-------------------------------------------------------------------------------
This technical report is an extended version of the paper that was presented
to the Acoustical Society of America in May, 1990.

		SPEAKER NORMALIZATION AND ADAPTATION USING
                   SECOND-ORDER CONNECTIONIST NETWORKS

     			 Raymond L. Watrous

	            Department of Computer Science, 
                        University of Toronto, 
		       Toronto, Canada M5S 1A4

		            CRG-TR-90-6

A method for speaker-adaptive classification of vowels using connectionist
networks is developed. A normalized representation of the vowels is computed
by a speaker-specific linear transformation of observations of the speech
signal using second-order connectionist network units. Vowel classification
is accomplished by a multilayer network which operates on the normalized
speech data. The network is adapted for a new talker by modifying the
transformation parameters while leaving the classifier fixed. This is
accomplished by back-propagating classification error through the classifier
to the second-order transformation units. This method was evaluated for the
classification of ten vowels for 76 speakers using the first two formant
values of the Peterson/Barney data. A classifier optimized on the normalized
data led to a recognition accuracy of 93.2%. When adapted to each speaker
from various initial transformation parameters, the accuracy improved to 96.6%.
When the speaker-dependent transformation and nonlinear classifier were
simultaneously optimized, a vowel recognition accuracy as high as 97.5%
was obtained. Speaker adaptation using this network also yielded an accuracy
of 96.6%. The results suggest that rapid speaker adaptation resulting in high
classification accuracy can be accomplished by this method.
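The short Python sketch below (not from the report) illustrates the kind of
adaptation scheme described in the abstract: a per-speaker affine transform
feeding a fixed vowel classifier, with only the transform parameters updated by
backpropagated classification error. The softmax classifier, layer sizes,
learning rate, and toy data are illustrative assumptions, not details taken
from the report.

import numpy as np

rng = np.random.default_rng(0)

N_FORMANTS = 2      # F1, F2 observations per vowel token
N_VOWELS = 10       # ten vowel classes, as in the Peterson/Barney data

# Fixed, previously trained classifier (here just a linear softmax layer).
W_clf = rng.normal(scale=0.1, size=(N_VOWELS, N_FORMANTS))
b_clf = np.zeros(N_VOWELS)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adapt_speaker(x_batch, y_batch, steps=100, lr=0.05):
    """Adapt a speaker-specific affine transform (A, b) while the
    classifier weights stay fixed, by gradient descent on cross-entropy."""
    A = np.eye(N_FORMANTS)          # start from the identity transform
    b = np.zeros(N_FORMANTS)
    for _ in range(steps):
        dA = np.zeros_like(A)
        db = np.zeros_like(b)
        for x, y in zip(x_batch, y_batch):
            z = A @ x + b                       # normalized representation
            p = softmax(W_clf @ z + b_clf)      # class probabilities
            err = p.copy()
            err[y] -= 1.0                       # d(cross-entropy)/d(logits)
            dz = W_clf.T @ err                  # error backpropagated through
            dA += np.outer(dz, x)               # the fixed classifier to the
            db += dz                            # transformation parameters
        A -= lr * dA / len(x_batch)
        b -= lr * db / len(x_batch)
    return A, b

# Toy usage: a handful of (F1, F2) observations for one new talker.
x_batch = rng.normal(size=(20, N_FORMANTS))
y_batch = rng.integers(0, N_VOWELS, size=20)
A, b = adapt_speaker(x_batch, y_batch)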
-------------------------------------------------------------------------------

                LEARNING STOCHASTIC FEEDFORWARD NETWORKS

                             Radford M. Neal
     
	            Department of Computer Science, 
                        University of Toronto, 
		       Toronto, Canada M5S 1A4

                              CRG-TR-90-7

Connectionist learning procedures are presented for "sigmoid" and "noisy-OR"
varieties of stochastic feedforward network. These networks are in the same
class as the "belief networks" used in expert systems.  They represent a
probability distribution over a set of visible variables using hidden
variables to express correlations. Conditional probability distributions can
be exhibited by stochastic simulation for use in tasks such as classification.
Learning from empirical data is done via a gradient-ascent method analogous to
that used in Boltzmann machines, but due to the feedforward nature of the
connections, the negative phase of Boltzmann machine learning is unnecessary.
Experimental results show that, as a result, learning in a sigmoid feedforward
network can be faster than in a Boltzmann machine. These networks have other
advantages over Boltzmann machines in pattern classification and decision
making applications, and provide a link between work on connectionist learning
and work on the representation of expert knowledge.
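The following short Python sketch (not from the report) illustrates the
"sigmoid" variety of stochastic feedforward network described above: one layer
of binary hidden units feeding one layer of binary visible units. Hidden states
are sampled from the posterior by Gibbs sampling with the visible units clamped
to data, and the weights are then updated by gradient ascent on the
log-likelihood, with no negative phase required. Layer sizes, learning rate,
and the number of sampling sweeps are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

N_HID, N_VIS = 4, 6
W = rng.normal(scale=0.1, size=(N_HID, N_VIS))   # hidden -> visible weights
b_h = np.zeros(N_HID)                            # hidden-unit biases
b_v = np.zeros(N_VIS)                            # visible-unit biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_lik(v, h):
    """log P(v | h) for binary visible units."""
    p = sigmoid(W.T @ h + b_v)
    return np.log(1e-12 + np.where(v > 0.5, p, 1.0 - p)).sum()

def sample_hidden_posterior(v, sweeps=10):
    """Gibbs-sample a hidden configuration from P(h | v) with v clamped."""
    h = (rng.random(N_HID) < sigmoid(b_h)).astype(float)
    for _ in range(sweeps):
        for j in range(N_HID):
            h[j] = 1.0
            log_on = log_lik(v, h)
            h[j] = 0.0
            log_off = log_lik(v, h)
            # prior log-odds for h_j is b_h[j]; add the likelihood ratio
            p_on = sigmoid(b_h[j] + log_on - log_off)
            h[j] = float(rng.random() < p_on)
    return h

def learn(data, epochs=50, lr=0.1):
    """Gradient ascent on the log-likelihood using posterior samples only."""
    global W, b_h, b_v
    for _ in range(epochs):
        for v in data:
            h = sample_hidden_posterior(v)
            p_v = sigmoid(W.T @ h + b_v)       # P(v_i = 1 | h)
            W += lr * np.outer(h, v - p_v)     # delta rule on sampled states
            b_v += lr * (v - p_v)
            b_h += lr * (h - sigmoid(b_h))

# Toy usage: a few binary visible patterns.
data = rng.integers(0, 2, size=(8, N_VIS)).astype(float)
learn(data)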

-------------------------------------------------------------------------------

