Adding noise to training data

Tom English english at sun1.cs.ttu.edu
Thu Nov 7 16:29:00 EST 1991


With regard to the relationship between Parzen estimation and training
with noise added to the samples, John Hampshire writes:

>  ... but the connectionist model THEN goes on to try to model
>  this new PDF in its connections and ITS set of basis functions.
>  This is what seems desperate to me....

In a sense, it IS desperate.  But an important problem for direct-form
implementations of Parzen estimators (e.g., Specht 1990) is their
storage requirement: every training sample must be retained as a
kernel center.  Adding noise to the training samples and training by
back-propagation may be interpreted as a time-expensive approach to
obtaining a space-economical Parzen estimator.  (I'm assuming that the
net requires less memory than the direct-form implementation.)
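
For concreteness, here is a minimal sketch of the two approaches in
present-day terms (Python with NumPy; the kernel width sigma and the
number of noisy copies are illustrative choices of mine, not values
from Specht's paper).  The direct form must keep every sample as a
kernel center, while the noise-adding approach merely replicates and
jitters the samples before handing them to back-propagation:

    import numpy as np

    def parzen_estimate(x, samples, sigma):
        # Direct-form Parzen estimate of the density at x, using
        # Gaussian kernels.  Storage grows linearly with the number
        # of samples: every sample is retained as a kernel center.
        d = samples.shape[1]
        sq_dists = np.sum((samples - x) ** 2, axis=1)
        norm = (2.0 * np.pi * sigma ** 2) ** (d / 2.0)
        return np.mean(np.exp(-sq_dists / (2.0 * sigma ** 2))) / norm

    def jittered_training_set(samples, sigma, copies=10, rng=None):
        # Replicate each sample `copies` times with Gaussian noise of
        # scale sigma.  Training on the jittered set is, implicitly,
        # training on the kernel-smoothed density above.
        rng = np.random.default_rng() if rng is None else rng
        noise = sigma * rng.standard_normal((copies,) + samples.shape)
        return (samples[None, :, :] + noise).reshape(-1, samples.shape[1])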

Of course, we don't know in advance whether a given network is equipped
to realize the Parzen estimator.  I suspect that someone could produce
a case in which the "universal approximator" architecture (Hornik,
Stinchcombe, and White 1989) would achieve a reasonable approximation
only by using more memory than the direct-form implementation.
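
To make the memory comparison concrete, here is a back-of-the-envelope
accounting (my own, with illustrative numbers; neither paper gives
these figures).  The direct form stores all N*d sample coordinates,
while a one-hidden-layer net with a scalar output stores roughly
h*(d+2)+1 weights, so the net wins only when the training set is large
relative to the hidden layer needed for a good fit:

    def direct_form_storage(n_samples, dim):
        # Floats kept by the direct-form Parzen estimator:
        # one d-dimensional kernel center per training sample.
        return n_samples * dim

    def mlp_storage(dim, hidden):
        # Floats kept by a one-hidden-layer net with scalar output:
        # input-to-hidden weights and biases, plus output weights
        # and bias.
        return hidden * (dim + 1) + (hidden + 1)

    # 10,000 samples in 8 dimensions: 80,000 floats for the direct
    # form versus 501 for a 50-unit hidden layer, unless a decent
    # approximation happens to demand far more hidden units.
    print(direct_form_storage(10000, 8), mlp_storage(8, 50))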

My thanks to John for (accidentally) posting some interesting comments.

--Tom English

Specht, D.  1990.  Probabilistic neural networks and the polynomial
Adaline as complementary techniques for classification.  IEEE
Transactions on Neural Networks 1 (1), pp. 111-121.

Hornik, K., Stinchcombe, M., and White, H.  1989.  Multilayer
feedforward networks are universal approximators.  Neural Networks 2,
pp. 359-366.

