Robustness to what?
Javier Movellan
jm2z+ at andrew.cmu.edu
Mon Aug 5 10:46:36 EDT 1991
Robustness to what?
* Damage?
* Effects of noise in the input?
* Effects of noise in the teacher?
Traditionally in statistics, the robustness of an estimator is understood
as the resistance of its estimates to the effects of a wide variety of
noise distributions.
The key point here is VARIETY. So we may have estimators that behave
very well under Gaussian noise but deteriorate under other types of
noise (non-robust), and estimators that behave acceptably, though
sub-optimally, under very different types of noise (robust). Robust
estimators are advised when the form of the noise is unknown; maximum
likelihood estimators are a good choice when the form of the noise is
known.
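As a quick illustrative sketch of this distinction (my own example, not
from the post): the sample mean is the maximum likelihood estimator of
location under Gaussian noise, while the sample median is a classic
robust alternative. Under heavy-tailed noise (here a Cauchy sample,
assumed as the heavy-tail example) the mean breaks down while the
median stays near the true value of 0:

```python
import random
import statistics

random.seed(0)
n = 1001

# Gaussian noise: the mean is the maximum-likelihood location estimator here.
gaussian = [random.gauss(0, 1) for _ in range(n)]

# Heavy-tailed noise: a standard Cauchy sample, generated as the ratio
# of two independent standard normals. Its mean does not even exist,
# so the sample mean wanders; the median remains a good estimate of 0.
cauchy = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]

for name, sample in [("gaussian", gaussian), ("cauchy", cauchy)]:
    m = statistics.mean(sample)
    med = statistics.median(sample)
    print(f"{name:8s} mean={m:+.3f} median={med:+.3f}")
```

Under Gaussian noise both estimators do well (the median only slightly
less efficiently); under Cauchy noise only the median stays usable.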
In practice, robustness is measured by analyzing how the estimator
behaves under three benchmark noise distributions, representing
light-tail, normal-tail, and heavy-tail conditions.
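The post does not name the three benchmark distributions. One common
concrete choice, assumed here purely for illustration, is a bounded
uniform (light tail), a standard Gaussian (normal tail), and a
contaminated Gaussian (heavy tail). The tail difference is easy to see
in the fraction of samples falling far from the center:

```python
import random

random.seed(1)
n = 10_000

# Three benchmark noise distributions, one per tail condition.
# The specific choices are illustrative assumptions, not from the post.
light = [random.uniform(-1.7, 1.7) for _ in range(n)]   # bounded: light tail
normal = [random.gauss(0, 1) for _ in range(n)]         # normal tail
# Contaminated Gaussian: 90% N(0,1), 10% N(0,10) "outlier" component.
heavy = [random.gauss(0, 1) if random.random() < 0.9 else random.gauss(0, 10)
         for _ in range(n)]

for name, s in [("light", light), ("normal", normal), ("heavy", heavy)]:
    frac_big = sum(abs(x) > 3 for x in s) / n  # mass beyond |x| = 3
    print(f"{name:6s} P(|x|>3) ~ {frac_big:.4f}")
```

A robust estimator should degrade gracefully across all three; a
non-robust one typically looks fine on the first two and fails on the
third.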
Things are a bit more complicated in the neural nets environment, for we
are trying to estimate functions instead of points, and, unlike linear
regression, we have problems with multiple minima.
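To make the connection to training concrete (my own toy example; the
Movellan paper cited below proposes its own error functions, which are
not reproduced here): with a squared-error cost, the gradient grows
linearly with the error, so corrupted teacher signals pull hard on the
weights. A generic robust cost such as absolute error has a bounded
gradient, so outliers pull no harder than ordinary points. A minimal
sketch with a single-weight linear model trained by per-sample gradient
descent:

```python
import random

random.seed(2)

# Tiny 1-D regression: teacher is y = 2x plus small noise,
# with three grossly corrupted teacher signals.
xs = [i / 10 for i in range(-20, 21)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]
for i in (3, 17, 30):  # corrupt a few targets by a large offset
    ys[i] += 25

def fit(grad_of_error, lr=0.01, epochs=500):
    """Gradient descent on a single weight w for the model y_hat = w*x."""
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = w * x - y
            w -= lr * grad_of_error(err) * x
    return w

# Squared error: gradient proportional to the raw error,
# so the three outliers drag the weight away from 2.
w_l2 = fit(lambda e: e)

# Absolute error: gradient saturates at +/-1, so outliers
# contribute no more than ordinary points.
w_l1 = fit(lambda e: 1.0 if e > 0 else -1.0)

print(f"L2 weight: {w_l2:.3f}   L1 weight: {w_l1:.3f}   true: 2.0")
```

The absolute-error fit lands much closer to the true slope; the
squared-error fit is visibly biased by the corrupted targets.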
For statistical theory on robust estimation as applied to linear
regression see:
Li, G. (1985). Robust regression. In Hoaglin, Mosteller, and Tukey
(Eds.), Exploring Data, Tables, Trends, and Shapes. New York: John Wiley.
For application of these ideas to the back-prop environment see:
Movellan, J. (1991). Error functions to improve noise resistance and
generalization in backpropagation networks. In Advances in Neural
Networks. Ablex Publishing Corporation.
Hanson, S. has an earlier paper on the same theme. I believe the paper
is in one of the NIPS proceedings, but I do not have the exact reference
with me.
-Javier