Robustness ?

george@minster.york.ac.uk george at minster.york.ac.uk
Wed Aug 5 12:48:40 EDT 1992


> A widespread belief is that neural networks are robust 
> against the loss of neurons. This is possibly true for 
> large Hopfield nets but is certainly wrong for multilayer 
> networks trained with backpropagation, a fact mentioned 
> by several authors (a list of references can be found in 
> our paper described below).

Equally, there are references which say that MLPs are fault
tolerant to unit loss. For instance, a study by Bedworth and
Lowe examined a large (approx. 760-20-15) MLP trained to
distinguish the confusable "ee" sounds in English. They
found that, for a wide range of faults, a degree of fault
tolerance did exist. Typical fault modes were adding noise
to weights, removing units, and setting the output of a unit
to 0.5. These were based on fault modes that might arise from
various implementation design choices (see Bolt 1991 and 1991B).
Other main references are Tanaka 1989 and Lincoln 1989. For a
more complete review, see Bolt 1991C. A more recent report shows
how fault tolerant MLPs (with respect to weight loss) can be
constructed (see Bolt 1992).
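
To make those fault modes concrete, here is a minimal sketch (in
present-day Python/numpy, my own illustration rather than the code
used in any of the cited studies) of injecting the three fault types
into a small two-layer network; the sizes, data and fault magnitudes
are arbitrary assumptions, purely to show the mechanics:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2, X, clamp_unit=None):
    # Hidden layer; optionally clamp one unit's output to 0.5
    # (a "stuck-at-0.5" fault).
    H = sigmoid(X @ W1 + b1)
    if clamp_unit is not None:
        H[:, clamp_unit] = 0.5
    return sigmoid(H @ W2 + b2)

def accuracy(W1, b1, W2, b2, X, y, **kw):
    pred = forward(W1, b1, W2, b2, X, **kw) > 0.5
    return float(np.mean(pred.ravel() == y))

# Toy stand-in for a trained network and test set (random here,
# purely to show the mechanics of the fault modes).
n_in, n_hid = 20, 10
X = rng.standard_normal((200, n_in))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W1 = 0.5 * rng.standard_normal((n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = 0.5 * rng.standard_normal((n_hid, 1));    b2 = np.zeros(1)

base = accuracy(W1, b1, W2, b2, X, y)

# Fault mode 1: additive noise on all weights.
noisy = accuracy(W1 + 0.1 * rng.standard_normal(W1.shape), b1,
                 W2 + 0.1 * rng.standard_normal(W2.shape), b2, X, y)

# Fault mode 2: remove a hidden unit (zero its incoming and
# outgoing weights).
W1_cut, W2_cut = W1.copy(), W2.copy()
W1_cut[:, 3] = 0.0
W2_cut[3, :] = 0.0
removed = accuracy(W1_cut, b1, W2_cut, b2, X, y)

# Fault mode 3: a hidden unit stuck at an output of 0.5.
stuck = accuracy(W1, b1, W2, b2, X, y, clamp_unit=3)

print(f"fault-free {base:.2f}  noisy weights {noisy:.2f}  "
      f"unit removed {removed:.2f}  unit stuck at 0.5 {stuck:.2f}")

The same three fault modes can of course be applied to a properly
trained network and a real test set to reproduce the kind of
degradation curves reported by Bedworth and Lowe.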

One of the problems is that adding extra capacity to an MLP, in the
form of extra hidden units for example, does not necessarily mean
that back-error propagation will use the extra units to form a
redundant representation. As has been noted by many researchers, the
MLP tends to overgeneralise instead. This leads to MLPs with very
little fault tolerance. However, if a constraint is placed on the
network (as in Bugmann 1992), it will be forced to use the extra
capacity as redundancy, which will lead to fault tolerance. See
Neti 1990.
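
As a minimal sketch of one such constraint (my own illustration,
not the actual method of Bugmann 1992 or Neti 1990): silencing a
randomly chosen hidden unit on each training pass forces back-error
propagation to spread the load over the spare units, so that no
single unit becomes critical. The toy task, network size and
learning rate below are arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task plus an over-sized hidden layer: the spare capacity that
# the constraint is meant to turn into redundancy.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
n_hid = 8
W1 = rng.standard_normal((2, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.standard_normal((n_hid, 1)); b2 = np.zeros(1)
lr = 0.5

for step in range(20000):
    # The constraint: silence one randomly chosen hidden unit on
    # this pass.
    mask = np.ones(n_hid)
    mask[rng.integers(n_hid)] = 0.0

    H = sigmoid(X @ W1 + b1) * mask
    out = sigmoid(H @ W2 + b2)

    # Plain back-error propagation of the squared error through the
    # masked network.
    d_out = (out - y) * out * (1.0 - out)
    d_H = (d_out @ W2.T) * H * (1.0 - H)
    W2 -= lr * (H.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_H);   b1 -= lr * d_H.sum(axis=0)

# After training, knocking out any single hidden unit should change
# the outputs only slightly, because the mapping is shared across
# the units.
for dead in range(n_hid):
    m = np.ones(n_hid); m[dead] = 0.0
    out = sigmoid((sigmoid(X @ W1 + b1) * m) @ W2 + b2)
    print("unit", dead, "removed ->", np.round(out.ravel(), 2))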

Another result which has a bearing on this matter is Abu-Mostafa's
claim that ANNs are better suited to random problems (e.g. the "ee"
sounds in Bedworth and Lowe) than to structured problems such as
XOR [Abu-Mostafa 1986]. One can also view the computational nature
of neural networks as mapping more easily onto a problem whose
solution space exhibits adjacency. If a fault occurs and results in
a shift in solution space, the function performed by the ANN changes
only slightly, due to the nature of the problem. This reasoning is
supported by various studies of fault tolerance in ANNs, where for
applications such as XOR little fault tolerance is found, whereas
for "soft" problems fault tolerance is found to be possible.

I qualify this last statement since current training techniques do
not tend to produce fault tolerant networks (see Bolt 1992). However,
it is my belief that the computational structure provided by ANNs
does imply that a degree of inherent fault tolerance is possible.
For instance, the simple perceptron unit (for bipolar representations)
will tolerate up to D connection losses, where D is the Hamming
distance between the two classes it separates. Note, however, that
for binary representations only 0.5D losses will be tolerated. This
degree of fault tolerance is very good, but back-error propagation
does not take advantage of it because of the weight configurations
which it produces. An important feature is that equal loading must
be placed on all units; this removes critical components within a
neural network (e.g. see Bolt 1992B).
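
The perceptron claim can be checked numerically. The sketch below is
my own construction (the prototypes, the weight choice w = p - q and
the thresholds are assumptions made for the example, not taken from
the Bolt reports): it builds a single unit separating two prototypes
at Hamming distance D and finds the worst-case number of connection
losses that each representation survives.

import numpy as np

rng = np.random.default_rng(2)
n = 16

# Two bipolar prototypes differing in D = 8 positions.
p_bip = rng.choice([-1.0, 1.0], size=n)
q_bip = p_bip.copy()
q_bip[rng.choice(n, size=8, replace=False)] *= -1
D = int(np.sum(p_bip != q_bip))

def worst_case_losses(p, q, w, theta):
    # Smallest number of zeroed connections (chosen adversarially)
    # that pushes one of the two prototypes onto the wrong side of
    # the threshold.
    best = len(w) + 1
    for x, sign in ((p, +1.0), (q, -1.0)):
        contrib = np.sort(sign * x * w)[::-1]  # most damaging first
        margin = sign * (x @ w - theta)
        removed = 0.0
        for k, c in enumerate(contrib, start=1):
            if c <= 0:
                break
            removed += c
            if margin - removed <= 0:          # now misclassified
                best = min(best, k)
                break
    return best

# Bipolar coding: w = (p - q)/2, threshold 0 -> margin D per prototype.
w_bip = (p_bip - q_bip) / 2.0
# Binary coding: same patterns recoded as 0/1, w = p - q, midpoint
# threshold -> margin D/2 per prototype.
p_bin, q_bin = (p_bip + 1) / 2, (q_bip + 1) / 2
w_bin = p_bin - q_bin
theta_bin = (p_bin @ w_bin + q_bin @ w_bin) / 2.0

print("D =", D)
print("bipolar breaks after",
      worst_case_losses(p_bip, q_bip, w_bip, 0.0), "lost connections")
print("binary  breaks after",
      worst_case_losses(p_bin, q_bin, w_bin, theta_bin),
      "lost connections")

With D = 8 this prints a breaking point of 8 lost connections for the
bipolar coding but only 4 for the binary coding, in line with the D
versus 0.5D figures above.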


>                              I wonder where this whole 
> idea of robustness comes from ? 
> 
> It is possibly based on the two following beliefs:
> 1) Millions of neurons in our brain die each day. 
> 2) Artificial neural networks have the same properties as
>    biological neural networks.
> As we are apparently not affected by the loss of neurons we 
> are forced to conclude that artificial neural networks should
> also not be affected. 
> 
> However, belief 2 is difficult to confirm because we do not
> really know how the brain operates. As for belief 1, neuronal 
> death is well documented during development but I could 
> not find a publication covering the adult age. 
> 
> Does anyone know of any publications supporting belief 1 ?

The figures that I have heard are in the thousands rather than the
millions... however, I would also like to hear of any publications
supporting this claim.

More interesting are occurrences where severe brain damage is
suffered with little effect. Wood (1983) gives an interesting
study of this.

____________________________________________________________
 George Bolt, Advanced Computer Architecture Group,
 Dept. of Computer Science, University of York, Heslington,
 YORK. YO1 5DD.  UK.               Tel: + [44] (904) 432771

 george at minster.york.ac.uk                       Internet
 george%minster.york.ac.uk at nsfnet-relay.ac.uk    ARPA 
 ..!uknet!minster!george                         UUCP
____________________________________________________________




References:

%T Fault Tolerance in Multi-Layer Perceptrons: a preliminary study
%A M.D. Bedworth
%A D. Lowe
%D July 1988
%I RSRE: Pattern Processing and Machine Intelligence Division
%K Note: RSRE is now the DRA

%T A Study of a High Reliable System against Electric Noises and Element Failures
%A H. Tanaka
%D 1989
%J Proceedings of the 1989 International Symposium on Noise and Clutter Rejection in Radars and Imaging Sensors
%E T. Suzuki
%E H. Ogura
%E S. Fujimura
%P 415-20

%T Synergy of Clustering Multiple Back Propagation Networks
%A W P Lincoln
%A J Skrzypek
%J Proceedings of NIPS-89
%D 1989
%P 650-657

%T Fault Models for Artificial Neural Networks
%A G.R. Bolt                      (Bolt 1991)
%D November 1991
%C Singapore
%J IJCNN-91
%P 1918-1923
%V 3

%T Assessing the Reliability of Artificial Neural Networks
%A G.R. Bolt                            (Bolt 1991B)
%D November 1991
%C Singapore
%J IJCNN-91
%P 578-583
%V 1

%T Investigating Fault Tolerance in Artificial Neural Networks
%A G.R. Bolt                       (Bolt 1991C)
%D March 1991
%O Dept. of Computer Science
%I University of York, Heslington, York   UK
%R YCS 154
%K Neuroprose ftp: bolt.ft_nn.ps.Z

%T Fault Tolerant Multi-Layer Perceptrons
%A G.R. Bolt                            (Bolt 1992)
%A J. Austin
%A G. Morgan
%D August 1992
%O Computer Science Department
%I University of York, UK
%R YCS 180
%K Neuroprose ftp: bolt.ft_mlp.ps.Z

%T Maximally fault-tolerant neural networks: Computational Methods and Generalization
%A C. Neti
%A M.H. Schneider
%A E.D. Young
%X Preprint,
%D June 15, 1990

%T Complexity of random problems
%A Y.S. Abu-Mostafa
%B Complexity in Information Theory
%D 1986
%I Springer-Verlag

%T Implications of simulated lesion experiments for the interpretation of lesions in real nervous systems
%A C. Wood
%B Neural Models of Language Processes
%E M.A. Arbib
%E D. Caplan
%E J.C. Marshall
%I New York: Academic
%D 1983

%T Uniform Tuple Storage in ADAM
%A G. Bolt                              (Bolt 1992B)
%A J. Austin
%A G. Morgan
%J Pattern Recognition Letters
%D 1992
%P 339-344
%V 13



