Robustness?

G.BUGMANN@OAK.CC.KCL.AC.UK UDAH025 at OAK.CC.KCL.AC.UK
Tue Aug 4 10:58:00 EDT 1992


Dear Connectionists,

A widespread belief is that neural networks are robust 
against the loss of neurons. This is possibly true for 
large Hopfield nets but is certainly wrong for multilayer 
networks trained with backpropagation, a fact mentioned 
by several authors (a list of references can be found in 
our paper, described below). I wonder where this whole 
idea of robustness comes from?

It is possibly based on the following two beliefs:
1) Millions of neurons in our brain die each day. 
2) Artificial neural networks have the same properties as
   biological neural networks.
As we are apparently not affected by the loss of neurons, we 
are forced to conclude that artificial neural networks should
also not be affected. 

However, belief 2 is difficult to confirm because we do not
really know how the brain operates. As for belief 1, neuronal 
death is well documented during development, but I could 
not find a publication covering adulthood. 

Does anyone know of any publications supporting belief 1?

Thank you in advance for your reply.

Guido Bugmann

----------------------------------------------------------

The following paper will appear in the proceedings of ICANN'92
published as "Artificial Neural Networks II" by Elsevier.

"Direct approaches to Improving the Robustness of 
 Multilayer Neural Networks"

G. Bugmann, P. Sojka, M. Reiss, M. Plumbley and J. G. Taylor

This paper describes two methods to improve the robustness
of multilayer NNs (robustness is defined as the error induced
by the loss of a hidden node). 
   The first method consists of including a "robustness error" 
term in the error function used in backpropagation, in the 
hope that this would force the network to converge to a more 
robust configuration.
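The message gives no formulas, so the following is only a minimal sketch of what such a "robustness error" term might look like: a small sigmoid net (biases omitted for brevity), with the robustness error defined as the mean error increase when a single hidden node is silenced. The weighting factor `lam` is an assumption, not from the paper.

```python
import numpy as np

def forward(x, W1, W2, dead=None):
    """Forward pass of a 2-layer sigmoid net; `dead` silences one hidden node."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))        # hidden activations
    if dead is not None:
        h = h.copy()
        h[:, dead] = 0.0                        # simulate the loss of a neuron
    return 1.0 / (1.0 + np.exp(-(h @ W2)))     # output activations

def robustness_error(x, t, W1, W2):
    """Average extra error caused by the loss of one hidden node."""
    n_hidden = W1.shape[1]
    base = np.mean((forward(x, W1, W2) - t) ** 2)
    extra = [np.mean((forward(x, W1, W2, dead=j) - t) ** 2) - base
             for j in range(n_hidden)]
    return np.mean(extra)

# XOR training set with 10 hidden nodes, as in the paper's test case
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 10))
W2 = rng.normal(size=(10, 1))

lam = 0.1  # assumed weighting of the robustness term
total = (np.mean((forward(x, W1, W2) - t) ** 2)
         + lam * robustness_error(x, t, W1, W2))
```

Backpropagation would then be run on `total` rather than on the functional error alone.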
   A typical test was to train a network with 10 hidden nodes
for the XOR function. As only 2 hidden nodes are actually 
necessary, the most robust configuration consists of 
two groups of 5 hidden neurons sharing the same function. 
Although the modified error function leads to a more robust 
network, it increases the traditional functional error and 
does not converge to the most robust configuration.
   The second method is more direct. It consists of periodically 
testing the robustness of the net during normal 
backpropagation and then duplicating the hidden node whose 
loss would be most damaging to the net. For that duplication
we reuse a pruned hidden node, so that the total number of
hidden nodes remains unchanged. 
   This is a very effective technique. It converges to the 
optimal 2 x 5 configuration and does not use much extra 
computation time. The accuracy of the net is not affected 
because training goes on between the "pruning-duplication" 
operations. By using this technique as a complement to the 
classical pruning techniques, robustness and generalisation 
can be improved at the same time.
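As a rough illustration of one pruning-duplication operation (the exact node-selection and weight-splitting rules are assumptions here, since the message does not give them): measure the damage of losing each hidden node, prune the least damaging one, and copy the most critical node into its slot, halving the outgoing weight so the duplicated pair reproduces the original contribution.

```python
import numpy as np

def forward(x, W1, W2, dead=None):
    """Forward pass of a 2-layer sigmoid net; `dead` silences one hidden node."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))
    if dead is not None:
        h = h.copy()
        h[:, dead] = 0.0
    return 1.0 / (1.0 + np.exp(-(h @ W2)))

def prune_duplicate(x, t, W1, W2):
    """One pruning-duplication step; hidden-layer size stays unchanged."""
    n_hidden = W1.shape[1]
    damage = [np.mean((forward(x, W1, W2, dead=j) - t) ** 2)
              for j in range(n_hidden)]
    critical = int(np.argmax(damage))   # node whose loss is most damaging
    spare = int(np.argmin(damage))      # least useful node: prune and reuse
    W1, W2 = W1.copy(), W2.copy()
    W1[:, spare] = W1[:, critical]      # duplicate incoming weights
    W2[critical] /= 2.0                 # halve outgoing weight so the pair
    W2[spare] = W2[critical]            # reproduces the original contribution
    return W1, W2

# XOR training set with 10 hidden nodes, as in the paper's test case
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 10))
W2 = rng.normal(size=(10, 1))

W1, W2 = prune_duplicate(x, t, W1, W2)
err_after = np.mean((forward(x, W1, W2) - t) ** 2)
```

Interleaving this step with ordinary backpropagation, as the paper describes, lets training absorb the small functional disturbance each operation causes.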

A preprint of the paper can be obtained from:
----------------------------------------------------------
Guido Bugmann
Centre for Neural Networks
King's College London
Strand
London WC2R 2LS
United Kingdom

Phone (+44) 71 873 2234
FAX (+44) 71 873 2017
email:  G.Bugmann at oak.cc.kcl.ac.uk
-----------------------------------------------------------          
                      


