Fault Tolerance in ANNs - thesis

George Bolt george at psychmips.york.ac.uk
Thu Mar 4 11:19:37 EST 1993


Thesis Title    "Fault Tolerance in Artificial Neural Networks"
------------
                          by George Bolt, University of York


    Available via ftp, instructions at end of abstract.


Abstract:

This thesis examined the resilience of artificial neural networks to the
effects of faults. In particular, it addressed the question of whether neural
networks are inherently fault tolerant. Neural networks were visualised at
an abstract functional level rather than a physical implementation level to
allow their computational fault tolerance to be assessed.

This high-level approach required a methodology to be developed for the
construction of fault models. Instead of abstracting the effects of physical
defects, the system itself was abstracted and fault modes extracted from this
description. Requirements for suitable measures to assess a neural network's
reliability in the presence of faults were given, and general measures
constructed. Also, simulation frameworks were evolved which could allow
comparative studies to be made between different architectures and models.

It was found that a major influence on the reliability of neural networks is
the uniform distribution of information. Without this property, critical
faults may cause failure for certain regions of input space. This led to the
development of new techniques which ensure uniform storage.

It was shown that the basic perceptron unit possesses a degree of fault
tolerance related to the characteristics of its input data. This implied that
complex perceptron-based neural networks can be inherently fault tolerant
given suitable training algorithms. However, it was then shown that back-error
propagation for multi-layer perceptron networks (MLPs) does not produce a
suitable weight configuration.
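To make the kind of assessment concrete, here is a minimal sketch (my own
illustration, not code from the thesis) that trains a single perceptron on
linearly separable data and then measures the accuracy lost under each
possible single stuck-at-zero weight fault. The data set, training rule, and
fault model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearly separable two-class data in 8 dimensions (synthetic, for illustration)
n, d = 200, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.where(X @ w_true > 0, 1, -1)

# Classic perceptron learning rule
w = np.zeros(d)
for _ in range(50):
    for xi, yi in zip(X, y):
        if yi * (xi @ w) <= 0:
            w += yi * xi

def accuracy(w):
    return float(np.mean(np.where(X @ w > 0, 1, -1) == y))

base = accuracy(w)

# Fault model (an assumption): each weight in turn is stuck at zero;
# record how much classification accuracy drops under each single fault.
drops = []
for i in range(d):
    wf = w.copy()
    wf[i] = 0.0
    drops.append(base - accuracy(wf))

print(f"fault-free accuracy {base:.2f}, worst single-fault drop {max(drops):.2f}")
```

With redundant, well-spread weights the worst single-fault drop stays small;
a solution that concentrates its computation in one or two weights shows a
large drop, which is the sense in which fault tolerance can be measured.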

A technique involving the injection of transient faults during back-error
propagation training of MLPs was studied. The computational factor
responsible for the resulting MLPs' resilience to faults was then identified.
This led to a much simpler construction method which does not involve lengthy
training times. It was then shown why the conventional back-error propagation
algorithm does not produce fault tolerant MLPs.

It was concluded that a potential for inherent fault tolerance does exist in
neural network architectures, but it is not exploited by current training
algorithms.



$ ftp minster.york.ac.uk
Connected to minster.york.ac.uk.
220 minster.york.ac.uk FTP server (York Tue Aug 25 11:09:10 BST 1992).
Name (minster.york.ac.uk:root): anonymous
331 Guest login ok, send email address as password.
Password:     < insert your email address here >
230 Guest login ok, access restrictions apply.
ftp> cd reports
250 CWD command successful.
ftp> binary
200 Type set to I.
ftp> get YCST-93.zoo
200 PORT command successful.
150 Opening BINARY mode data connection for YCST-93.zoo (1041762 bytes).
226 Transfer complete.
local: YCST-93.zoo remote: YCST-93.zoo
1041762 bytes received in 19 seconds (55 Kbytes/s)
ftp> quit
221 Goodbye
$ zoo -extract YCST-93.zoo *
$ printout

A version of "zoo" compiled for Sun 3's is also available in this directory,
just enter command "get zoo" before quitting ftp.

If you have any problems, please contact me via email.

- George Bolt
email: george at psychmips.york.ac.uk
smail: Dept. of Psychology
       University of York
       Heslington, York
       YO1 5DD    U.K.
tel: +904-433155
fax: +904-433181

