Boolean Models (GSN)
ecdbcf@ukc.ac.uk
Mon Feb 25 12:07:30 EST 1991
Dear Connectionists,
Most people who read this mail will probably be working with
continuous/analogue models. There is, however, a growing interest in
Boolean neuron models, and some readers might be interested to know
that I have recently completed a Ph.D. thesis which deals
with a particular kind of Boolean neuron. Some brief details are given
below, together with some references to more detailed material.
-----------------------------------------------------------------------
Abstract
This thesis is concerned with the investigation of Boolean neural
networks based on a novel RAM-based Goal-Seeking Neuron (GSN). Boolean
neurons are particularly suited to the solution of Boolean or logic
problems such as the recognition and associative recall of binarised
patterns.
One main advantage of Boolean neural networks is the ease with which
they can be implemented in hardware. This can result in very fast
operation. The GSN has been formulated to ensure this implementation
advantage is not lost.
The GSN model operates through the interaction of a number of local
low-level goals and is applicable to practical problems in pattern
recognition with only a single pass of the training data (one-shot
learning).
The thesis explores different architectures for GSNs (feed-forward,
feedback and self-organising) together with different learning rules,
and investigates a wide range of alternative configurations within
these three architectures. Practical results are demonstrated in the
context of a character recognition problem.
-----------------------------------------------------------------------
Highlights of GSNs, Learning Algorithms, Architectures and Main
Contributions
The main advantage of RAM-based neural networks in comparison with
networks based on sum-of-products functions is the ease with which they
can be implemented in hardware. This derives from their essentially
logical rather than continuous nature.
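To give a flavour of the distinction, here is a small illustrative
sketch in Python (the names and representation are for exposition
only): a RAM node is a truth table addressed by its binary inputs,
whereas a conventional neuron thresholds a weighted sum.

    def ram_node(contents, inputs):
        # contents maps each binary input tuple to a stored output bit
        return contents[tuple(inputs)]

    def threshold_node(weights, bias, inputs):
        # conventional sum-of-products unit with a hard threshold
        s = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if s > 0 else 0

    # A 2-input RAM node realises any Boolean function, e.g. XOR,
    # which no single threshold unit can compute:
    xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    print(ram_node(xor_table, [1, 0]))    # -> 1

In hardware the lookup is just a memory read, which is where the speed
advantage comes from.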
The GSN model naturally addresses the main problems associated with
other RAM-based neurons. Specific classes of computational activity
can be realised more appropriately by using a particular goal-seeking
function, and different kinds of goal-seeking function can be devised
to provide a range of suitable behaviours, effectively creating a
family of GSNs.
The main experimental results have demonstrated the viability of the
one-shot learning algorithms: partial pattern association,
quasi-self-organisation, and self-organisation. One-shot learning is
possible only because of the GSN's ability to validate, within a
single presentation, whether a given input pattern can be learnt.
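Schematically, the validate-then-learn step looks like the following
sketch ('u' marks an undefined memory location; the actual
goal-seeking functions in the thesis are richer than this):

    U = 'u'    # undefined (unwritten) memory content

    def can_learn(contents, addr, desired):
        # learning is possible if the addressed location is still
        # undefined or already holds the desired value
        return contents[addr] in (U, desired)

    def learn_one_shot(contents, addr, desired):
        if can_learn(contents, addr, desired):
            contents[addr] = desired
            return True
        return False    # conflict: pattern cannot be stored here

    contents = {a: U for a in [(0, 0), (0, 1), (1, 0), (1, 1)]}
    print(learn_one_shot(contents, (0, 1), 1))    # -> True, one pass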
The partial pattern association and the quasi-self-organising learning
have been applied in feed-forward architectures. These two kinds of
learning have given similar performance, though quasi-self-organising
learning gives slightly better results when the training set is small.
The work reported has established the viability and basic effectiveness
of the GSN concept. The GSN proposal provides a new range of
computational units, learning algorithms, architectures, and new
concepts related to the fundamental processes of computation using
Boolean networks. Further modifications, extensions, and applications
of all of these ideas can be pursued in order to establish Boolean
neural networks fully as a strong candidate for solving Boolean-type
problems. The main contributions, together with the further research
they suggest, are summarised below.
One of the most important contributions of this work is the idea of
flexible local goals in RAM-based neurons, which allows RAM-based
neurons and architectures to be applied to a wider range of problems.
The definition of the goal-seeking functions for all the GSN models
used in the feed-forward, feedback and self-organising architectures
is important because these functions provide local goals which try to
maximise the memory capacity and to improve the recall of correct
output patterns.
Supervised pattern association learning is not the kind of learning
best suited to GSN networks, because it demands multiple presentations
of the training set and causes rapid saturation of the neurons'
contents. Nevertheless, the variety of solutions presented for the
problem of learning conflicts can help to achieve correct learning
with a relatively small number of activations, compared to the
traditional approach of erasing a path without trying to preserve the
maximum number of stored patterns.
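Roughly, the idea is to choose, among the candidate storage paths for
a pattern, the one that overwrites the fewest already-defined
locations (the selection rule below is illustrative, not the exact
algorithm):

    U = 'u'    # undefined memory content, as in the earlier sketch

    def least_destructive(paths, contents):
        # prefer the candidate path that disturbs the fewest
        # previously written cells
        def cost(path):
            return sum(contents[addr] != U for addr in path)
        return min(paths, key=cost)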
The partial pattern association, quasi-self-organising, and
self-organising learning break away from the traditional need for many
thousands of presentations of the training set, providing one-shot
learning instead. This is made possible by the propagation of the
undefined value between the neurons, in conjunction with the local
goal used in the validating state.
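The propagation of the undefined value through a pyramid of 2-input
nodes can be sketched as follows (the rule shown, any undefined input
yielding an undefined output, is a simplification):

    U = 'u'

    def node_output(contents, inputs):
        if U in inputs:
            return U    # undefined inputs keep the node uncommitted
        return contents[tuple(inputs)]

    def pyramid_recall(layers, pattern):
        # layers: one list of per-node content dicts per level; each
        # node reads two adjacent values from the level below
        values = list(pattern)
        for layer in layers:
            values = [node_output(c, values[2*i:2*i + 2])
                      for i, c in enumerate(layer)]
        return values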
Because of the partial coverage area and the limited functionality of
the pyramids, which can make particular patterns impossible to learn,
it is important to allow the desired output patterns to change so that
these classes can still be learnt. The network produces what are
essentially self-desired output patterns: similar to the desired
output patterns, but not necessarily the same. The differences between
the desired and the self-desired output patterns can be observed in
the learning phase by comparing the output values of each pyramid with
the desired output values.
The definitions of the self-desired and the learning probability
recall rules provide a way of sensing these changes in the desired
output patterns, and of achieving the required pattern classification.
The principles of low connectivity and partial coverage area make
more realistic VLSI implementations possible, in terms of memory
requirements and the overall connection complexity associated with the
traditional fan-in and fan-out problem for highly connected neurons.
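The memory argument is easy to make concrete: a binary-addressed RAM
node with fan-in n needs 2^n storage cells, so memory grows
exponentially with connectivity.

    # storage cells required by one binary-addressed RAM node
    for fan_in in (2, 4, 8, 16):
        print(fan_in, 2 ** fan_in)    # 4, 16, 256, 65536

Keeping the fan-in small therefore keeps each node's memory, and the
wiring around it, within realistic bounds.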
The feedback architecture is able to achieve associative recall and
pattern completion, demonstrating that a cascade of feedback networks
can incrementally increase the similarity between a prototype and the
output patterns. The freeze feedback operation has given a high
percentage of correct convergences and fast stabilisation of the
output patterns.
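An illustrative sketch of a freeze-style feedback pass, assuming
undefined outputs marked 'u' and an abstract network step function:

    U = 'u'

    def freeze_feedback(step, pattern, max_iters=20):
        # once an output bit becomes defined it is frozen; only the
        # undefined bits are recomputed on later iterations
        state = list(pattern)
        for _ in range(max_iters):
            update = step(state)
            nxt = [old if old != U else new
                   for old, new in zip(state, update)]
            if nxt == state:
                break    # stable output pattern
            state = nxt
        return state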
The analysis of the saturation problem has demonstrated that the
traditional use of uniform connectivity for all layers impedes the
learning process and leaves many memory addresses unused, because the
layers do not saturate at the same rate. A new approach has therefore
been developed which assigns varied connectivity across the
architecture, achieving a better capacity for learning, a lower level
of saturation and a smaller residue of unused memory.
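As a rough illustration, a layer's saturation can be measured as the
fraction of written cells, so that connectivity can then be assigned
per layer instead of uniformly (the measure is illustrative):

    U = 'u'

    def layer_saturation(layer):
        # layer: list of per-node content dicts; saturation is the
        # fraction of memory locations already written
        cells = [v for node in layer for v in node.values()]
        return sum(v != U for v in cells) / len(cells)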
In terms of architectures and learning, an important result is the
design of the GSN self-organising network which incorporates some
principles related to Adaptive Resonance Theory (ART). The
self-organising network contains intrinsic mechanisms to prevent the
explosion of the number of clusters necessary for self-stabilising a
given training pattern set. Several interesting properties are found
in the GSN self-organising network, including attention,
discrimination, generalisation, and self-stabilisation.
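The control of cluster growth can be sketched abstractly as follows
(the predicates stand in for the actual validation and learning
rules):

    def self_organise(clusters, pattern, can_learn, learn, new_cluster):
        # try to place the pattern in an existing cluster; recruit a
        # new cluster only when none can validate it, which bounds
        # the growth in the number of clusters
        for cluster in clusters:
            if can_learn(cluster, pattern):
                learn(cluster, pattern)
                return cluster
        cluster = new_cluster(pattern)
        clusters.append(cluster)
        return cluster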
References
@conference{key210,
author = "D L Bisset And E C D B C Filho And M C Fairhurst",
title = "A Comparative study of neural network structures for
practical application in a pattern recognition enviroment",
publisher= "IEE",
booktitle= "Proc. First IEE International Conference on Artificial Neural Networks",
address = "London, UK",
month = "October",
pages = "378-382",
year = "1989"
}
@conference{key214,
author = "E C D B C Filho And D L Bisset And M C Fairhurst",
title = "A Goal Seeking Neuron For {B}oolean Neural Networks",
publisher= "IEEE",
booktitle= "Proc. International Neural Networks Conference",
address = "Paris, France",
month = "July",
volume = "2",
pages = "894-897",
year = "1990"
}
@article{key279,
author = "E C D B C Filho And D L Bisset And M C Fairhurst",
title = "Architectures for Goal-Seeking Neurons",
journal= "International Journal of Intelligent Systems",
publisher= "John Wiley & Sons, Inc",
note = "To Appear",
year = "1991"
}
@article{key280,
author = "E C D B C Filho And M C Fairhurst And D L Bisset",
title = "Adaptive Pattern Recognition Using Goal-Seeking Neurons",
journal= "Pattern Recognition Letters",
publisher= "North Holland",
month = "March"
year = "1991"
}
All the best,
Edson ... Filho
-- Before 10-Mar-91 ----------------------------------------------------------
! Edson Costa de Barros Carvalho Filho ! ARPA: ecdbcf%ukc.ac.uk@cs.ucl.ac.uk !
! Electronic Engineering Laboratories ! UUCP: ecdbcf@ukc.ac.uk !
! University of Kent at Canterbury ! Phone: (227) 764000x3718 !
! Canterbury Kent CT2 7NT England ! !
-- After 10-Mar-91 -----------------------------------------------------------
! Universidade Federal de Pernambuco ! e-mail: edson@di0001.ufpe.anpe.br !
! Departamento de Informatica ! Phone: (81) 2713052 !
! Av. Prof. Luis Freire, S/N ! !
! Recife --- PE --- Brazil --- 50739 ! !
------------------------------------------------------------------------------