Ph.D. Thesis: VLSI Architectures for Evolutive Neural Models
Juan M. Moreno
moreno at eel.upc.es
Tue Jun 20 09:55:09 EDT 1995
FTP-host: ftp.upc.es (147.83.98.7)
FTP-file: /upc/eel/moreno_vlsi_94.tar (2 MB compressed, 5.6 MB uncompressed, 184 pages)
The following Ph.D. Thesis is now available by anonymous ftp.
FTP instructions can be found at the end of this message.
--------------------------------------------------------------------
VLSI ARCHITECTURES FOR EVOLUTIVE NEURAL MODELS
J.M. Moreno Arostegui
Technical University of Catalunya
Department of Electronics Engineering
ABSTRACT
In recent years there has been increasing interest in the research
field of artificial neural network models. The reason for this interest has
been the development of advanced tools and techniques for microelectronic
design, which have made it possible to translate theoretical connectionist
models into efficient physical realizations. However, there are several
problems associated with the classical artificial neural network models,
related basically to their convergence properties and to the need to define
heuristically the proper network structure for a particular problem. In
order to alleviate these problems, evolutive neural models offer the
possibility of constructing automatically, during the training process, the
network structure able to handle a given task efficiently. Furthermore,
these neural models allow for incremental learning schemes, so that new
knowledge can be easily incorporated into the network without having to
perform a complete new training process from scratch.
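To give a flavour of the idea (this sketch is not taken from the thesis;
the data, thresholds and growth rule are purely illustrative), the
following Python fragment grows a hidden layer one unit at a time, keeping
the weights learned so far instead of retraining from scratch:

# Minimal sketch of incremental ("evolutive") network construction:
# the hidden layer is grown one unit at a time, keeping the weights
# learned so far instead of retraining from scratch.
# Data, names and thresholds are illustrative, not taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # toy task (XOR)
y = np.array([[0.], [1.], [1.], [0.]])
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# Start with a single hidden unit.
W1, b1 = rng.normal(scale=0.5, size=(2, 1)), np.zeros(1)
W2, b2 = rng.normal(scale=0.5, size=(1, 1)), np.zeros(1)

for growth_step in range(5):
    for _ in range(4000):                       # train the current structure
        h = sig(X @ W1 + b1)                    # recall (forward) pass
        out = sig(h @ W2 + b2)
        d2 = (out - y) * out * (1 - out)        # backpropagated error terms
        d1 = (d2 @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(0)
        W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(0)
    mse = np.mean((out - y) ** 2)
    print(f"{W1.shape[1]} hidden unit(s): MSE = {mse:.4f}")
    if mse < 0.01:                              # structure is sufficient: stop
        break
    # Otherwise grow: append a new hidden unit, keeping existing weights.
    W1 = np.hstack([W1, rng.normal(scale=0.5, size=(2, 1))])
    b1 = np.append(b1, 0.0)
    W2 = np.vstack([W2, rng.normal(scale=0.5, size=(1, 1))])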
The present work aims to offer efficient solutions, in the form of
VLSI microelectronic architectures, for the realization of systems based on
evolutive neural paradigms.
An exhaustive analysis of the different types of evolutive neural
models has first been performed. The goal of this analysis is to select
those evolutive neural models whose data flow is suitable for a hardware
implementation. As a result, the incremental evolutive neural models have
been selected as the most appropriate ones when a hardware realization is
envisaged.
Afterwards, the improvement of the convergence properties of
evolutive neural models has been considered. This improvement is required
in order to allow for more efficient physical implementations able to
tackle real-world tasks. As a result, three different methods have been
proposed to enhance the network construction process provided by evolutive
neural models.
The next step towards the implementation of evolutive neural models
has consisted of selecting the hardware architectures best suited to
realizing the data flow imposed by the training and recall phases
associated with these neural models. As a preliminary step, an algorithm
vectorization process has been performed, so as to identify the basic
operations required by the training and recall schemes. Then, by analyzing
the efficiency offered by different hardware architectures in carrying out
these basic operations, two architectures have been selected as the most
suitable for a hardware implementation.
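As a purely illustrative aside (not taken from the thesis), the kind of
primitive such a vectorization analysis typically exposes is the
multiply-accumulate step of the recall pass. The sketch below writes one
layer both in vectorized form and as the explicit MAC loop a processing
unit would execute; all names are hypothetical:

# One layer of the recall pass, written two ways: as a single vectorized
# expression and as the per-unit multiply-accumulate (MAC) loop that a
# hardware processing element would carry out.  Illustrative only.
import numpy as np

def recall_layer_vectorized(W, b, x):
    return np.tanh(W @ x + b)                 # matrix-vector product + nonlinearity

def recall_layer_mac(W, b, x):
    """Same computation as the per-neuron MAC loop of a processing unit."""
    y = np.empty(W.shape[0])
    for i in range(W.shape[0]):               # one processing unit per output neuron
        acc = b[i]
        for j in range(W.shape[1]):
            acc += W[i, j] * x[j]             # multiply-accumulate: the basic operation
        y[i] = np.tanh(acc)
    return y

rng = np.random.default_rng(1)
W, b, x = rng.normal(size=(3, 4)), rng.normal(size=3), rng.normal(size=4)
assert np.allclose(recall_layer_vectorized(W, b, x), recall_layer_mac(W, b, x))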
Bearing in mind the results of this architecture analysis, a digital
architecture has been proposed. This architecture is able to organize its
resources so as to match the requirements imposed by the training and
recall phases, and is thus capable of emulating the two architectures
selected by the analysis indicated previously. The architecture is
organized as an array of processing units, which can be configured to
provide a specific array organization. A dedicated RISC (Reduced
Instruction Set Computer) processor has been developed to realize these
processing units. This processor has a sufficiently generic instruction set
to permit the efficient emulation (both in terms of speed and compactness)
of a wide range of evolutive neural models.
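As a hypothetical illustration only (the actual processor and its
instruction set are described in the thesis), the sketch below shows how a
very small, generic instruction set is already enough to emulate neural
recall on a processing unit:

# Purely illustrative sketch of a processing unit interpreting a tiny,
# generic instruction set (CLR / MAC / ACT / OUT).  Not the processor of
# the thesis; opcodes and values are invented for the example.
import math

class ProcessingUnit:
    def __init__(self):
        self.acc = 0.0

    def execute(self, op, *args):
        if op == "CLR":                 # clear accumulator before a new neuron
            self.acc = 0.0
        elif op == "MAC":               # multiply-accumulate into the accumulator
            w, x = args
            self.acc += w * x
        elif op == "ACT":               # apply a nonlinearity to the accumulator
            self.acc = math.tanh(self.acc)
        elif op == "OUT":               # read the result out of the unit
            return self.acc
        else:
            raise ValueError(f"unknown opcode {op}")

# One neuron with weights (0.5, -1.0) and inputs (1.0, 2.0):
pu = ProcessingUnit()
program = [("CLR",), ("MAC", 0.5, 1.0), ("MAC", -1.0, 2.0), ("ACT",), ("OUT",)]
for instr in program:
    result = pu.execute(*instr)
print(result)                           # tanh(0.5*1.0 - 1.0*2.0)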
Finally, an analog systolic architecture has been proposed, which
also allows for the physical implementation of the evolutive neural models
indicated previously. This architecture has been developed following a
modular systolic principle, so that it can emulate different neural models
just by changing the functionality of the building blocks which constitute
its processing units. The main advantage offered by this architecture is
the possibility of developing compact systems capable of providing high
processing rates, making it suitable for tasks where an integrated
signal-processing scheme is required.
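For readers unfamiliar with the systolic principle, here is a generic,
purely illustrative simulation (not the analog architecture of the thesis)
in which the input vector ripples through a linear array of cells, each
cell holding one row of weights and accumulating its own output:

# Generic systolic matrix-vector product, simulated cycle by cycle.
# Illustrative only; the cell contents and timing are invented here.
import numpy as np

def systolic_matvec(W, x):
    n, m = W.shape
    acc = np.zeros(n)                 # one accumulator per processing cell
    pipeline = [None] * n             # value currently held by each cell
    counter = [0] * n                 # how many inputs each cell has consumed
    for t in range(m + n - 1):        # total number of clock cycles
        # shift the pipeline one cell to the right (data ripples through)
        for i in range(n - 1, 0, -1):
            pipeline[i] = pipeline[i - 1]
        pipeline[0] = x[t] if t < m else None
        # every cell multiplies the value passing by with its next weight
        for i in range(n):
            if pipeline[i] is not None:
                acc[i] += W[i, counter[i]] * pipeline[i]
                counter[i] += 1
    return acc

rng = np.random.default_rng(2)
W, x = rng.normal(size=(3, 4)), rng.normal(size=4)
assert np.allclose(systolic_matvec(W, x), W @ x)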
----------------------------------------------------------------------------
FTP instructions:
unix> ftp ftp.upc.es (147.83.98.7)
Name: anonymous
Password: (your e-mail address)
ftp> cd /upc/eel
ftp> bin
ftp> get moreno_vlsi_94.tar
ftp> bye
unix> tar xvf moreno_vlsi_94.tar
As a result, you get 12 different compressed PostScript files (5.6 MB).
Just uncompress these files and print them on your local printer.
Sorry, but there are no hard copies available.
Regards,
----------------------------------------------------------------------------
|| Juan Manuel Moreno Arostegui || ||
|| || ||
|| Dept. Enginyeria Electronica || Tel. : +34 3 401 74 88 ||
|| Universitat Politecnica de Catalunya || ||
|| Modul C-4, Campus Nord || Fax : +34 3 401 67 56 ||
|| c/ Gran Capita s/n || ||
|| 08034-Barcelona || E-mail : moreno at eel.upc.es ||
|| SPAIN || ||
----------------------------------------------------------------------------