neural hw paper available
Christian Lehmann
Christian.Lehmann at di.epfl.ch
Tue Apr 12 10:06:24 EDT 1994
NEW PAPERS AVAILABLE ON NEUROCOMPUTING HARDWARE
The following papers are now available via anonymous ftp from the neuroprose
archive. There are three papers on different subjects related to our work on
neurocomputing hardware.
Should you experience any problems, please do not hesitate to contact us:
Christian Lehmann
The MANTRA Center for neuromimetic systems
MANTRA-DI-EPFL
CH-1015 Lausanne
Switzerland
or
lehmann at di.epfl.ch
---------
Author : M. A. Viredaz
Title : MANTRA I: An SIMD Processor Array for Neural Computation
In : Proceedings of the Euro-ARCH'93 Conference
FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/viredaz.e-arch93.ps.Z
Length : 12 pages
Note    : Figure 7 (a photograph of the boards) is missing; contact us if you would like it.
Abstract: This paper presents an SIMD processor array dedicated to the
implementation of neural networks. The heart of this machine is a
systolic array of simple processing elements (PEs). A VLSI custom
chip containing 2x2 PEs was built. The machine is designed to
sustain sufficient instruction and data flows to keep a utilization
rate close to 100%. Finally, this computer is intended to be
inserted in a network of heterogeneous nodes.
---------
Authors : P. Ienne, M. A. Viredaz
Title : GENES IV: A Bit-Serial Processing Element for a Multi-Model
Neural-Network Accelerator
In : Proceedings of the International Conference on Application Specific
Array Processors, 1994
FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/ienne.genes.ps.Z
Length : 12 pages
Abstract: A systolic array of dedicated processing elements (PEs) is
presented as the heart of a multi-model neural-network accelerator.
The instruction set of the PEs allows the implementation of several
widely-used neural models, including multi-layer Perceptrons with
the backpropagation learning rule and Kohonen feature maps. Each PE
holds an element of the synaptic weight matrix. An instantaneous
swapping mechanism of the weight matrix allows the implementation
of neural networks larger than the physical PE array. A
systolically-flowing instruction accompanies each input vector
propagating in the array. This avoids the need to empty and
refill the array when the operating mode of the array is
changed. Both the GENES IV chip, containing a matrix of 2x2 PEs,
and an auxiliary arithmetic circuit have been manufactured and
successfully tested. The MANTRA I machine has been built around
these chips. Peak performance of the full system is between 200
and 400 MCPS in the evaluation phase and between 100 and 200 MCUPS
during the learning phase (depending on the algorithm being
implemented).
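The abstract above describes each PE holding one element of the synaptic weight matrix while input vectors propagate through the array. The following is a minimal Python sketch of that weight-stationary idea, an illustration only (the function and loop structure are hypothetical, not the actual bit-serial GENES IV design):

```python
# Hypothetical sketch of a weight-stationary systolic array: each PE
# (r, c) stores one synaptic weight; input activations flow through,
# and each PE adds weight * activation to a partial sum, so the array
# as a whole computes a matrix-vector product y = W @ x.

def systolic_matvec(weights, x):
    """Simulate an R x C PE array; weights[r][c] is the weight held
    by PE (r, c), x is the input vector flowing across the columns."""
    rows = len(weights)
    acc = [0.0] * rows             # one partial sum per output neuron
    for c, xc in enumerate(x):     # activation x[c] enters column c
        for r in range(rows):
            acc[r] += weights[r][c] * xc   # MAC done by PE (r, c)
    return acc

W = [[1.0, 2.0],
     [3.0, 4.0]]
print(systolic_matvec(W, [1.0, 1.0]))   # -> [3.0, 7.0]
```

Networks larger than the physical array would then be handled by swapping weight blocks in and out, as the abstract's swapping mechanism suggests.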
---------
Author : P. Ienne
Title : Architectures for Neuro-Computers: Review and Performance Evaluation
In : EPFL Computer Science Department Technical Report 93/21
FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/ienne.nnarch.ps.Z
Length : 63 pages
Abstract: As the field of neural networks matures toward real-world
applications, a need for hardware systems to efficiently compute
larger networks arises. Several designs have been proposed in
recent years, and a selection of the more interesting VLSI digital
realizations is reviewed here. Limitations of conventional
performance measurements are briefly discussed, and a different
architectural-level evaluation approach is attempted by proposing a
number of characteristic performance indexes on idealized
architecture classes. As a result of this analysis, some
conclusions on the advantages and limitations of the different
architectures and on the feasibility of the proposed approach are
drawn. Architectural aspects that require further developments are
also emphasized.
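The MCPS and MCUPS figures quoted for machines like MANTRA I reduce to a connections-per-second count. A back-of-the-envelope sketch, assuming the idealized rule that each PE evaluates one connection per clock cycle (the function name, utilization factor, and example numbers below are hypothetical):

```python
# Hypothetical peak-throughput estimate for a PE array, under the
# idealization that every PE completes one connection (one MAC) per
# clock cycle.  Real figures depend on the algorithm and on how close
# the instruction/data flows keep utilization to 100%.

def peak_mcps(rows, cols, clock_mhz, utilization=1.0):
    """Millions of connections per second for a rows x cols PE array."""
    return rows * cols * clock_mhz * utilization

# Example with made-up parameters: a 20 x 20 array delivering one
# connection per PE per microsecond-equivalent cycle.
print(peak_mcps(20, 20, 1.0))   # -> 400.0
```

Learning-phase throughput (MCUPS) is typically lower because a weight update costs more cycles per connection than a forward evaluation, which is consistent with the roughly halved figures in the GENES IV abstract.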
*********
See also
Authors : C. Lehmann, M. Viredaz and F. Blayo
Title : A Generic Systolic Array Building Block for Neural Networks
with On-Chip Learning
In      : IEEE Transactions on Neural Networks, 4(3), May 1993