Pointers to papers on the effect of implementation precision
MURRE%rulfsw.LeidenUniv.nl@BITNET.CC.CMU.EDU
Tue Jan 22 11:52:00 EST 1991
Dear connectionist researchers,
We are in the process of designing a new neurocomputer. An important
design consideration is precision: Should we use 1-bit, 4-bit,
8-bit, etc. representations for weights, activations, and other
parameters? We are scaling up our present neurocomputer, the BSP400
(Brain Style Processor with 400 processors), which uses 8-bit internal
representations for activations and weights, but activations are
exchanged as single bits (using partial time-coding induced by floating
thresholds). This scheme does not scale well.
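For concreteness, here is a minimal Python sketch of the kind of single-bit
exchange we mean. The uniform dithering rule and all names in it are our own
illustration, not the actual BSP400 mechanism:

import numpy as np

rng = np.random.default_rng(0)

def one_bit_exchange(activations, base=0.5, dither=0.1):
    # Each unit transmits a single bit; the threshold "floats"
    # (is dithered each time step), so the fraction of 1-bits sent
    # over many steps partially time-codes the analog activation.
    thresholds = base + rng.uniform(-dither, dither, size=activations.shape)
    return (activations > thresholds).astype(np.uint8)

a = np.array([0.2, 0.45, 0.55, 0.8])
mean_bits = np.mean([one_bit_exchange(a) for _ in range(10000)], axis=0)
print(mean_bits)  # activations inside the dither window (0.4-0.6) are
                  # resolved gradually; those outside saturate to 0 or 1

Note that only activations falling inside the dither window are resolved by
the time-code; everything else saturates, which is one reason the scheme does
not scale.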
Though we have tracked down scattered remarks on precision in the
literature, we have not been able to find many systematic studies of the
subject. Does anyone know of systematic simulations or analytical results
on the effect of implementation precision on the performance of a neural
network? In particular, we are interested in how (and to what extent)
limited-precision (e.g., 8-bit) implementations deviate from, say, 8-byte,
double-precision implementations.
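The kind of comparison we have in mind, as a small Python sketch (the
network, value ranges, and uniform fixed-point quantizer are placeholder
assumptions of ours, not taken from any published study):

import numpy as np

rng = np.random.default_rng(1)

def quantize(x, bits, lo=-1.0, hi=1.0):
    # Uniform fixed-point quantization of x onto 2**bits levels in [lo, hi].
    levels = 2 ** bits - 1
    x = np.clip(x, lo, hi)
    return np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

def forward(x, weights, bits=None):
    # Three-layer logistic net; if bits is given, weights and
    # activations are quantized at every layer.
    a = x
    for W in weights:
        if bits is not None:
            W = quantize(W, bits)
            a = quantize(a, bits, lo=0.0, hi=1.0)
        a = 1.0 / (1.0 + np.exp(-(a @ W)))
    return a

x = rng.uniform(0.0, 1.0, size=(100, 16))
weights = [rng.normal(0.0, 0.5, size=(16, 16)) for _ in range(3)]
ref = forward(x, weights)  # 8-byte (float64) reference
for bits in (1, 2, 4, 8, 16):
    dev = np.abs(forward(x, weights, bits) - ref).mean()
    print(f"{bits:2d}-bit: mean |deviation from float64| = {dev:.5f}")

Of course, deviation on the forward pass is only part of the story; what we
would most like to see are systematic results of this general kind, ideally
covering learning as well.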
The only systematic studies we have been able to find so far deal with
fault tolerance, which is only of indirect relevance to our problem:
Brause, R. (1988). Pattern recognition and fault tolerance in non-linear
neural networks. Biological Cybernetics, 58, 129-139.
Jou, J., & Abraham, J. A. (1986). Fault-tolerant matrix arithmetic and
signal processing on highly concurrent computing structures. Proceedings
of the IEEE, 74, 732-741.
Moore, W. R. (1988). Conventional fault-tolerance and neural computers.
In R. Eckmiller & C. von der Malsburg (Eds.), Neural Computers (NATO
ASI Series F41, pp. 29-37). Berlin: Springer-Verlag.
Nijhuis, J., & Spaanenburg, L. (1989). Fault tolerance of neural
associative memories. IEE Proceedings, 136, 389-394.
Thanks!