Pointers to papers on the effect of implementation precision

J. N. Hwang hwang at uw-isdl.ee.washington.edu
Tue Jan 22 14:33:11 EST 1991


We are in the process of finishing up a paper which gives
a theoretical (systematic) derivation of finite precision
neural network computation.  The idea is a nonlinear extension
of the "general compound operators" widely used for error analysis
of linear computation.  We derive several mathematical formulas
for both the retrieving and learning phases of neural networks.  The
finite precision error in the retrieving phase can be written
as a function of several parameters, e.g., the number of bits for
the weights, the number of bits for multiplication and accumulation,
the size of the nonlinear table look-up, and the choice of
truncation/rounding or jamming.  We are then able to extend this
retrieving phase error analysis to iterative learning in order to
predict the necessary number of bits.  This can be shown using the
ratio between the finite precision error and the (floating point)
back-propagated error.  Simulations have been conducted and match
the theoretical predictions quite well.  Hopefully, we can have a
final version of this paper available to you soon.
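
As a rough illustration of the kind of comparison described above (a
minimal sketch, not the paper's actual formulation), one can simulate a
finite-precision retrieving (forward) pass and measure its deviation
from a floating-point reference.  The parameter names here (w_bits,
acc_bits, table_size, the rounding mode) are illustrative assumptions,
not the paper's notation:

    # Sketch: fixed-point vs. floating-point retrieval for one sigmoid layer.
    # All names and bit-width choices are illustrative, not from the paper.
    import numpy as np

    def quantize(x, bits, mode="round"):
        """Quantize x to a fixed-point grid with `bits` fractional bits."""
        scale = 2.0 ** bits
        y = x * scale
        y = np.round(y) if mode == "round" else np.trunc(y)  # rounding vs. truncation
        return y / scale

    def sigmoid_table(x, table_size):
        """Approximate the sigmoid by a uniform table look-up over [-8, 8]."""
        grid = np.linspace(-8.0, 8.0, table_size)
        table = 1.0 / (1.0 + np.exp(-grid))
        idx = np.clip(np.searchsorted(grid, x), 0, table_size - 1)
        return table[idx]

    def retrieve(x, W, b, w_bits=None, acc_bits=None, table_size=None, mode="round"):
        """One-layer forward pass; finite-precision steps apply only when given."""
        Wq = quantize(W, w_bits, mode) if w_bits else W
        a = Wq @ x + b
        if acc_bits:
            a = quantize(a, acc_bits, mode)          # finite accumulator
        return sigmoid_table(a, table_size) if table_size else 1.0 / (1.0 + np.exp(-a))

    rng = np.random.default_rng(0)
    x = rng.normal(size=16)
    W = rng.normal(scale=0.5, size=(8, 16))
    b = rng.normal(scale=0.1, size=8)

    y_float = retrieve(x, W, b)                      # floating-point reference
    y_fixed = retrieve(x, W, b, w_bits=8, acc_bits=12, table_size=256, mode="trunc")
    print("max retrieving-phase error:", np.max(np.abs(y_fixed - y_float)))

Sweeping the bit widths in such a simulation gives an empirical error
curve that the paper's formulas are meant to predict analytically.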

Jordan L. Holt and Jenq-Neng Hwang, "Finite Precision Error
Analysis of Neural Network Hardware Implementation,"
University of Washington, FT-10, Seattle, WA 98195

Best Regards,

Jenq-Neng

