The transposed weight matrix hassle

J. N. Hwang hwang at pierce.ee.washington.edu
Mon Nov 4 11:02:54 EST 1991


In BP, the forward phase is a matrix-vector multiplication and the
backward phase is a vector-matrix multiplication (i.e., multiplication
by the transposed weight matrix), applied consecutively layer by
layer.  In addition, the weight update itself is an outer-product
operation.  All three of these operations can be implemented
elegantly on a "ring array architecture" with full pipelining
efficiency (pipeline rate = 1).
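For concreteness, here is a minimal NumPy sketch of the three
operations for a single layer.  All names, shapes, the sigmoid
nonlinearity, and the squared-error loss are illustrative assumptions,
not taken from the architecture papers below:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 4, 3
    W = rng.standard_normal((n_out, n_in))  # one layer's weight matrix

    x = rng.standard_normal(n_in)           # input activation
    t = rng.standard_normal(n_out)          # target (hypothetical)

    # Forward phase: matrix-vector product, y = f(W x)
    y = 1.0 / (1.0 + np.exp(-(W @ x)))

    # Local error signal (squared-error loss, sigmoid units)
    delta = (y - t) * y * (1.0 - y)

    # Backward phase: vector-matrix product delta^T W, equivalently
    # W^T delta, the transposed-weight-matrix multiplication of the
    # subject line, propagating the error to the layer below.
    delta_prev = delta @ W

    # Weight update: outer product of error signal and input activation
    eta = 0.1
    W -= eta * np.outer(delta, x)

Note that the backward pass reuses W only through the product
delta @ W, so an implementation never needs to materialize or store
the transpose explicitly; this is what lets a ring array stream the
same stored weights through all three operations.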

Some references:

1) S. Y. Kung, J. N. Hwang, "Parallel architectures for artificial
neural networks,"  ICNN'88, San Diego, 1988.

2) J. N. Hwang, J. A. Vlontzos, S. Y. Kung, "A systolic neural
network architecture for hidden Markov models," IEEE Trans. on
ASSP, December 1989.

3) S. Y. Kung, J. N. Hwang, "A unified systolic architecture
for artificial neural networks," Journal of Parallel and
Distributed Computing, Special Issue on Neural Networks,
March 1989.

Jenq-Neng Hwang  11/04/91
