The transposed weight matrix hassle
shams@maxwell.hrl.hac.com
Mon Nov 4 16:58:43 EST 1991
There are a couple of different methods used for dealing with this problem
that are effective to a certain extent. First, a three-phase conflict-free
routing method has been proposed [1] that implicitly implements the
matrix transposition during the back-propagation learning phase. This
method is generally applicable to fine-grain architectures and sparsely
connected neural nets. The second mapping method proposed by Kung &
Hwang [2], efficiently time-multiplexes the synaptic interconnections of a
neural network onto the physical connections of a 1-D ring systolic array.
In this mapping, the matrix transposition operation required during the
learning phase can be performed by communicating neuron activation
values between the processors (as opposed to the partial sums used in
the feed-forward case).
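To illustrate the underlying issue (this sketch is mine, not from either paper): the feed-forward pass multiplies activations by the weight matrix W, while error back-propagation multiplies the error terms by W's transpose, so a hardware mapping laid out for the forward direction must somehow realize the transposed connectivity during learning. A minimal NumPy sketch, with hypothetical shapes:

```python
import numpy as np

# Hypothetical small layer: 4 input units feeding 3 output units.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))   # weight matrix, one row per output unit
x = rng.standard_normal(4)        # input activations
delta = rng.standard_normal(3)    # error terms at the output layer

# Feed-forward: each output unit accumulates partial sums of w_ij * x_j.
forward = W @ x

# Back-propagation: each *input* unit gathers w_ij * delta_i, i.e. a
# multiplication by W transposed -- the "transposed weight matrix hassle."
backward = W.T @ delta
```

The mappings above avoid physically forming W.T: [1] routes the transposed accesses conflict-free, while [2] keeps W in place on the ring and circulates activation values instead of partial sums.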
[1] V. K. Prasanna Kumar and K. W. Przytula, "Algorithmic
Mapping of Neural Network Models onto Parallel SIMD Machines,"
Proceedings of the Inter. Conf. on Appl. Spec. Array Proc., Princeton,
NJ, Ed. S. Y. Kung, E. E. Swartzlander, J. A. B. Fortes and
K. W. Przytula, 1990.
[2] S. Y. Kung and J. N. Hwang, "A Unified Systolic
Architecture for Artificial Neural Networks," Journal of Parallel and
Distributed Computing, 6: 358-387, 1989.
Soheil Shams
Hughes Research Labs