Efficient parallel Backprop
Hal McCartor
hal at asi.com
Wed Nov 6 16:37:22 EST 1991
In response to the recent question about running BP on parallel
hardware:
The backpropagation algorithm can be run quite efficiently on parallel
hardware by maintaining a transpose of the output-layer weights on the
hidden-node processors and updating it in the usual manner, so that it
always remains an exact transpose of the output-layer weights. The
error on the output nodes is broadcast to all hidden nodes
simultaneously; each hidden node multiplies it by the appropriate
transposed weight and accumulates an error sum. Since the transposed
weights can also be updated in parallel, the whole process is quite
efficient.
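As a concrete illustration, here is a minimal NumPy sketch of this
bookkeeping (the loop over j stands in for the per-processor
parallelism; all names are illustrative, and the activation-derivative
factor in the hidden error is omitted):

    # Minimal sketch of the transpose-weight trick; names are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden, n_out, lr = 4, 3, 0.1

    # Master copy of the output-layer weights: W_out[k, j] connects
    # hidden node j to output node k.
    W_out = rng.standard_normal((n_out, n_hidden))

    # Each "hidden processor" j keeps its own copy of column j of
    # W_out, i.e. row j of the transpose.
    Wt = [W_out[:, j].copy() for j in range(n_hidden)]

    h = rng.standard_normal(n_hidden)       # hidden activations
    delta_out = rng.standard_normal(n_out)  # output error, broadcast to all

    # Each processor forms its error sum locally from the broadcast error.
    err_hidden = np.array([Wt[j] @ delta_out for j in range(n_hidden)])

    # Each processor applies the same gradient step the output layer
    # applies, so the local rows stay an exact transpose of W_out.
    for j in range(n_hidden):
        Wt[j] -= lr * delta_out * h[j]
    W_out -= lr * np.outer(delta_out, h)

    assert np.allclose(np.stack(Wt, axis=1), W_out)  # copies stay in sync
    print(err_hidden)

Note that each hidden processor needs only its own activation h[j] and
the broadcast output error to keep its transpose row in step with the
output layer, so no weight values ever have to be communicated between
processors.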
This technique is explained further in Advances in Neural Information
Processing Systems, Volume 3, page 1028, in the paper "Back Propagation
Implementation on the Adaptive Solutions CNAPS Neurocomputer Chip."
Hal McCartor