batch-mode parallel implementations

John Pearson W343 x2385 jcp at vaxserv.sarnoff.com
Wed Oct 16 12:03:09 EDT 1991


Xiru Zhang stated:
>From the point of view of implementation, if a network is not large, there
>is not much you can parallelize if you do per-sample training.

Even in per-sample training, one may still be able to exploit a
parallel machine efficiently. Each processor simulates the same network
but starts from a different set of initial weights; the convergence time
and performance of the trained network can depend strongly on that
initialization. I would appreciate any references that discuss this
last point.
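
As a rough sketch of what I have in mind (a multi-start scheme, not a
specific implementation): each worker trains an identical network from a
different random seed, and the best run is kept. The toy task (XOR), the
network size, the learning rate, and the number of workers below are all
arbitrary assumptions made for illustration.

import numpy as np
from multiprocessing import Pool

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def train(seed, hidden=4, lr=0.5, epochs=2000):
    """Per-sample (online) backprop on XOR from one random initialization."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(2, hidden))
    W2 = rng.normal(size=(hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for x, y in zip(X, Y):          # one weight update per sample
            h = sig(x @ W1)
            o = sig(h @ W2)
            d_o = (o - y) * o * (1 - o)
            d_h = (d_o @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, d_o)
            W1 -= lr * np.outer(x, d_h)
    err = np.mean((sig(sig(X @ W1) @ W2) - Y) ** 2)
    return seed, err

if __name__ == "__main__":
    with Pool() as pool:                # one "processor" per random seed
        results = pool.map(train, range(8))
    best_seed, best_err = min(results, key=lambda r: r[1])
    print(f"best seed {best_seed}: MSE {best_err:.4f}")

Since the runs share nothing but the training set, the speedup over a
single trial is essentially linear in the number of processors; the cost
is that one is buying robustness to bad initializations rather than a
faster single training run.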

John Pearson
David Sarnoff Research Center
CN5300
Princeton, NJ 08543
609-734-2385
jcp at as1.sarnoff.com

