Does backprop need the derivative?

Ljubomir Buturovic ljubomir at darwin.bu.edu
Wed Feb 10 21:12:30 EST 1993


Marwan Jabri:

> Regarding the idea of Simplex that has been suggested: the inquirer was
> talking about on-chip learning. Have you in your experiments done a
> limited-precision Simplex? Have you tried it on a chip in in-loop mode?
> Philip Leong here tried a similar idea (I think) a while back. The
> problem with this approach is that you need to have a very good guess at
> your starting point, as the Simplex will move you from one vertex (feasible
> solution) to another while expanding the weight solution space.
> Philip's experience is that it does work for small problems when you have
> a good guess!

No, we did not try limited-precision Simplex, since the method has
another serious limitation: memory complexity. There is no point in
performing such refined studies, let alone an on-chip implementation,
until this problem is resolved. The biggest problem we tried it on
successfully was 11-dimensional (i.e., the input samples were
11-dimensional vectors). The initial guess was pseudo-random, as in
back-propagation. On another, 12-dimensional example it did not do
well (neither did back-prop, but Simplex was much worse), so it may
well be true that it needs a good starting point.
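
For concreteness, here is a minimal sketch of the kind of experiment
being discussed, assuming the "Simplex" in question is the Nelder-Mead
downhill simplex (the posts do not say which variant was used) and
written with modern NumPy/SciPy for brevity. It is an illustration,
not our actual code; the network size and data are made up. The
comment on the optimizer call points out the memory cost mentioned
above: the simplex itself holds n+1 vertices of n weights each, i.e.
O(n^2) storage in the number of weights.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Toy data: 11-dimensional inputs, as in the problem size mentioned above.
    X = rng.standard_normal((100, 11))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)

    n_in, n_hid = 11, 4
    n_weights = n_in * n_hid + n_hid   # hidden weights + output weights (biases omitted)

    def unpack(w):
        W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
        w2 = w[n_in * n_hid:]
        return W1, w2

    def mse(w):
        # Forward pass of a one-hidden-layer network; no gradients needed.
        W1, w2 = unpack(w)
        h = np.tanh(X @ W1)
        out = np.tanh(h @ w2)
        return np.mean((out - y) ** 2)

    # Pseudo-random initial guess, as in back-propagation.
    w0 = rng.uniform(-0.5, 0.5, n_weights)

    # Nelder-Mead maintains a simplex of (n_weights + 1) vertices, each a full
    # weight vector of length n_weights: the O(n^2) memory complexity that makes
    # the method impractical for large networks.
    res = minimize(mse, w0, method='Nelder-Mead',
                   options={'maxiter': 20000, 'xatol': 1e-6, 'fatol': 1e-8})
    print('final MSE:', res.fun)

Note that the objective is evaluated by forward passes only, which is
what makes a derivative-free method attractive for on-chip learning in
the first place; the cost is the simplex storage and the large number
of function evaluations.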

Ljubomir Buturovic
Boston University 
BioMolecular Engineering Research Center
36 Cummington Street, 3rd Floor
Boston, MA 02215

office: 617-353-7123
home:   617-738-6487 


