XOR and BP

Daniel Glaser bnglaser at tohu0.weizmann.ac.il
Wed Mar 17 08:25:20 EST 1993


Forgive my ignorance, but isn't back-prop with a learning rate of 1
(see Luis B. Almeida's posting of 15.3.93) doing something quite a lot
like a random walk?
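
For concreteness, the step in question is just the plain
gradient-descent update. A minimal sketch (the function name and the
use of Python are mine, not anything from Almeida's post):

# w <- w - eta * dE/dw ; with eta = 1 each update is the raw gradient,
# so a single step can be large relative to the error surface.
def sgd_step(weights, grads, eta=1.0):
    return [w - eta * g for w, g in zip(weights, grads)]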

David Wolpert writes (15.3.93) that "how back-prop does on XOR is only
of historical interest". Is this not because, with XOR, avoiding the
local minima requires a lot more random walking than gradient descent?
It is believed that this is not necessary when using back-prop on most
interesting problems.
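
To make the local-minima question concrete, here is a self-contained
sketch of vanilla back-prop on XOR, restarted from several random
initialisations so one can see which starts converge and which stall.
The architecture (a 2-2-1 sigmoid network), learning rate, initial
weight range, and epoch count are my own choices for illustration, not
anything reported in the posts being discussed.

import math, random

PATTERNS = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(eta=1.0, epochs=5000, seed=0):
    rng = random.Random(seed)
    # hidden layer: 2 units, each with 2 input weights plus a bias;
    # output unit: 2 hidden weights plus a bias
    w_h = [[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
    w_o = [rng.uniform(-0.5, 0.5) for _ in range(3)]
    for _ in range(epochs):
        grad_h = [[0.0] * 3 for _ in range(2)]
        grad_o = [0.0] * 3
        for (x1, x2), t in PATTERNS:
            x = (x1, x2, 1.0)                     # inputs plus bias
            h = [sigmoid(sum(w * xi for w, xi in zip(wu, x))) for wu in w_h]
            hb = h + [1.0]
            y = sigmoid(sum(w * hi for w, hi in zip(w_o, hb)))
            delta_o = (y - t) * y * (1.0 - y)     # output error signal
            for j in range(3):
                grad_o[j] += delta_o * hb[j]
            for i in range(2):
                delta_h = delta_o * w_o[i] * h[i] * (1.0 - h[i])
                for j in range(3):
                    grad_h[i][j] += delta_h * x[j]
        # plain full-batch gradient-descent step, no momentum
        for j in range(3):
            w_o[j] -= eta * grad_o[j]
        for i in range(2):
            for j in range(3):
                w_h[i][j] -= eta * grad_h[i][j]
    # final sum-squared error over the four patterns
    err = 0.0
    for (x1, x2), t in PATTERNS:
        x = (x1, x2, 1.0)
        hb = [sigmoid(sum(w * xi for w, xi in zip(wu, x))) for wu in w_h] + [1.0]
        y = sigmoid(sum(w * hi for w, hi in zip(w_o, hb)))
        err += (y - t) ** 2
    return err

for seed in range(10):
    print("seed %d: final error %.4f" % (seed, train_xor(seed=seed)))

Runs whose final error stays near 1.0 have settled on a plateau or
local minimum rather than the XOR solution; how often that happens
depends on the initialisation and learning rate chosen above.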

Historically, XOR has been a standard-bearer for back-prop, as a
simple, intuitive function which a perceptron cannot learn (the four
XOR patterns are not linearly separable). Could it now appear that the
whole technique is tainted by association with this pathological case?

Daniel Glaser.


