Reply on "back-propagation fails to separate" paper

Pankaj Mehra mehra at aquinas.csl.uiuc.edu
Fri Oct 28 12:44:02 EDT 1988


Hi everybody.

	When I heard Brady et al.'s talk at ICNN-88, I thought
that the results simply pointed out that a correct approach to
classification need not give equal importance to all training
samples. As is well known, classical back-prop converges to
a separating surface that depends on the LMS error summed uniformly
over all training samples.
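
	To make the uniform weighting concrete, here is a minimal
Python/NumPy sketch (my own illustration, not code from the talk) of
the LMS objective and its gradient for a single sigmoid unit; every
sample enters the sum with the same weight:

    import numpy as np

    def lms_error(w, X, t):
        # E(w) = 1/2 * sum_p (t_p - o_p)^2 with o_p = sigmoid(w . x_p);
        # each of the P training samples contributes with equal weight.
        o = 1.0 / (1.0 + np.exp(-X @ w))
        return 0.5 * np.sum((t - o) ** 2)

    def lms_gradient(w, X, t):
        o = 1.0 / (1.0 + np.exp(-X @ w))
        delta = (o - t) * o * (1.0 - o)   # squared-error delta per sample
        return X.T @ delta                # summed uniformly over samples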

	I think that the new results provide a case for attaching
more importance to the elements on concept boundaries. I have been
working on this problem (of trying to characterize "boundary"
elements) off and on, without much success. Basically, geometric
characterizations exist, but they are too complex to evaluate.
What is interesting, however, is that the complexity of learning
(and hence the time to convergence) depends on the nature of the
separating surface. Theoretical results also involve similar concepts,
e.g., the VC-dimension.
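
	For a concrete (and admittedly crude) example of such a
characterization, consider flagging a sample as a boundary element when
its nearest neighbor in the training set carries the opposite label.
The sketch below (my illustration; the nearest-neighbor test is an
assumption, not a result from the literature) also shows why even this
simple test gets expensive: it needs O(P^2) distance evaluations.

    import numpy as np

    def boundary_elements(X, labels):
        P = len(X)
        flags = np.zeros(P, dtype=bool)
        for i in range(P):
            d = np.linalg.norm(X - X[i], axis=1)  # distances to all samples
            d[i] = np.inf                          # exclude the point itself
            j = np.argmin(d)                       # nearest neighbor
            flags[i] = labels[j] != labels[i]      # opposite label => boundary
        return flags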

	Also notice that if one could somehow "learn" the characteristics
of boundary elements, then one could ignore a large part of the training
sample and still converge properly, using a threshold procedure like
the one suggested in Geoff's note.
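
	One reading of such a threshold procedure (my sketch of the
idea, not Geoff's exact proposal) is to back-propagate error only for
samples whose output still misses its target by more than some margin,
so that well-classified interior samples drop out of the gradient
entirely:

    import numpy as np

    def thresholded_lms_gradient(w, X, t, margin=0.2):
        o = 1.0 / (1.0 + np.exp(-X @ w))
        active = np.abs(t - o) > margin            # keep only hard samples
        delta = (o - t) * o * (1.0 - o) * active   # others contribute zero
        return X.T @ delta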

	Lastly, since back-prop is not constrained to always use LMS
as the error function, one wonders whether there is an intelligent method
(one that could be automated) for constructing error functions based on
the complexity of the separating surface.
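
	Cross-entropy is one standard alternative that drops into the
same back-prop machinery; the sketch below (again my illustration) only
shows the mechanical swap, not how one would automate the choice from
the complexity of the surface:

    import numpy as np

    def cross_entropy_gradient(w, X, t):
        # E = -sum_p [t ln o + (1 - t) ln(1 - o)]; for sigmoid outputs
        # the o(1 - o) factor cancels, leaving a steeper error signal
        # near o = 0 or 1 than LMS gives.
        o = 1.0 / (1.0 + np.exp(-X @ w))
        return X.T @ (o - t)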

- Pankaj Mehra
{mehra%aquinas at uxc.cso.uiuc.edu}

