The "best" way to do learning
David Wolpert
dhw at santafe.edu
Tue Jul 27 10:52:47 EDT 1993
Harris Drucker writes:
>>>
The best method to generate a committee of learning machines is given by
Schapire's algorithm [1].
>>>
Schapire's boosting algorithm is a very interesting technique, which has
now garnered some empirical support.
It should be noted that it's really more a means of improving a single
learning machine than a means of combining separate ones.
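(For anyone who hasn't seen the technique, here is a rough sketch of that
three-stage flavor of boosting. This is NOT Schapire's exact
boosting-by-filtering construction from [1], only a simplified illustration;
"make_learner" is a hypothetical factory returning any object with fit(X, y)
and predict(X) methods, and integer class labels 0..k-1 are assumed.)

import numpy as np

def boost_three_stage(make_learner, X, y, rng=None):
    # Majority vote of three copies of the SAME base learner, each trained
    # on a redistributed version of the one original sample.
    if rng is None:
        rng = np.random.default_rng(0)
    X, y = np.asarray(X), np.asarray(y)

    # Stage 1: train h1 on the original sample.
    h1 = make_learner()
    h1.fit(X, y)
    p1 = h1.predict(X)

    # Stage 2: retrain on a sample in which roughly half the examples
    # are ones h1 gets wrong, forcing h2 to learn something new.
    wrong = np.flatnonzero(p1 != y)
    right = np.flatnonzero(p1 == y)
    n = min(len(wrong), len(right))
    if n == 0:                      # h1 already perfect (or always wrong)
        return h1.predict
    idx = np.concatenate([rng.choice(wrong, n), rng.choice(right, n)])
    h2 = make_learner()
    h2.fit(X[idx], y[idx])

    # Stage 3: train h3 only where h1 and h2 disagree.
    dis = np.flatnonzero(p1 != h2.predict(X))
    h3 = make_learner()
    h3.fit(X[dis] if len(dis) else X, y[dis] if len(dis) else y)

    # Final hypothesis: majority vote of the three.
    def predict(Xnew):
        votes = np.stack([h.predict(Xnew) for h in (h1, h2, h3)]).astype(int)
        return np.array([np.bincount(col).argmax() for col in votes.T])
    return predict

Note that the committee here is really the same weak learner applied three
times to redistributed data, which is why I call it an improvement of a
single machine rather than a combination of separate ones.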
More to the point though:
There is no such thing as an a priori "best method" to do *anything*
in machine learning. Anyone who thinks otherwise is highly encouraged
to read Cullen Schaffer's Machine Learning article from Feb. '93.
*At most*, one can say that a method is "best" *given some assumptions*.
This is made explicit in Bayesian analysis.
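To spell that out (the notation is mine, not Harris's): in a Bayesian
treatment the assumptions appear explicitly as the prior P(f) over target
functions and the likelihood P(d | f) of the training data. The "best"
guess at a new point x is then

    P(f | d)  \propto  P(d | f) P(f)
    h^*(x)    =  \arg\min_{y} \sum_{f} L(y, f(x)) \, P(f | d)

Change the prior P(f) and the h^* you get from the very same data changes
with it; "best" is only defined relative to those assumptions.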
To my knowledge, boosting has only been analyzed (and found in a certain
sense "best") from the perspective of PAC, VC stuff, etc. Now those formalisms
can lend many insights into the learning process. But not if one isn't
aware of their (highly non-trivial) implicit assumptions. Unfortunately, one
of the more problematic aspects of those formalisms is that they
encourage people to gloss over those implicit assumptions, and make
blanket statements about "optimal" algorithms.
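As one example of what gets assumed (a standard VC-style result, stated
roughly and with constants suppressed): if the m training pairs are drawn
i.i.d. from a fixed distribution D, then with probability at least 1 - \delta,
simultaneously for every h in a class H of VC dimension v,

    error_D(h)  \le  \hat{error}_S(h) + O( \sqrt{ (v \log(m/v) + \log(1/\delta)) / m } )

The i.i.d. assumption, the fixed-distribution assumption, and the worst-case,
prior-free character of the statement are all doing real work, and none of
them says anything about which target functions are likely in the first place.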