combining generalizers' guesses

Barak Pearlmutter bap at learning.siemens.com
Mon Jul 26 13:27:39 EDT 1993


To my mind, there is a fourth source of error, which is also addressed
by the ensemble or committee approach.  To your

   * noise in the data
   * sampling error
   * approximation error

I would add

   * randomness in the classifier itself

For instance, if you run backpropagation on the same data twice, with
the same architecture and all the other parameters held the same, it
will still typically come up with different answers, e.g. due to
differences in the random initial weights.  Averaging out this effect
is a guaranteed win; see the sketch below.
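
A minimal sketch of this kind of committee averaging, assuming
scikit-learn's MLPClassifier (the library, the toy dataset, and all
parameter settings below are illustrative choices of mine, not part of
the original setup).  Each member sees the identical data and
architecture; only the random seed, and hence the initial weights,
differs:

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.neural_network import MLPClassifier

    # Toy two-class problem standing in for "the same data".
    X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

    # Train K nets that differ only in their random initial weights;
    # architecture, data, and hyperparameters are all held fixed.
    members = [
        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                      random_state=seed).fit(X, y)
        for seed in range(5)
    ]

    # Committee guess: average the members' class-probability
    # estimates, then take the most probable class.
    avg_proba = np.mean([m.predict_proba(X) for m in members], axis=0)
    committee_pred = avg_proba.argmax(axis=1)

Since the members are identically distributed and differ only in their
initializations, the averaged guess has the same expected prediction as
a single run but lower variance, which is the "guaranteed win" above.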

					--Barak.

