No subject

David Wolpert dhw at t13.Lanl.GOV
Mon Sep 9 11:41:03 EDT 1991


Zoubin Ghahramani writes

"How does one interpret generalization as interpolation in a problem
like n-bit parity? For any given data point, the n nearest neighbours
in input space would all predict an incorrect classification. However,
I wouldn't say that a problem like parity is ungeneralizable.
"

A very good example. In fact, any generalizer that acts in a somewhat
local manner (i.e., one that relies mostly on nearby elements of the
training set) has the striking property that, for the parity problem,
the larger the training set, the *worse* the generalization off of
that training set, for precisely the reason Dr. Ghahramani gives.
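
For concreteness, here is a minimal sketch of the effect (my own
illustration, not part of the original exchange), taking a
1-nearest-neighbor rule under Hamming distance as the "local"
generalizer; every function name below is mine. Since each
Hamming-distance-1 neighbor of a bit string has the opposite parity,
the off-training-set error tends to climb toward 100% as the training
set fills in:

    # Illustrative sketch only: 1-NN under Hamming distance on
    # n-bit parity, measuring error on points *outside* the
    # training set as the training set grows.
    import itertools
    import random

    def parity(x):
        # Label is the parity (sum mod 2) of the bits.
        return sum(x) % 2

    def hamming(a, b):
        # Number of bit positions where a and b differ.
        return sum(u != v for u, v in zip(a, b))

    def off_training_set_error(n, train_size, seed=0):
        # 1-NN error rate on the points not in the training set.
        rng = random.Random(seed)
        points = list(itertools.product((0, 1), repeat=n))
        rng.shuffle(points)
        train, test = points[:train_size], points[train_size:]
        wrong = sum(
            parity(min(train, key=lambda t: hamming(x, t))) != parity(x)
            for x in test
        )
        return wrong / len(test)

    if __name__ == "__main__":
        n = 6  # 2**6 = 64 points in all
        for m in (4, 16, 32, 63):
            print(m, off_training_set_error(n, m))

With all but one point in the training set (m = 63), every nearest
neighbor of the held-out point sits at Hamming distance 1, so the
prediction is guaranteed wrong.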

Interestingly, for small versions of the parity problem (i.e., a small
number of bits), backprop has exactly this property: as the training
set grows, so does its error rate off of the training set.

(Dave Rumelhart has told me that this property goes away in
big versions of parity, however.)
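
(For the curious, here is a rough sketch of the corresponding backprop
experiment; it is my own illustration, and the architecture, learning
rate, and number of epochs are arbitrary assumptions rather than
anything from the post or from Rumelhart's experiments. Whether the
off-training-set error actually grows with the training set will
depend on the particular run.)

    # Illustrative sketch only: plain backprop (one hidden tanh
    # layer, sigmoid output, full-batch gradient descent on the
    # cross-entropy loss) applied to subsets of n-bit parity.
    # All hyperparameters below are arbitrary illustrative choices.
    import itertools
    import numpy as np

    def parity_dataset(n):
        # All 2**n bit strings and their parity labels.
        X = np.array(list(itertools.product((0, 1), repeat=n)),
                     dtype=float)
        y = X.sum(axis=1) % 2
        return X, y

    def train_mlp(X, y, hidden=8, lr=0.5, epochs=5000, seed=0):
        rng = np.random.default_rng(seed)
        W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
        b1 = np.zeros(hidden)
        W2 = rng.normal(scale=0.5, size=hidden)
        b2 = 0.0
        for _ in range(epochs):
            h = np.tanh(X @ W1 + b1)
            p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
            d_out = (p - y) / len(y)   # dL/d(pre-sigmoid output)
            dW2 = h.T @ d_out
            db2 = d_out.sum()
            d_h = np.outer(d_out, W2) * (1.0 - h ** 2)
            dW1 = X.T @ d_h
            db1 = d_h.sum(axis=0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2
        def predict(Z):
            # Threshold the logit at 0, i.e. the probability at 0.5.
            return np.tanh(Z @ W1 + b1) @ W2 + b2 > 0.0
        return predict

    def off_train_error(n, train_size, seed=0):
        X, y = parity_dataset(n)
        order = np.random.default_rng(seed).permutation(len(X))
        tr, te = order[:train_size], order[train_size:]
        predict = train_mlp(X[tr], y[tr], seed=seed)
        return float((predict(X[te]) != (y[te] == 1)).mean())

    if __name__ == "__main__":
        for m in (4, 8, 12, 14):
            print(m, off_train_error(4, m))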



David Wolpert

