No subject

David Wolpert dhw at t13.Lanl.GOV
Tue Sep 3 11:24:48 EDT 1991


Thomas Hildebrandt writes:

"I have come to treat interpolation and generalization as the same
animal, since obtaining good generalization is a matter of
interpolating in the right metric space (i.e. the one that best models
the underlying process)."

Certainly I am sympathetic to this point of view. Simple versions
of nearest neighbor interpolation (i.e., memory-based reasoners) do
very well in many circumstances. (In fact, I've published a couple of
papers making just that point.) However, it is trivial to construct
problems where the target function is extremely volatile and
non-smooth in any "reasonable" metric; who are we to say that Nature
should not be allowed to have such target functions? Moreover, for a
number of discrete, symbolic problems, the notion of a "metric" is
ill-defined, to put it mildly.
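
To make the metric-dependence concrete, here is a minimal sketch (mine,
in present-day Python, not part of the original exchange) in which the
same nearest-neighbor rule gives different answers under two different
metrics; the data, query point, and weighting are all invented for
illustration:

    import numpy as np

    def nearest_neighbor(query, X, y, metric):
        """Predict the label of the training point closest under `metric`."""
        distances = [metric(query, x) for x in X]
        return y[int(np.argmin(distances))]

    euclidean = lambda a, b: np.linalg.norm(a - b)
    # A hypothetical metric that weights the second coordinate 10x as heavily.
    weighted = lambda a, b: np.sqrt((a[0] - b[0])**2 + 100.0 * (a[1] - b[1])**2)

    X = np.array([[0.0, 1.0], [3.0, 0.0]])
    y = np.array([0, 1])
    q = np.array([2.0, 0.9])

    print(nearest_neighbor(q, X, y, euclidean))  # -> 1 (q is closer to [3, 0])
    print(nearest_neighbor(q, X, y, weighted))   # -> 0 (second axis dominates)

Both runs "interpolate" perfectly well; the predictions differ only
because the metrics differ, and that is exactly the freedom at issue.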

I am not claiming that metric-based generalizers will necessarily do
poorly for these kinds of problems. Rather, I'm simply saying that it is a
bit empty to state that

"If the form of the underlying metric space is unknown, then it is a
toss-up whether sigmoidal sheets, RBFs, piece-wise hyperplanar, or any
number of other basis functions will work best."

That's like saying that if the underlying target function is unknown,
then it is a toss-up what hypothesis function will work best.
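
As a concrete (and entirely made-up) illustration of that toss-up: the
two interpolants sketched below agree exactly on the training points,
yet disagree at every query in between, and nothing in the data alone
says which one is right:

    import numpy as np

    x_train = np.array([0.0, 1.0, 2.0, 3.0])
    y_train = np.array([0.0, 1.0, 0.0, 1.0])

    # Gaussian RBF interpolant: solve for weights that fit the data exactly.
    phi = lambda r: np.exp(-r**2)
    G = phi(np.abs(x_train[:, None] - x_train[None, :]))
    w = np.linalg.solve(G, y_train)
    rbf = lambda x: phi(np.abs(x - x_train)) @ w

    print(rbf(1.5))                          # Gaussian-RBF prediction at 1.5
    print(np.interp(1.5, x_train, y_train))  # piecewise-linear prediction: 0.5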

Loosely speaking, "interpolation" is something you do once you've
decided on the metric. In addition to such interpolation,
"generalization" also involves the preceding step of performing the
"toss up" between metrics in a (hopefully) rational manner.


David Wolpert (dhw at tweety.lanl.gov)

