No Free Lunch? (Delayed reply)
simonpe at aisb.ed.ac.uk
Wed Nov 8 07:32:38 EST 1995
Pau Bofill writes:
>Theorems for Search, and I was surprised
>by his (and D.H. Wolpert's) statement that
>
> "any two search algorithms have exactly the same expected
> performance, over the space of all fitness functions."
>
>which seemed to invalidate statements like
>
> "my search algorithm beats your search algorithm according
> to a particular performance measure for the following fitness
> function."
These statements aren't contradictory (as you point out in your
message). Averaged over the space of all possible optimization
functions, sure, NFL says that all optimization algorithms are
equal. But of course for any _particular_ optimization problem that
you might be interested in, some algorithms are definitely much better
than others.
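To make the averaging claim concrete, here's a quick illustrative
sketch (Python, a toy setup of my own, not anything taken from the NFL
paper): take a search space of four points with fitness values in
{0,1}, enumerate all 16 possible fitness functions, and count how many
evaluations two fixed-order, non-revisiting "algorithms" need before
they first see the maximum.

    from itertools import product

    POINTS = [0, 1, 2, 3]

    def evals_to_find_max(order, fitness):
        # Number of evaluations a fixed visiting order needs before it
        # first sees the best fitness value (a non-revisiting "algorithm").
        best = max(fitness)
        for step, x in enumerate(order, start=1):
            if fitness[x] == best:
                return step
        return len(order)

    def average_performance(order):
        # Average over *all* fitness functions f : POINTS -> {0, 1}.
        all_functions = list(product([0, 1], repeat=len(POINTS)))
        total = sum(evals_to_find_max(order, f) for f in all_functions)
        return total / len(all_functions)

    # Two different "search algorithms": left-to-right vs right-to-left.
    print(average_performance([0, 1, 2, 3]))   # 1.6875
    print(average_performance([3, 2, 1, 0]))   # 1.6875 -- identical, as NFL says

Both visiting orders average exactly the same number of evaluations over
the whole space of fitness functions, just as NFL predicts - even though
on any single fitness function one order can of course beat the other.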
I think the importance of the NFL theorem is that it destroys the idea
that there's such a thing as a `super-powerful all-purpose general
learning algorithm' that can optimize any problem quickly. Instead, if
we want a learning algorithm to work well, we have to analyze the
particular problems we're interested in and tailor the learning
algorithm to suit. Essentially we have to find ways of building prior
domain knowledge about a problem into learning algorithms to solve
that problem effectively.
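As a toy illustration of what building in prior knowledge can buy you
(again just a sketch of my own, with made-up names): if we know in
advance that every fitness function in our problem class is unimodal
over 0..N-1, a ternary-style search can exploit that structure and find
the optimum in O(log N) evaluations, whereas a searcher making no such
assumption may have to look at every point.

    def ternary_search_max(fitness, lo, hi):
        # Finds the argmax of a *unimodal* integer function on [lo, hi]
        # in O(log N) evaluations - only valid because of the
        # unimodality assumption we've built in.
        evals = 0
        while hi - lo > 2:
            m1 = lo + (hi - lo) // 3
            m2 = hi - (hi - lo) // 3
            evals += 2
            if fitness(m1) < fitness(m2):
                lo = m1 + 1
            else:
                hi = m2
        candidates = range(lo, hi + 1)
        evals += len(candidates)
        return max(candidates, key=fitness), evals

    # A unimodal fitness function on 0..999, peaking at 700.
    f = lambda x: -(x - 700) ** 2
    print(ternary_search_max(f, 0, 999))   # finds 700 in ~30 evals, not 1000

Of course, fed a fitness function from outside that class, this
"clever" searcher can do arbitrarily badly - which is the NFL point
again.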
On the other hand, some people claim that there _is_ a general
sub-class of problems for which it's feasible to find a solution at
all - perhaps problems whose solutions have low Kolmogorov
complexity or something - and this might well mean that there _is_ a
generally good learning algorithm for this class of `interesting
problems'.
Any comments?
<< Simon Perkins >> Dept of AI, Edinburgh University
S.Perkins at ed.ac.uk
http://www.dai.ed.ac.uk/students/simonpe/
Tel: +44 131 650 3084