No Free Lunch? (Delayed reply)
Juergen Schmidhuber
juergen at idsia.ch
Thu Nov 9 10:52:24 EST 1995
In his response to NFL issues, Simon Perkins writes:
> On the other hand, some people claim that there _is_ a general
> sub-class of problems for which it's at all feasible to find a
> solution - perhaps problems whose solutions have low Kolmogorov
> complexity or something - and this might well mean that there _is_ a
> general good learning algorithm for this class of `interesting
> problems'.
A comment on this:
We already know that for a wide variety of non-incremental search
problems, there *is* a theoretically optimal algorithm: Levin's
universal search algorithm (LS) (Ref.: L. A. Levin, Universal
sequential search problems, Problems of Information Transmission,
9(3):265--266, 1973). Essentially, LS generates and tests
solution candidates in order of their Levin complexities,
until a solution is found (Levin complexity is a time-bounded
restriction of Kolmogorov complexity). For instance, suppose there
is an algorithm that solves a certain type of maze task in O(n^3)
steps, where $n$ is a positive integer representing problem size.
Then LS will solve the same task in at most O(n^3) steps (you
may worry about the constant factor buried in the O-notation,
though).
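To make the schedule concrete, here is a minimal sketch (in Python,
purely illustrative, not taken from the cited paper). The interpreter
`run' and the test `is_solution' are hypothetical placeholders for a
fixed universal machine and the task at hand; the point is only the
time allocation: in phase i, every program p of length at most i
receives 2^(i - |p|) steps, so a program whose length plus the
logarithm of its runtime (its Levin complexity Kt) is small finishes
after roughly 2^Kt total work.

    from itertools import product

    def levin_search(run, is_solution, max_phase=30):
        # Minimal sketch of Levin's universal search (LS), illustration only.
        # Assumed (hypothetical) interfaces, not part of any published code:
        #   run(program, max_steps) -> candidate solution, or None if the
        #                              program did not halt within the budget
        #   is_solution(candidate)  -> True iff the candidate solves the task
        # In phase i, each bit-string program p with len(p) <= i is run for
        # at most 2**(i - len(p)) steps, so shorter programs receive
        # exponentially more time per phase.
        for phase in range(1, max_phase + 1):
            for length in range(1, phase + 1):
                budget = 2 ** (phase - length)   # time slice for this length
                for bits in product("01", repeat=length):
                    program = "".join(bits)
                    candidate = run(program, budget)
                    if candidate is not None and is_solution(candidate):
                        return program, candidate   # success in this phase
        return None                                 # budget exhausted

The work per phase grows geometrically, so the total cost is dominated
by the first phase in which some program both fits the length bound and
halts within its slice, which is what yields the O(n^3) bound (times a
constant) in the maze example above.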
Of course, there are huge classes of search problems that cannot
be solved efficiently (say, in polynomial time) by LS or by any
other method. But most problems in the set of all
possible, well-defined problems are indeed ``uninteresting''.
Admittedly, I do not have a good objective definition of what's
``interesting''. The best I can come up with in the current
context is, somewhat circularly, a possible superset of
interesting problems: ``problems that can be solved efficiently
by an optimal search algorithm''.
Anyway, the existence of LS fuels hope: just as there is an optimal,
general search algorithm for many *non-incremental* search problems,
there may be an optimal, general learning algorithm for *incremental*
search problems (``incremental'' means: you may try to use experience
with previous tasks to improve performance on new tasks).
LS by itself is *not* necessarily optimal in incremental learning
situations. For this reason, Marco Wiering and I are currently
combining LS with the recent technique of ``environment-independent
reinforcement acceleration'' (EIRA), at present the only method
that guarantees a lifelong history of success accelerations, even
in unrestricted environments (a write-up may follow soon; related
publications are on my home page).
Juergen Schmidhuber
IDSIA, Corso Elvezia 36, CH-6900 Lugano
http://www.idsia.ch/~juergen