Prediction and Automatic Task Decomposition
Rafal Salustowicz
rafal at idsia.ch
Thu Sep 17 10:04:18 EDT 1998
LEARNING TO PREDICT THROUGH PROBABILISTIC INCREMENTAL
PROGRAM EVOLUTION AND AUTOMATIC TASK DECOMPOSITION
Rafal Salustowicz and Juergen Schmidhuber
Technical Report IDSIA-11-98
Analog gradient-based recurrent neural nets can learn complex
prediction tasks. Most, however, tend to fail in the case of long
minimal time lags between relevant training events. On the other
hand, discrete methods such as search in a space of event-memorizing
programs are not necessarily affected at all by long time lags:
we show that discrete "Probabilistic Incremental Program Evolution"
(PIPE) can solve several long time lag tasks that so far have been
successfully solved by only one analog method ("Long Short-Term
Memory" - LSTM). In fact, sometimes PIPE even outperforms LSTM.
Existing discrete methods, however, cannot easily deal with
problems whose solutions exhibit comparatively high algorithmic
complexity. We overcome this drawback by introducing filtering,
a novel, general, data-driven divide-and-conquer technique for
automatic task decomposition that is not limited to a particular
learning method. We compare PIPE plus filtering to various analog
recurrent net methods.
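For readers unfamiliar with PIPE, its core idea is to maintain a
probabilistic prototype tree - a tree of probability distributions over
program instructions - sample candidate programs from it, and shift
probability mass toward the instructions used by the best program found.
The following Python sketch illustrates only that sampling/adaptation
loop under simplifying assumptions: the toy instruction set, the fixed
learning rate, and the renormalizing update rule are hypothetical
stand-ins, not the actual algorithm from the report.

```python
import random

# Hypothetical toy instruction set for illustration only.
TERMINALS = ["x", "1.0"]
FUNCTIONS = ["+", "*"]          # binary arithmetic operators
INSTRUCTIONS = TERMINALS + FUNCTIONS

class Node:
    """One node of the probabilistic prototype tree: a distribution
    over instructions, with lazily created argument subtrees."""
    def __init__(self):
        self.probs = {i: 1.0 / len(INSTRUCTIONS) for i in INSTRUCTIONS}
        self.children = {}

    def sample(self, rng, depth=0, max_depth=4):
        # Force a terminal at maximum depth so sampled programs stay finite.
        choices = TERMINALS if depth >= max_depth else INSTRUCTIONS
        weights = [self.probs[i] for i in choices]
        instr = rng.choices(choices, weights=weights)[0]
        if instr in TERMINALS:
            return instr
        for arg in (0, 1):
            self.children.setdefault(arg, Node())
        left = self.children[0].sample(rng, depth + 1, max_depth)
        right = self.children[1].sample(rng, depth + 1, max_depth)
        return f"({left} {instr} {right})"

    def reinforce(self, instr, lr=0.1):
        # Shift probability mass toward the instruction the best program
        # used, then renormalize (a simplified adaptation rule).
        self.probs[instr] += lr
        total = sum(self.probs.values())
        for k in self.probs:
            self.probs[k] /= total

rng = random.Random(0)
root = Node()
program = root.sample(rng)   # a string such as "(x + (x * 1.0))"
root.reinforce("x")          # bias future sampling toward "x" at the root
```

In the full method, each node of the prototype tree keeps its own
distribution, and the adaptation step raises the probability of the
entire best program, not a single instruction.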
ftp://ftp.idsia.ch/pub/rafal/TR-11-98-filter_pipe.ps.gz
http://www.idsia.ch/~rafal/research.html
Rafal & Juergen, IDSIA, Switzerland www.idsia.ch