Papers available: modular genetic programming
rosca@cs.rochester.edu
Fri Jun 7 18:50:43 EDT 1996
The following papers on discovery of subroutines in Genetic
Programming are now available for retrieval via ftp.
Ftp: ftp://ftp.cs.rochester.edu/pub/u/rosca/gp
WWW: http://www.cs.rochester.edu/u/rosca/research.html
Comments and suggestions are welcome.
Justinian --
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Justinian Rosca Internet: rosca at cs.rochester.edu
University of Rochester Office: (716) 275-1174
Department of Computer Science Fax: (716) 461-2018
Rochester, NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rosca/
EVOLUTION-BASED DISCOVERY OF HIERARCHICAL BEHAVIORS
J.P. Rosca and D.H. Ballard
AAAI-96, The MIT Press, 1996
ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/96.aaai.ps.gz
(7 pages; 100k compressed)
Abstract:
Policy learning in a reinforcement
learning task becomes harder primarily as the number of
observations grows. Unfortunately, the number of observations may be
unacceptably high even for simple problems. To cope with
this scale-up problem we adopt procedural representations of
policies. Procedural representations have two advantages. First,
they are implicit, allowing good inductive generalization over a
very large set of input states. Second, they facilitate
modularization. In this paper we compare several randomized
algorithms for learning modular procedural representations. The main
algorithm, called Adaptive Representation through Learning (ARL), is
a genetic programming extension that relies on the discovery of
subroutines. ARL is suited to learning hierarchies of subroutines
and to constructing policies for complex tasks. When the learning
problem cannot be solved because the specification is too loose and
the domain is not well understood, ARL discovers regularities in
the problem environment in the form of subroutines, which often make
the problem easier to solve. ARL was successfully tested on a
typical reinforcement learning problem: controlling an agent in a
dynamic and non-deterministic environment, where the discovered
subroutines correspond to agent behaviors.
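To illustrate the distinction the abstract draws, a toy sketch in Python (the two-coordinate observation, the action names, and all identifiers are illustrative assumptions, not from the paper): an implicit, procedural policy computes its action from the observation, so it covers input states a lookup table has never seen.

```python
# Toy contrast between an explicit (tabular) and an implicit
# (procedural) policy. The gridworld-style observation and the action
# names are illustrative assumptions only.

def tabular_policy(table, obs):
    """Explicit representation: one stored action per observed state;
    it cannot answer for states missing from the table."""
    return table[obs]

def procedural_policy(obs):
    """Implicit representation: a short program computes the action,
    so it generalizes to observations never seen during learning."""
    dx, dy = obs  # e.g. relative position of a target
    if abs(dx) > abs(dy):
        return "east" if dx > 0 else "west"
    return "north" if dy > 0 else "south"
```

A table over a 100x100 grid of relative positions needs 10,000 entries, while the four-line procedure covers them all; this is one way to read the abstract's "good inductive generalization over a very large set of input states".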
DISCOVERY OF SUBROUTINES IN GENETIC PROGRAMMING
J.P. Rosca and D.H. Ballard
In: Advances in Genetic Programming II
Edited by P. Angeline and K. Kinnear Jr. MIT Press, 1996.
ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/96.aigp2.dsgp.ps.gz
(25 pages; 439k compressed)
Abstract:
A fundamental problem in learning from observation and interaction
with an environment is defining a good representation, that is, a
representation that captures the underlying structure and
functionality of the domain. This chapter discusses an extension of
the genetic programming (GP) paradigm based on the idea that
subroutines obtained from blocks of good representations act as
building blocks and may enable a faster evolution of even better
representations. This GP extension algorithm is called adaptive
representation through learning (ARL). It has built-in mechanisms
for (1) creation of new subroutines through discovery and
generalization of blocks of code; (2) deletion of subroutines. The
set of evolved subroutines extracts common knowledge emerging during
the evolutionary process and acquires the necessary structure for
solving the problem. ARL was successfully tested on the problem of
controlling an agent in a dynamic and non-deterministic environment.
Results with the automatic discovery of subroutines show the
potential to better scale up the GP technique to complex problems.
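Mechanism (1) above can be sketched roughly as follows; the tuple encoding of program trees, the block-frequency heuristic, and every identifier are assumptions for illustration, not the paper's actual ARL algorithm.

```python
from collections import Counter

# Rough sketch of subroutine creation: find a recurring block of code
# across a population of program trees and generalize it into a
# parameterized subroutine. All names are hypothetical.

def blocks(tree):
    """Enumerate every sub-tree (candidate block of code)."""
    if isinstance(tree, tuple):
        yield tree
        for child in tree[1:]:
            yield from blocks(child)

def discover_subroutine(population):
    """Take the most frequent block across the population and abstract
    its leaves into formal parameters x0, x1, ..."""
    counts = Counter(b for ind in population for b in blocks(ind))
    block, _ = counts.most_common(1)[0]
    params = []
    def abstract(t):
        if isinstance(t, tuple):  # keep the operator, recurse on children
            return (t[0],) + tuple(abstract(c) for c in t[1:])
        params.append(t)          # generalize each leaf to a parameter
        return f"x{len(params) - 1}"
    return abstract(block), params
```

For example, given the population `[("f", "a", "b"), ("g", ("f", "a", "b"), "c")]`, the block `("f", "a", "b")` recurs and is generalized to the subroutine body `("f", "x0", "x1")` with actual arguments `["a", "b"]`; in the chapter's terms, such a subroutine would join the evolving representation, and mechanism (2) would later delete it if it stops being useful.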
More information about the Connectionists mailing list