ICML Workshop on Hierarchy and Memory in Reinforcement Learning

David Andre dandre at cs.berkeley.edu
Sat Feb 10 23:38:44 EST 2001


Workshop on Hierarchy and Memory in Reinforcement Learning

ICML 2001, Williams College, June 28, 2001

Call for Participation

In recent years, much research in reinforcement learning has focused
on learning, planning, and representing knowledge at multiple levels
of temporal abstraction.  If reinforcement learning is to scale to
larger, more realistic problems, it is essential to take a
hierarchical approach in which a complex learning task is decomposed
into subtasks.  Both recent and earlier work has shown that such a
hierarchical approach substantially increases the efficiency and
capabilities of RL systems.

Early work in reinforcement learning showed that learning was faster
when tasks were decomposed into behaviors (Lin, 1993; Mahadevan and
Connell, 1992; Singh et al., 1994).  However, these approaches were
mostly based on modular, rather than hierarchical, decompositions.  More
recently, researchers have proposed various models, the most widely
recognized being Hierarchies of Abstract Machines (HAMs) (Parr, 1998),
options (Sutton, Precup, and Singh, 1999), and MAXQ value function
decomposition (Dietterich, 1998).  A key technical breakthrough that
enabled these approaches is the use of reinforcement learning over
semi-Markov decision processes (Bradtke and Duff, 1995; Mahadevan et
al., 1997; Parr, 1998).  Although these approaches speed up learning
considerably, they still assume the underlying environment is
accessible (i.e., not perceptually aliased).

A major direction of research into scaling up hierarchical methods is
to extend them to domains where the underlying states are hidden (i.e.,
partially observable Markov decision processes, or POMDPs).  Over the
past year, there has been a surge of interest in applying hierarchical
reinforcement learning (HRL) methods to such partially observable
domains. Researchers are investigating techniques such as memory,
state abstraction, offline decomposition, action abstraction, and many
others to simplify the problem of learning near-optimal behaviors as
they attack increasingly complex environments.  

This workshop will be an opportunity for researchers in this growing
field to share knowledge and expertise on the topic, open lines of
communication for collaboration, avoid redundant research, and
possibly agree on standard problems and techniques.

The format of the workshop is designed to encourage discussion and
debate.  There will be approximately four invited talks by senior
researchers.  Ample time between talks will be provided for
discussion.  Additionally, there will be a NIPS-like poster session
(complete with plenary poster previews).  We will also be providing a
partially observable problem domain that participants can optionally
use to test their techniques, which may provide a common basis for
comparing and contrasting them.  The
discussion will focus not only on the various techniques and theories
for using memory in reinforcement learning, but also on higher-order
questions, such as ``Must hierarchy and memory be combined for
practical RL systems?''  and ``What forms of structured memory are
useful for RL?''.

We are thus seeking attendees with the following interests:

* Basic technologies applied to the problem of hierarchy and memory in RL
  - Function approximation
  - State abstraction
  - Finite memory methods

* Training Issues for hierarchical RL
  - Shaping
  - Knowledge transfer across tasks/subproblems

* Hierarchy Construction issues
  - Action models
  - Action decomposition methods
  - Automatic acquisition of hierarchical structure
  - Learning state abstractions

* Applications
  - Hierarchical motor control (especially with feedback)
  - Hierarchical visual attention/gaze control
  - Hierarchical navigation

Submissions:

 To participate in the workshop, please send an email message to one of
 the two organizers, giving your name, address, email address, and a
 brief description of your reasons for wanting to attend.  In
 addition, if you wish to present a poster, please send a
 short paper (< 5 pages in ICML format) in PostScript or PDF format to
 the organizers.  If you have questions, please feel free to contact
 us.

Important Dates:

 Deadline for workshop submissions:	March 22, 2001
 Notification of acceptance:		April 9, 2001
 Final version of workshop papers due:	April 30, 2001
 The workshop itself:			June 28, 2001

Committee:

 Workshop Chairs:
	David Andre (dandre at cs.berkeley.edu)
	Anders Jonsson (ajonsson at cs.umass.edu)

 Workshop Committee:
	Andrew Barto
	Natalia Hernandez
	Sridhar Mahadevan
	Ronald Parr
	Stuart Russell

Please see http://www-anw.cs.umass.edu/~ajonsson/icml/ for more
information, including directions for formatting the submissions and
details on the benchmark domain. 






