<html>
<head>
<meta http-equiv="content-type" content="text/html;
charset=ISO-8859-1">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-text-flowed" style="font-family: -moz-fixed;
font-size: 12px;" lang="x-western">REMINDER --- SECOND CALL FOR
POSTERS <br>
<br>
<br>
RSS 2013 WORKSHOP ON "Hierarchical and Structured Learning for
Robotics" <br>
<br>
==================================================================================================
<br>
<br>
Title: Hierarchical and Structured Learning for Robotics <br>
<br>
Organizers:<br>
Gerhard Neumann (<a class="moz-txt-link-abbreviated"
href="mailto:neumann@ias.tu-darmstadt.de">neumann@ias.tu-darmstadt.de</a>,
TU Darmstadt)<br>
George Konidaris (<a class="moz-txt-link-abbreviated"
href="mailto:gdk@csail.mit.edu">gdk@csail.mit.edu</a>, MIT
Computer Science and Artificial Intelligence Laboratory)<br>
Freek Stulp (<a class="moz-txt-link-abbreviated"
href="mailto:freek.stulp@ensta-paristech.fr">freek.stulp@ensta-paristech.fr</a>,
ENSTA - ParisTech)<br>
Jan Peters (<a class="moz-txt-link-abbreviated"
href="mailto:peters@ias.tu-darmstadt.de">peters@ias.tu-darmstadt.de</a>,
TU Darmstadt and MPI for Intelligent Systems)<br>
<br>
WWW: <a class="moz-txt-link-freetext"
href="http://www.ias.informatik.tu-darmstadt.de/Workshops/RSS2013">http://www.ias.informatik.tu-darmstadt.de/Workshops/RSS2013</a><br>
<br>
ABSTRACT:<br>
Learning robot control policies in complex real-world environments
is a major challenge for machine learning due to the inherent high
dimensionality, partial observability and the high costs of data
generation. Treating robot learning as a monolithic machine
learning problem and employing off-the-shelf approaches is
unrealistic at best. However, the physical world can yield
important insights into the inherent structure of control
policies, state or action spaces and reward functions. Many robot
motor tasks are also hierarchically structured decision tasks: a
tennis-playing robot, for example, has to combine different
striking movements sequentially. During locomotion, at least three
behaviors are simultaneously active, as a robot has to combine its
gait generation with foot placement and balance control. The first
domain-driven skill learning approaches have already yielded
impressive successes by incorporating such structural insights
into the learning process. Hence, a promising route to more
scalable policy learning includes the automatic exploitation of
the environment's structure, resulting in new structured learning
approaches for robot control. <br>
<br>
Structured and hierarchical learning has been an important trend
in machine learning in recent years. In robotics, researchers have
often naturally arrived at well-structured hierarchical policies
based on discrete-continuous partitions (e.g., local movement
generators combined through prioritized operational space control)
with nested control loops running at several different speeds
(i.e., fast control loops for smooth and accurate movement
execution, slower loops for model-predictive planning).
Furthermore, evidence from the cognitive sciences indicates that
humans also heavily exploit such structures and hierarchies.
Although such structures have been found in human motor control,
are favored in robot control and exist in machine learning, the
connections between these fields have not been well explored.
Transferring insights from structured prediction methods, which
exploit the inherent correlations in the data, to hierarchical
robot skill learning may be a crucial step. General approaches for
bringing structured policies, states, actions and rewards into
robot reinforcement learning may well be the key to tackling many
challenges of real-world robot environments and an important step
towards the vision of intelligent autonomous robots that can learn
rich and versatile sets of motor skills. This workshop aims to
reveal how complex motor skills typically exhibit structures that
can be exploited for learning reward functions and for finding
structure in the state or action space.<br>
<br>
In order to make progress towards the goal of structured learning
for robot control, this workshop brings together researchers from
different machine learning areas (such as reinforcement learning
and structured prediction), robotics, and related disciplines
(e.g., control engineering and the cognitive sciences). <br>
<br>
We particularly want to focus on the following important topics
for structured robot learning, which overlap substantially across
these fields:<br>
- Efficient representations and learning methods for
hierarchical policies<br>
- Learning in several layers of hierarchy<br>
- Structured representations for motor control and planning<br>
- Skill extraction and skill transfer<br>
- Sequencing and composition of behaviors<br>
- Hierarchical Bayesian Models for decision making and
efficient transfer learning<br>
- Low-dimensional manifolds as structured representations for
decision making<br>
- Exploiting correlations in the decision making process<br>
- Prioritized control policies in a multi-task reinforcement
learning setup<br>
<br>
These challenges are important steps to building intelligent
autonomous robots and may potentially motivate new research topics
in the related research fields.<br>
<br>
FORMAT: <br>
The aim of this workshop is to bring together researchers who are
interested in structured representations, reinforcement learning,
hierarchical learning methods and control architectures. <br>
Among these general topics we will focus on the following
questions:<br>
<br>
Structured representations:<br>
- How to efficiently use graphical models such as Markov
random fields to exploit correlations in the decision making
process?<br>
- How to extract the relevant structure (e.g. low dimensional
manifolds, factorizations...) from the state and action space?<br>
- Can we efficiently model structure in the reward function or
the system dynamics?<br>
- How to learn good features for the policy or the value
function?<br>
- What can we learn from structured prediction?<br>
<br>
Representations of behavior: <br>
- What are good representations for motor skills?<br>
- How can we efficiently reuse skills in new situations?<br>
- How can we extract movement skills and elemental movements
from demonstrations?<br>
- How can we compose skills to solve a combination of tasks?<br>
- How can we represent versatile motor skills?<br>
- How can we represent and exploit the correlations over time
in the decision process?<br>
<br>
Structured Control:<br>
- How to efficiently use structured representations for
planning and control?<br>
- Can we learn task-priorities and use similar policies as in
task-prioritized control?<br>
    - How to decompose optimal control laws into elemental
movements?<br>
- How to use low-dimensional manifolds to control
high-dimensional, redundant systems?<br>
    - Can we use chain or tree-like structures as policy
representation to mimic the kinematic structure of the robot?<br>
<br>
Hierarchical Learning Methods: <br>
- How can we efficiently apply abstractions to the control
problem?<br>
- How to efficiently learn at several layers of hierarchy?<br>
- Which policy search algorithms are appropriate for which
hierarchical representation?<br>
- Can we use hierarchical inverse reinforcement learning to
acquire skill reward functions, and priors over selecting those
skills? <br>
- How can we decide when to create new skills or re-use known
ones?<br>
- How can we discover and generalize important sub-goals in
our movement plan?<br>
<br>
Skill Transfer:<br>
- How can we efficiently transfer skills to new situations?<br>
    - Can we use hierarchical Bayesian models to learn at several
layers of abstraction in decision making?<br>
- How to transfer learned models or even value functions to
new tasks?<br>
<br>
<br>
IMPORTANT DATES <br>
June 1st - Poster submission deadline <br>
June 4th - Notification of poster acceptance <br>
<br>
<br>
SUBMISSIONS <br>
Extended abstracts (1 page) will be reviewed by the program
committee members on the basis of relevance, significance, and
clarity. Accepted contributions will be presented as posters;
particularly exciting work may be considered for talks.
Submissions should be formatted according to the conference
templates and submitted via email to <a
class="moz-txt-link-abbreviated"
href="mailto:neumann@ias.tu-darmstadt.de">neumann@ias.tu-darmstadt.de</a>.
<br>
<br>
ORGANIZERS <br>
Gerhard Neumann, Technische Universitaet Darmstadt <br>
George Konidaris,
MIT Computer Science and Artificial Intelligence Laboratory<br>
Freek Stulp,
ENSTA - ParisTech<br>
Jan Peters, Technische Universitaet Darmstadt and Max Planck
Institute for Intelligent Systems <br>
<br>
LOCATION AND MORE INFORMATION <br>
The most up-to-date information about the workshop can be found on
the RSS 2013 webpage. <br>
<br>
</div>
</body>
</html>