<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
3rd CALL FOR PAPERS - important news on TRAVEL SCHOLARSHIPS<br>
<br>
NIPS 2014 WORKSHOP on "Autonomously Learning Robots"<br>
<br>
===========================================================<br>
<br>
We are happy to announce that we can offer travel scholarships for
students who want to attend the workshop. Each scholarship will be
at most €500 (approx. $630) per student and will be awarded to
the 10 best student submissions. Please indicate in your submission
email whether the first author is a student and whether you want
to apply for the scholarship.<br>
<br>
<br>
== Quick Facts ==<br>
<br>
Call For Papers:<br>
Authors can submit a 2-6 page paper that will be reviewed by the
organization committee. Papers can present new work or give a
summary of the authors' recent work. All papers will be
considered for the poster sessions. Outstanding long papers (4-6
pages) will also be considered for a 20-minute oral presentation.
Submissions should be sent by email to <a
href="mailto:autonomous.learning.robots@gmail.com">autonomous.learning.robots@gmail.com</a>
with the subject prefix [ALR-Submission].<br>
<br>
Important Dates:<br>
* 1st Call for Papers: August 26th, 2014<br>
* Paper submission deadline: October 3rd, 2014 (23:59 PST)<br>
* Paper acceptance notification: October 27th, 2014<br>
* Camera-ready deadline: November 30th, 2014<br>
<br>
Conference: NIPS 2014 (<a href="http://nips.cc/Conferences/2014/">http://nips.cc/Conferences/2014/</a>)<br>
Location: Montreal, Canada<br>
Homepage: <a
href="http://www.ias.tu-darmstadt.de/Workshops/NIPS2014">http://www.ias.tu-darmstadt.de/Workshops/NIPS2014</a><br>
<br>
Organizers: <br>
Gerhard Neumann (<a
href="http://www.ias.tu-darmstadt.de/Team/GerhardNeumann">http://www.ias.tu-darmstadt.de/Team/GerhardNeumann</a>)<br>
Joelle Pineau (<a href="http://www.cs.mcgill.ca/%7Ejpineau/">http://www.cs.mcgill.ca/~jpineau/</a>)<br>
Peter Auer (<a href="http://personal.unileoben.ac.at/auer/">http://personal.unileoben.ac.at/auer/</a>)<br>
Marc Toussaint (<a
href="http://ipvs.informatik.uni-stuttgart.de/mlr/marc/">http://ipvs.informatik.uni-stuttgart.de/mlr/marc/</a>)<br>
<br>
Topics: <br>
- More Autonomous Reinforcement Learning for Robotics<br>
- Autonomous Sub-Goal Extraction<br>
- Bayesian Parameter and Model Selection<br>
- Active Search and Autonomous Exploration<br>
- Autonomous Feature Extraction, Kernel Methods and Deep Learning
for Robotics<br>
- Learning from Human Instructions, Inverse Reinforcement
Learning and Preference Learning for Robotics<br>
- Generalization of Skills with Multi-Task Learning<br>
- Learning Forward Models and Efficient Model-Based Policy Search<br>
- Learning to Exploit the Structure of Control Tasks<br>
- Movement Primitives and Modular Control Architectures<br>
<br>
== Abstract ==<br>
<br>
To autonomously assist human beings, future robots have to
learn a rich set of complex behaviors on their own. So far, the role
of machine learning in robotics has been largely limited to solving
pre-specified sub-problems, in many cases with off-the-shelf machine
learning methods. The problems addressed are mostly homogeneous,
e.g., learning a single type of movement is sufficient to solve the
task, and do not reflect the complexities involved in solving
real-world tasks. <br>
<br>
In a real-world environment, learning is much more challenging than
solving such homogeneous problems. The agent has to autonomously
explore its environment and discover versatile behaviours that can
be used to solve a multitude of different tasks throughout its
future learning process. It needs to determine when to reuse
already known skills, by adapting, sequencing or combining the
learned behaviours, and when to learn new behaviours. To do so, it
needs to autonomously decompose complex real-world tasks into
simpler sub-tasks such that the learned solutions for these
sub-tasks can be re-used in new situations. It needs to form
internal representations of its environment, which may contain a
large variety of objects as well as other agents, such as other
robots or humans. Such internal representations also need to shape
the structure of the policy and/or the value function used by the
algorithm, which must be flexible enough to capture the huge
variability of tasks that can be encountered in the real world. Due
to the multitude of possible tasks, the agent also cannot rely on a
manually tuned reward function for each task, and hence it needs to
find a more general representation of the reward function. Moreover,
an autonomous robot is likely to interact with one or more human
operators who are typically experts in a certain task, but not
necessarily experts in robotics. Hence, an autonomously learning
robot should also make effective use of feedback that can be
acquired from a human operator. <br>
Typically, different types of instructions from the human are
available, such as demonstrations and evaluative feedback in the form
of a continuous quality rating, a ranking between solutions or a set
of preferences. To facilitate the learning problem, such additional
human instructions should be used autonomously whenever they are
available. Yet the robot also needs to be able to reason about its
competence to solve a task. If the robot judges its competence to be
poor, or its uncertainty about that competence is high, it should
request more instructions from the human expert.<br>
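<br>
To make the preference-based feedback above concrete, the following
Python sketch fits a linear reward model to pairwise trajectory
preferences with a Bradley-Terry likelihood. It is a minimal
illustration under assumed names and features (fit_preference_reward
and the 3-dimensional feature vectors are invented for this example),
not a method proposed by the workshop.<br>
<pre>
# Minimal sketch: learning a linear reward from pairwise trajectory
# preferences via a Bradley-Terry model. All names and features are
# illustrative assumptions.
import numpy as np

def fit_preference_reward(pref_pairs, dim, lr=0.1, iters=500):
    """pref_pairs: list of (phi_winner, phi_loser) trajectory feature
    vectors, where a human preferred the first trajectory."""
    w = np.zeros(dim)
    for _ in range(iters):
        grad = np.zeros(dim)
        for phi_w, phi_l in pref_pairs:
            diff = phi_w - phi_l
            # Bradley-Terry: P(winner preferred) = sigmoid(w . diff)
            p = 1.0 / (1.0 + np.exp(-w @ diff))
            grad += (1.0 - p) * diff  # gradient of the log-likelihood
        w += lr * grad / len(pref_pairs)
    return w  # learned reward: reward(traj) ~ w @ phi(traj)

# Hypothetical usage with 3-dimensional trajectory features:
pairs = [(np.array([1.0, 0.2, 0.0]), np.array([0.1, 0.9, 0.3]))]
w = fit_preference_reward(pairs, dim=3)
</pre>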
<br>
Most machine learning algorithms lack these types of autonomy. They
still rely on a large amount of engineering and fine-tuning by a
human expert. The human typically needs to specify the representation
of the reward function, of the state, of the policy or of other
internal representations used by the learning algorithms. Typically,
the decomposition of complex tasks into sub-tasks is performed by the
human expert, and the parameters of such algorithms are fine-tuned by
hand. The algorithms typically learn from a pre-specified source of
feedback and cannot autonomously request further instructions such as
demonstrations, evaluative feedback or corrective actions. We believe
that this lack of autonomy is one of the key reasons why robot
learning has not yet been scaled to more complex, real-world tasks:
learning such tasks would require a huge amount of fine-tuning, which
is very costly on real robot systems. <br>
<br>
== Goal ==<br>
<br>
In this workshop, we want to bring together people from the fields
of robotics, reinforcement learning, active learning, representation
learning and motor control. The goal of this multi-disciplinary
workshop is to develop new ideas to increase the autonomy of current
robot learning algorithms and to make their use more practical for
real-world applications. In this context, the questions we intend to
tackle include:<br>
<br>
More Autonomous Reinforcement Learning<br>
- How can we automatically tune hyper-parameters of reinforcement
learning algorithms, such as learning and exploration rates? (A
minimal random-search baseline is sketched after this list.)<br>
- Can we find reinforcement learning algorithms that are less
sensitive to the settings of their hyper-parameters and can therefore
be used for a multitude of tasks with the same parameter values?<br>
- How can we efficiently generalize learned skills to new
situations?<br>
- Can we transfer the success of deep learning methods to robot
learning?<br>
- How do we learn on several levels of abstraction, and how do we
identify useful abstractions?<br>
- How can we identify useful elemental behaviours that can be used
for a multitude of tasks?<br>
- How do we use RL on raw sensory input without a hand-coded
representation of the state?<br>
- Can we learn forward models of the robot and its environment from
high dimensional sensory data? How can these forward models be used
effectively for model-based reinforcement learning?<br>
- Can we autonomously decide when to learn value functions and when
to use direct policy search?<br>
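<br>
As referenced in the first question above, even plain random search is
a simple baseline for tuning hyper-parameters such as learning and
exploration rates. In the Python sketch below, evaluate_returns is a
hypothetical stand-in for training the agent and measuring its average
return; in practice a sample-efficient method such as Bayesian
optimization would replace the random sampling.<br>
<pre>
# Minimal sketch: tuning a learning rate and an exploration rate by
# random search. evaluate_returns is a hypothetical placeholder for
# actually training and evaluating the RL agent.
import random

def evaluate_returns(learning_rate, epsilon):
    # Placeholder objective; a real version would run rollouts and
    # return the average return achieved by the trained agent.
    return -(learning_rate - 0.05) ** 2 - (epsilon - 0.1) ** 2

def random_search(n_trials=100, seed=0):
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, -1)  # log-uniform learning rate
        eps = rng.uniform(0.01, 0.5)    # exploration (epsilon) rate
        score = evaluate_returns(lr, eps)
        if score > best_score:
            best_score, best_params = score, (lr, eps)
    return best_params

print(random_search())
</pre>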
<br>
Autonomous Exploration and Active Learning<br>
- How can we autonomously explore the state space of the robot
without the risk of breaking the robot?<br>
- Can we use strategies for intrinsic motivation, such as artificial
curiosity or empowerment, to autonomously acquire a rich set of
behaviours that can be re-used in future learning? (A minimal
count-based curiosity bonus is sketched after this list.)<br>
- How can we measure the competence of the agent as well as our
certainty in this competence? <br>
- Can we use active learning to improve the quality of learned
forward models, as well as to probe the environment to gain more
information about its state? <br>
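<br>
As referenced in the intrinsic-motivation question above, the simplest
discrete form of artificial curiosity is a count-based exploration
bonus that makes rarely visited states look rewarding. The Python
sketch below is a generic illustration (the class name and the beta
coefficient are assumptions), not a specific proposal.<br>
<pre>
# Minimal sketch: count-based exploration bonus, the simplest discrete
# form of artificial curiosity. States are assumed to be hashable.
from collections import defaultdict
import math

class CuriosityBonus:
    def __init__(self, beta=0.5):
        self.beta = beta                # weight of the bonus
        self.counts = defaultdict(int)  # state visitation counts

    def __call__(self, state, extrinsic_reward):
        self.counts[state] += 1
        # Bonus decays as 1/sqrt(visit count), so novel states are
        # temporarily more rewarding than well-explored ones.
        return extrinsic_reward + self.beta / math.sqrt(self.counts[state])

bonus = CuriosityBonus()
print(bonus((0, 0), 0.0))  # large bonus on first visit
print(bonus((0, 0), 0.0))  # bonus shrinks with repeated visits
</pre>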
<br>
Autonomous Learning from Instructions<br>
- Can we combine learning from demonstrations, inverse reinforcement
learning and preference learning to make more effective use of human
instructions?<br>
- How can we decide when to request new instructions from a human
expert?<br>
- How can we scale inverse reinforcement learning and preference
learning to high-dimensional continuous spaces?<br>
- Can we use demonstrations and human preferences to identify
relevant features from the high dimensional sensory input of the
robot?<br>
<br>
Autonomous Feature Extraction<br>
- Can we use feature extraction techniques such as deep learning to
find a general-purpose feature representation that can be used for a
multitude of tasks? <br>
- Can recent advances in kernel-based methods be scaled to
reinforcement learning and policy search in high-dimensional spaces?<br>
- What are good priors to simplify the feature extraction problem?<br>
- What are good features to represent the policy, the value function
or the reward function? Can we find algorithms that extract features
specialized for these representations?<br>
<br>
<br>
== Format ==<br>
<br>
The workshop is designed to be a platform for presentations and
discussion, including invited talks, oral presentations of selected
paper submissions and poster sessions. The scope of the workshop
includes all areas connected to autonomous robot learning,
including reinforcement learning, exploration strategies, Bayesian
learning for adjusting hyper-parameters, representation learning,
structure learning and learning from human instructions. There will
be a poster session where authors interested in the topic can
present their recent work at the workshop. Authors have to submit a
paper of 2-6 pages, which can present new work, summarize the
authors' recent work, or present new ideas on the proposed topics.
The workshop will consist of seven plenary invited talks (30 minutes
each) and short talks from selected submissions. All accepted
posters will be presented at two poster sessions (min. 60 minutes
each).<br>
</body>
</html>