Connectionists: [NIPS 2006] Call for Posters & Participation for the Workshop: Towards a New Reinforcement Learning?
Jan Peters
jrpeters at usc.edu
Tue Oct 31 17:50:05 EST 2006
==============================================
NIPS 2006 Workshop
CALL FOR POSTERS & PARTICIPATION
Towards a New Reinforcement Learning?
http://www.jan-peters.net/Research/NIPS2006
Whistler, CANADA: December 8, 2006
==============================================
Abstract
=======
During the last decade, many areas of statistical machine learning have reached a high level of
maturity with novel, efficient, and theoretically well-founded algorithms that have increasingly
removed the need for the heuristics and manual parameter tuning which dominated the early days
of neural networks. Reinforcement learning (RL) has also made major progress in theory and
algorithms, but it is somehow lagging behind the success stories of classification, supervised,
and unsupervised learning. Besides the long-standing question of the scalability of RL to larger
and real-world problems, even in simpler scenarios a significant amount of manual tuning and
human insight is needed to achieve good performance, e.g., as exemplified by issues like
eligibility trace factors, learning rates, the choice of function approximators and their basis
functions for policy and/or value functions, etc. Some of the progress in other statistical
learning disciplines stems from connections to well-established fundamental learning approaches,
like maximum likelihood with EM, Bayesian statistics, linear regression, linear and quadratic
programming, graph theory, function space analysis, etc. Therefore, the main goal of this
workshop is to discuss how other statistical learning techniques may be used to develop new RL
approaches that achieve properties such as higher numerical robustness, easier use in terms of
open parameters, probabilistic and Bayesian interpretations, better scalability, the inclusion
of prior knowledge, etc.
Format
======
Our goal is to bring together researchers who have worked on reinforcement learning techniques
that move towards new approaches by bringing other statistical learning techniques to bear on
RL. The workshop will consist of short presentations, posters, and panel discussions. Topics to
be addressed include, but are not limited to:
• Which methods from supervised and unsupervised learning are the most promising for
developing new RL approaches?
• How can modern probabilistic and Bayesian methods be beneficial for Reinforcement
Learning?
• Which approaches can help reduce the number of open parameters in Reinforcement
Learning?
• Can the Reinforcement Learning problem be reduced to Classification or Regression?
• Can reinforcement learning be seen as a big filtering or prediction problem where the
prediction of good actions is the main objective?
• Are there useful alternative ways to formulate the RL problem? E.g., as a dynamic Bayesian
network, by using multiplicative rewards, etc.
• Can reinforcement learning be accelerated by incorporating biases, expert data from
demonstration, prior knowledge of reward functions, etc.?
Invited Talks (tentative)
===================
Game theoretic learning and planning algorithms, Geoff Gordon
Reductive Reinforcement Learning, John Langford
The Importance of Measure in Reinforcement Learning, Sham Kakade
Sample Complexity Results for Reinforcement Learning in Large State Spaces, Csaba Szepesvari
Policies Based on Trajectory Libraries, Martin Stolle
Towards Bayesian Reinforcement Learning, Pascal Poupart
Bayesian Policy Gradient Algorithms, Mohammad Ghavamzadeh
Bayesian RL for Partially Observable Domains, Joelle Pineau
Bayesian Reinforcement Learning with Gaussian Processes, Yaakov Engel
From Imitation Learning to Reinforcement Learning, Nathan Ratliff
Graphical Models for Imitation: A New Approach to Speeding up RL, Deepak Verma
Apprenticeship learning and robotic control, Andrew Ng
Variational Methods for Stochastic Optimization: A Unification of Population-Based Methods, Mark Andrews
Probabilistic inference for solving structured MDPs and POMDPs, Marc Toussaint
Poster Submission Instructions
========================
If you would like to present a poster at this workshop, please send an email to
Jan Peters (jrpeters at usc.edu) no later than November 13, 2006, specifying:
-> Title
-> Presenter and affiliation
-> A short abstract with one or two references
We intend to create an edited book with contributions from people who have
presented at our workshop. We would be delighted if you would indicate
whether you are interested in contributing a chapter/section to such a book.
Dates & Deadlines for Poster Submissions
=================================
November 13: Abstract Submission
November 15: Acceptance Notification
Organizing Committee
=====================
Jan Peters
University of Southern California
Drew Bagnell
Carnegie Mellon University
Stefan Schaal
University of Southern California