Connectionists: CfP: Deep Reinforcement Learning Workshop @IJCAI 2016

Sarath Chandar sarathcse2008 at gmail.com
Thu Feb 18 20:13:34 EST 2016


*****************************************************
IJCAI 2016 Workshop: Deep Reinforcement Learning: Frontiers and Challenges
New York City, New York, USA
https://sites.google.com/site/deeprlijcai16/
*****************************************************

IMPORTANT DATES
-----------------------------
Submission Deadline: April 20, 2016
Author Notification: May 20, 2016
Workshop: one day during July 9-11, 2016

KEYNOTE SPEAKERS
-----------------------------
-Remi Munos (Google DeepMind)
-Joelle Pineau (McGill University)
-Doina Precup (McGill University)
-David Silver (Google DeepMind)
-Satinder Singh (University of Michigan)
-Peter Stone (University of Texas, Austin)

OVERVIEW
-----------------------------
There has been a resurgence of neural network models, along with some new
techniques, under the rubric of “deep learning”. Recent studies in
computer vision, natural language processing, reinforcement learning, and
speech recognition have amply demonstrated the potential of deep learning
techniques as powerful ways of learning representations at multiple spatial
and temporal scales. Reinforcement learning has traditionally used
feedforward neural networks to approximate the value function, for example
in the classic TD-GAMMON program in the early 1990s. Recent work by
DeepMind and others has shown that deep learning techniques can enable the
learning of complex tasks, such as Atari games and real-world control tasks
carried out by robots. In the other direction, REINFORCE is being used in
several deep learning models to learn complex tasks such as image
classification and image description. It is exciting to see the two fields
contribute to each other. This workshop will focus on the various ways in
which representation learning and reinforcement learning interact: Deep
Reinforcement Learning, where deep learning helps to learn representations
for RL, and Reinforced Deep Learning, where RL helps to train deep neural
networks. The aim of this workshop is to bring researchers from both fields
together to discuss new and challenging applications that require both
Deep Learning and Reinforcement Learning.
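
As a concrete illustration of the policy-gradient direction mentioned above
(RL used to train learned models), here is a minimal, self-contained
REINFORCE sketch on a toy two-armed bandit. It is purely illustrative and
not part of the workshop programme; the environment, reward probabilities,
and hyperparameters are assumptions chosen for brevity.

    # Minimal REINFORCE sketch on a toy two-armed bandit (illustrative only;
    # the reward probabilities and learning rate below are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)
    true_rewards = np.array([0.2, 0.8])  # assumed success probability per arm
    theta = np.zeros(2)                  # policy parameters (arm preferences)
    alpha = 0.1                          # learning rate

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    for step in range(2000):
        probs = softmax(theta)
        action = rng.choice(2, p=probs)
        reward = float(rng.random() < true_rewards[action])  # Bernoulli reward

        # REINFORCE update: grad log pi(a) = one_hot(a) - probs
        grad_log_pi = -probs
        grad_log_pi[action] += 1.0
        theta += alpha * reward * grad_log_pi

    print("learned action probabilities:", softmax(theta))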

TOPICS
-----------------------------
We are looking for contributed papers that apply Deep Learning to
Reinforcement Learning and Reinforcement Learning to Deep Learning. We are
interested in both application-oriented papers and more fundamental
algorithmic/theoretical studies.

A sample list of relevant topics:
-Novel Deep Reinforcement Learning algorithms
-Deep Hierarchical Reinforcement Learning
-Reinforcement Learning for Vision/NLP
-Reinforcement Learning for training Deep Networks
-Deep Reinforcement Learning for Control
-Deep Reinforcement Learning for Robotics

SUBMISSIONS
-----------------------------
Authors should submit an extended abstract between 4 and 6 pages (including
references). A submitted abstract may be a shortened version of a longer
paper or technical report, in which case the longer paper should be cited
in the submission. Reviewers will be asked to judge the submission solely
on the basis of the submitted extended abstract. We also encourage
submission of relevant work in progress.

All submissions must be in PDF format, and authors should follow the IJCAI
2016 style guidelines available at:
http://ijcai-16.org/downloads/FormattingGuidelinesIJCAI-16.zip

Submissions must be made through EasyChair:
https://easychair.org/conferences/?conf=deeprl16
Submissions will be reviewed for relevance, quality, and novelty. All
accepted submissions will be presented as talks and/or posters at the
workshop.

ORGANIZERS
-----------------------------
Sarath Chandar (anbilpas at iro.umontreal.ca)
Sridhar Mahadevan (mahadeva at cs.umass.edu)
Balaraman Ravindran (ravi at cse.iitm.ac.in)
Gerald Tesauro (gtesauro at us.ibm.com)

-- 
Sarath Chandar A P

http://www.sarathchandar.in/

There is nothing more practical than a good theory. -- Kurt Lewin