Connectionists: IEEE TNNLS Special Issue on Deep Reinforcement Learning and Adaptive Dynamic Programming

Haibo He he at ele.uri.edu
Sat Jan 7 10:45:49 EST 2017


IEEE Transactions on Neural Networks and Learning Systems 
Special Issue on "Deep Reinforcement Learning and Adaptive Dynamic Programming"

Call for Papers: 

In 2015, Google DeepMind published the Nature paper "Human-level control through deep reinforcement learning". In early 2016, it followed with the Nature cover paper "Mastering the game of Go with deep neural networks and tree search", introducing the computer Go program AlphaGo. In March 2016, AlphaGo beat the world's top Go player Lee Sedol 4-1. This marked a new milestone in the history of artificial intelligence, and at its core is the algorithm of deep reinforcement learning.

Deep reinforcement learning is able to output control signals directly from image inputs, combining the perceptual strengths of deep learning with the decision-making strengths of reinforcement learning (RL) or adaptive dynamic programming (ADP). This mechanism brings artificial intelligence much closer to human modes of thinking. Since its introduction, deep RL or ADP has achieved remarkable success in both theory and application. Successful applications include video games, Go, robotics, smart driving, healthcare, and more.
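To make the perception-plus-decision idea concrete, below is a minimal, illustrative sketch (not part of this call) of a deep Q-network that maps raw image observations to per-action value estimates; it assumes a PyTorch-style setup, and the class name, input sizes, and architecture are hypothetical examples rather than a prescribed method.

    import torch
    import torch.nn as nn

    class DeepQNetwork(nn.Module):
        """Illustrative deep Q-network: image observation -> action values."""
        def __init__(self, num_actions, in_channels=4):
            super().__init__()
            # Convolutional layers play the "perception" role of deep learning.
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            )
            # The fully connected head plays the "decision-making" role of RL/ADP:
            # it outputs one estimated return (Q-value) per discrete action.
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
                nn.Linear(512, num_actions),
            )

        def forward(self, obs):
            # obs: batch of stacked 84x84 grayscale frames, shape (N, 4, 84, 84).
            return self.head(self.features(obs))

    # The greedy control signal is simply the action with the highest Q-value:
    #   q_net = DeepQNetwork(num_actions=6)
    #   action = q_net(obs_batch).argmax(dim=1)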

However, the theoretical analysis of deep RL or ADP, e.g., convergence, stability, and optimality, remains an open problem. Learning efficiency also needs to be improved, either through new algorithms or by combining deep RL or ADP with other methods, and more practical demonstrations are encouraged. The aim of this special issue is therefore to solicit the most advanced research and state-of-the-art work in the field of deep RL or ADP. All related original papers are welcome. Specific topics of interest include, but are not limited to:

•	New algorithms of deep RL or ADP;
•	Theory of deep RL or ADP;
•	Deep RL or ADP with transfer learning;
•	Deep RL or ADP with advanced search algorithms;
•	Multi-agent RL or ADP; 
•	Hierarchical RL or ADP; 
•	Event-driven RL or ADP; 
•	Theoretical foundations of RL or ADP, including convergence, stability, and robustness; 
•	Data-driven learning and control; 
•	Control with advanced machine learning;
•	Optimal decision and control of cyber-physical systems;
•	Autonomous decision and control using neural structures;
•	Brain-like control design and applications;
•	New neural network topologies from neurocognitive psychology studies;
•	Neurocomputing structures for fast decision and control in dynamic environments;
•	Applications in realistic and complicated systems.

IMPORTANT DATES 

30 March 2017 – Deadline for manuscript submission 
30 June 2017 – Notification of authors
30 July 2017 – Deadline for submission of revised manuscripts 
30 September 2017 – Final decision of acceptance
November 2017 – Tentative publication date

GUEST EDITORS 

D. Zhao, Institute of Automation, Chinese Academy of Sciences, China.
D. Liu, University of Science and Technology, China. 
F. L. Lewis, University of Texas at Arlington, USA.
J. Principe, University of Florida, Gainesville, USA. 
R. Babuska, Delft University of Technology, the Netherlands.

SUBMISSION INSTRUCTIONS

1. Read the information for Authors at http://cis.ieee.org/tnnls.
2. Submit your manuscript at the TNNLS webpage (http://mc.manuscriptcentral.com/tnnls) and follow the submission procedure. Please clearly indicate on the first page of the manuscript and in the cover letter that the manuscript is submitted to the special issue on Deep Reinforcement Learning and Adaptive Dynamic Programming. Send an email to guest editor D. Zhao (dongbin.zhao at ia.ac.cn) with the subject "TNNLS special issue submission" to notify the guest editors of your submission.
3. Early submissions are welcome. We will start the review process as soon as we receive your contributions.


