Connectionists: CFP: Special session on Combining Evolutionary Computation and Reinforcement Learning at CEC 2015
M.A.Wiering
m.a.wiering at rug.nl
Wed Nov 5 08:53:09 EST 2014
Special session for the 2015 IEEE Congress on Evolutionary Computation (CEC2015):
Combining Evolutionary Computation and Reinforcement Learning
http://sites.ieee.org/cec2015/
Held during May 25-28, 2015, at Sendai International Center, Sendai, Japan.
Paper Submission Due: December 19, 2014
Evolutionary Computation (EC) and Reinforcement Learning (RL) are two research fields in the area of search, optimization, and control. RL addresses sequential decision-making problems in initially unknown stochastic environments, involving stochastic policies and unknown temporal delays between actions and observable effects. EC studies algorithms that optimize a fitness function by searching for the optimal set of parameter values. RL can quite easily cope with stochastic environments, which is more difficult for traditional EC methods. The main strengths of EC techniques are their general applicability to many different kinds of optimization problems and their global search behavior, which makes them less prone to getting trapped in local optima. There also exist EC methods that deal with adaptive control problems, such as classifier systems and evolutionary reinforcement learning. Such methods address basically the same problem as RL, i.e. the maximization of the agent's reward in a potentially unknown environment that is not always completely observable. Still, the approaches taken by these methods are different and complementary. RL learns the parameters of a single model using a fixed representation of the knowledge, improving its value function from the reward given after every step taken in the environment. EC is usually a population-based optimizer that uses a fitness function to rank individuals based on their total performance in the environment and applies different operators to guide the search. These two research fields can benefit from an exchange of ideas, resulting in a better theoretical understanding and/or empirical efficiency.
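To make the contrast concrete, here is a minimal sketch (illustrative only; the toy tasks, state spaces, and parameter values are hypothetical, not from any particular paper) of the two update styles described above: RL's per-step value update from immediate reward, versus EC's population-based ranking of whole individuals by total fitness.

```python
import random

random.seed(0)

# --- RL style: tabular Q-learning updates a value estimate after every step ---
# Hypothetical toy task with 2 states and 2 actions; reward 1 only for
# taking action 1 in state 1, and the chosen action selects the next state.
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma = 0.5, 0.9  # learning rate and discount factor

def step(state, action):
    reward = 1.0 if (state, action) == (1, 1) else 0.0
    next_state = action  # toy transition: the action chooses the next state
    return reward, next_state

state = 0
for _ in range(200):
    action = random.choice((0, 1))
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in (0, 1))
    # learn from the reward received after this single step
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# --- EC style: a simple evolution strategy ranks whole individuals ---
# Maximize a toy fitness f(x) = -(x - 3)^2; selection uses only the
# individual's total performance, never per-step reward.
def fitness(x):
    return -(x - 3.0) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(50):
    # rank individuals by fitness and keep the best 5 as parents
    parents = sorted(population, key=fitness, reverse=True)[:5]
    # mutation operator: Gaussian perturbation of a randomly chosen parent
    population = [random.choice(parents) + random.gauss(0, 0.5) for _ in range(20)]

best = max(population, key=fitness)
```

In the first loop, the value table is adjusted after every individual transition; in the second, selection and mutation act only on complete individuals ranked by fitness, which is the complementarity the special session targets.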
Aim and scope
The main goal of this special session is to solicit research on frontiers and potential synergies between evolutionary computation and reinforcement learning. We encourage submissions describing applications of EC for optimizing agents in difficult environments that are possibly dynamic, uncertain, and partially observable, as in games, multi-agent applications such as scheduling, and other real-world applications. Ideally, this special session will gather research papers with a background in either RL or EC that propose new challenges and ideas as a result of synergies between RL and EC.
Topics of interests
We enthusiastically solicit papers on relevant topics such as:
· Novel frameworks including both evolutionary algorithms and RL
· Comparisons between RL and EC approaches to optimize the behavior of agents in specific environments
· Parameter optimization of EC methods using RL or vice versa
· Adaptive search operator selection using reinforcement learning
· Optimization algorithms, such as meta-heuristics and evolutionary algorithms, for dynamic and uncertain environments
· Theoretical results on the learnability in dynamic and uncertain environments
· On-line self-adapting systems or automatic configuration systems
· Solving multi-objective sequential decision making problems with EC/RL
· Learning in multi-agent systems using hybrids between EC and RL
· Learning to play games using optimization techniques
· Real-world applications in engineering, business, computer science, biological sciences, scientific computation, etc. in dynamic and uncertain environments solved with evolutionary algorithms
· Solving dynamic scheduling and planning problems with EC and/or RL
Organizers
Madalina M. Drugan (mdrugan at vub.ac.be) Artificial Intelligence Lab, Vrije Universiteit Brussel, Pleinlaan 2, 1050, Brussels, Belgium
Bernard Manderick (Bernard.Manderick at vub.ac.be) Artificial Intelligence Lab, Vrije Universiteit Brussel, Pleinlaan 2, 1050, Brussels, Belgium
Marco A. Wiering (m.a.wiering at rug.nl) Institute of Artificial Intelligence and Cognitive Engineering, University of Groningen, Nijenborgh 9, 9700AK Groningen, The Netherlands