Connectionists: CFP: Machine Learning Journal Special Issue: Empirical Evaluations in Reinforcement Learning

Shimon Whiteson shimon.whiteson at gmail.com
Thu Aug 27 04:34:30 EDT 2009


Call For Papers: Machine Learning Journal Special Issue
Empirical Evaluations in Reinforcement Learning
Submission Deadline: February 26, 2010
Guest Editors: Shimon Whiteson and Michael Littman

The continuing development of a field requires a healthy exchange  
between theoretical advances and experimental observations.  The  
purpose of this special issue is to assess progress in empirical  
evaluations of reinforcement-learning algorithms and to encourage the  
adoption of effective experimental methodologies.  The last several
years have seen new trends in uniform software interfaces between
environments and learning algorithms, community comparisons and
competitions, and experimentation with reinforcement learning in
embedded systems.  We enthusiastically solicit papers on relevant
topics such as:

* The design and dissemination of standardized frameworks and  
repositories for algorithms, methods, and/or results.
* Experience of organizers and participants in reinforcement-learning  
competitions and bake-offs.
* Novel evaluation methodologies or metrics.
* Careful empirical comparisons of existing methods.
* Novel methods validated with strong empirical results on existing
benchmarks, especially those used in recent RL Competitions (see
http://www.rl-competition.org/).
* Applications of reinforcement-learning approaches to real-life  
environments such as computer networks, system management and robotics.
* Theoretical work such as sample complexity bounds that can be used  
to guide the design of benchmarks and evaluations.

The emphasis of the special issue is not on the development of novel  
algorithms.  Instead, papers will be assessed in terms of the insights  
they provide about how best to assess performance in reinforcement  
learning, i.e., the "meta" problem of evaluating the evaluation  
methodologies themselves.  In particular, papers presenting empirical  
results should also discuss what those results reveal about the  
strengths and weaknesses of the evaluation methodology.  Similarly,  
papers describing real-life applications should make clear what  
limitations the application exposes in 'off-the-shelf' methods, how  
the employed method had to be modified to address real-world  
complications, and what the results show that could not be learned  
from experiments in 'toy' domains.  Papers proposing new evaluation
methodologies should include illustrative empirical results offering
insights that would be difficult to obtain with conventional
methodologies.  Finally, such papers should also compare and contrast
with evaluation methodologies in related areas, e.g., supervised
learning, explaining why those methodologies are not adequate and what
ideas, if any, can be borrowed from them.

For more details, see: http://www.springer.com/cda/content/document/cda_downloaddocument/CFP_10994_2009826.pdf?SGWID=0-0-45-791198-p35726603