Connectionists: RL Benchmark announcement

Michael L. Littman mlittman at cs.rutgers.edu
Wed Sep 21 09:50:34 EDT 2005


The organizers of the NIPS-05 workshop "Reinforcement Learning
Benchmarks and Bake-offs II" would like to announce the first RL
benchmarking event.  We are soliciting participation from researchers
interested in implementing RL algorithms for learning to maximize
reward in a set of simulation-based tasks.

Our benchmarking set will include:
 * continuous-state MDPs
 * discrete factored-state MDPs
It will not include:
 * partially observable MDPs
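For readers less familiar with the distinction, the two included task classes differ in how state is represented to the learner. The following is only an illustrative sketch (the official benchmark interface has not yet been announced); the dynamics and variable names are toy assumptions:

```python
# Hedged illustration (not the official benchmark spec): the two task
# classes differ in the state representation the agent observes.

# Continuous-state MDP: states are real-valued vectors, e.g. the
# position and velocity of a point mass.
def continuous_step(state, force, dt=0.02):
    pos, vel = state
    vel += force * dt          # toy point-mass dynamics
    pos += vel * dt
    return (pos, vel)

# Discrete factored-state MDP: states are tuples of discrete variables
# (factors); transitions update individual factors.
def factored_step(state, action):
    room, has_key = state      # hypothetical factors for illustration
    if action == "pickup" and room == 2:
        has_key = True         # key located in room 2 (toy assumption)
    elif action == "move":
        room = (room + 1) % 4  # four rooms arranged in a cycle
    return (room, has_key)
```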

Comparisons will be performed through a standardized interface
(details to be announced), and we *highly* encourage entries spanning
a wide variety of approaches.  We hope to see participants running
algorithms based on temporal-difference learning, evolutionary search,
policy gradient, TD/policy-search combinations, and others.
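As one concrete example of the first family above, tabular Q-learning is a temporal-difference method that could be run against such a benchmark. The sketch below is only an assumption about what an environment interface might look like (reset/step methods and an action count); the actual interface details are still to be announced:

```python
import random

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning, a temporal-difference method.

    `env` is a hypothetical stand-in for the benchmark interface
    (details to be announced): here it must provide reset() -> state,
    step(action) -> (next_state, reward, done), and env.n_actions.
    """
    Q = {}  # maps state -> list of action values

    def q(s):
        return Q.setdefault(s, [0.0] * env.n_actions)

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(env.n_actions)
            else:
                a = max(range(env.n_actions), key=lambda i: q(s)[i])
            s2, r, done = env.step(a)
            # TD update toward the one-step bootstrapped target
            target = r + (0.0 if done else gamma * max(q(s2)))
            q(s)[a] += alpha * (target - q(s)[a])
            s = s2
    return Q
```

On a toy chain MDP with reward at the far end, the learned values come to prefer the action that moves toward the goal.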

We do not intend to declare a "winner", but we do hope to foster a
culture of controlled comparison within the extended community
interested in learning for control and decision making.

If you are interested in participating, please contact Michael Littman
<mlittman at cs.rutgers.edu> to be added to our mailing list.  Additional
information will be available at our website at:

	http://www.cs.rutgers.edu/~mlittman/topics/nips05-mdp/

Sincerely,
  The RL Benchmarking Event Organizers
