Connectionists: SIGIR07 Workshop on Learning to Rank for IR

Thorsten Joachims tj at cs.cornell.edu
Wed May 2 10:33:52 EDT 2007


                           Call for Papers

     Learning to Rank for Information Retrieval 2007 (LR4IR'07)

Overview

The task of "learning to rank" has emerged as an active and growing area of
research both in information retrieval and machine learning. The goal is to
design and apply methods to automatically learn a function from training
data, such that the function can sort objects (e.g., documents) according to
their degrees of relevance, preference, or importance as defined in a
specific application. 
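To make the task concrete, below is a minimal, purely illustrative Python sketch of the pairwise flavor of learning to rank; the toy data, feature values, and hinge-style perceptron update are assumptions made for this example and are not part of the call or of any particular submission.

    # A minimal pairwise learning-to-rank sketch (illustrative only).
    # A linear scoring function w.x is trained so that, within each query,
    # a document judged more relevant scores higher than a less relevant one.
    import numpy as np

    def train_pairwise(X, y, qid, epochs=50, lr=0.1):
        """Learn weights w such that w.x_i > w.x_j whenever document i is
        judged more relevant than document j for the same query."""
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for q in np.unique(qid):
                idx = np.where(qid == q)[0]
                for i in idx:
                    for j in idx:
                        # Update on violated pairs (margin-based, perceptron-style).
                        if y[i] > y[j] and w @ (X[i] - X[j]) < 1.0:
                            w += lr * (X[i] - X[j])
        return w

    # Toy data: 2 queries, 3 documents each, 2 hand-made features,
    # graded relevance judgments in {0, 1, 2}.
    X = np.array([[0.9, 0.1], [0.5, 0.4], [0.1, 0.8],
                  [0.8, 0.3], [0.4, 0.9], [0.2, 0.2]])
    y = np.array([2, 1, 0, 2, 1, 0])
    qid = np.array([1, 1, 1, 2, 2, 2])

    w = train_pairwise(X, y, qid)
    scores = X[qid == 1] @ w
    print(np.argsort(-scores))  # document indices for query 1, best first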

The relevance of this task to IR is clear, since many IR problems are by
nature ranking problems. Improved algorithms for learning ranking functions
promise better retrieval quality and less need for manual parameter tuning.
In this way, many IR technologies can potentially be enhanced by learning to
rank techniques.

The main purpose of this workshop, held in conjunction with SIGIR 2007, is to
bring together IR and ML researchers working on or interested in these
technologies, to let them share their latest research results, express their
opinions on related issues, and discuss future directions.

Topics of Interest

We solicit submissions on any aspect of learning to rank for information
retrieval. Particular areas of interest include, but are not limited to:  
- Models, features, and algorithms of learning to rank
- Evaluation methods for learning to rank
- Data creation methods for learning to rank
- Applications of learning to rank methods to information retrieval
- Comparison between traditional approaches and learning approaches to
ranking
- Theoretical analyses of learning to rank
- Empirical comparisons of learning to rank methods

Shared Benchmark Data

Several shared data sets have been released by Microsoft Research Asia
(http://research.microsoft.com/users/tyliu/LETOR/). The data sets, created
from OHSUMED and TREC data, contain features and relevance judgments for
training and evaluating learning to rank methods. Authors are encouraged to
use these data sets for the experiments reported in their submissions.

Paper Submission

Papers should be submitted electronically via the workshop web site:
http://research.microsoft.com/users/LR4IR-2007/. Detailed information on
submission will be available at the site. All submissions will be reviewed by
at least three members of the program committee, and all accepted papers will
be published in the proceedings of the workshop. The proceedings will be
printed and made available at the workshop.

Important Dates

Paper Submission Due:        June 8
Author Notification Date:    June 28 
Camera Ready:                July 5

Organizers:

Thorsten Joachims, Cornell Univ.
Hang Li, Microsoft Research Asia
Tie-Yan Liu, Microsoft Research Asia
ChengXiang Zhai, Univ. of Illinois at Urbana-Champaign

PC Members:

Eugene Agichtein, Emory University
Javed Aslam, Northeastern University
Chris Burges, Microsoft Research
Olivier Chapelle, Yahoo Research
Hsin-Hsi Chen, National Taiwan University 
Bruce Croft, University of Massachusetts, Amherst 
Ralf Herbrich, Microsoft Research Cambridge 
Djoerd Hiemstra, University of Twente 
Thomas Hofmann, Google 
Rong Jin, Michigan State University 
Paul Kantor, Rutgers University 
Sathiya Keerthi, Yahoo Research 
Ravi Kumar, Yahoo Research 
Quoc Le, Australian National University 
Guy Lebanon, Purdue University 
Donald Metzler, University of Massachusetts, Amherst 
Einat Minkov, Carnegie Mellon University 
Filip Radlinski, Cornell University 
Mehran Sahami, Google 
Robert Schapire, Princeton University 
Michael Taylor, Microsoft Research Cambridge 
Yiming Yang, Carnegie Mellon University 
Kai Yu, NEC Research Institute 
Hongyuan Zha, Georgia Tech 
Yi Zhang, University of California, Santa Cruz

