CFP: MLJ special issue

Dale Schuurmans dale@logos.math.uwaterloo.ca
Mon May 22 15:10:03 EDT 2000


                         Call for Papers 

                    MACHINE LEARNING Journal 
                        Special Issue on

     NEW METHODS FOR MODEL SELECTION AND MODEL COMBINATION


GUEST EDITORS:

  Yoshua Bengio, Université de Montréal
  Dale Schuurmans, University of Waterloo


SUBMISSION DEADLINE:

  July 31, 2000 (electronic submission in PDF or PostScript format)


A fundamental tradeoff in machine learning and statistics is the
under-fitting versus over-fitting dilemma: When inferring a predictive
relationship from data one must typically search a complex space of
hypotheses to ensure that a good predictive model is available, but
must simultaneously restrict the hypothesis space to ensure that good
candidates can be reliably distinguished from bad.  That is, the
learning problem is fundamentally ill-posed; several functions might
fit a given set of data but behave very differently on further data
drawn from the same distribution.  A classical approach to coping with
this tradeoff is to perform "model selection" where one imposes a
complexity ranking over function classes and then optimizes a combined
objective of class complexity and data fit.  In doing so, however, it
would be useful to have an accurate estimate of the expected
generalization error at each complexity level so that the function
class with the lowest expected error could be selected, or functions
from the classes with lowest expected error could be combined, and so
on.  Many approaches have been proposed for this purpose in both the
statistics and the machine learning research communities.
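As a concrete illustration of this idea (a sketch added here, not part
of the call itself), one can rank function classes by polynomial
degree and select the class whose generalization error, estimated on
held-out data, is lowest.  The data-generating function and all names
below are hypothetical:

    import numpy as np

    # Hypothetical noisy regression data (the sine target is an
    # assumption made for this sketch).
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 60)
    y = np.sin(3 * x) + rng.normal(0, 0.2, x.size)

    # Split into training and held-out validation sets.
    x_tr, y_tr, x_va, y_va = x[:40], y[:40], x[40:], y[40:]

    def val_error(degree):
        # Fit a degree-d polynomial on the training set and estimate
        # its generalization error on the validation set.
        coeffs = np.polyfit(x_tr, y_tr, degree)
        return float(np.mean((np.polyval(coeffs, x_va) - y_va) ** 2))

    # Complexity ranking: degrees 1..10; select the class with the
    # lowest estimated generalization error.
    errors = {d: val_error(d) for d in range(1, 11)}
    print("selected degree:", min(errors, key=errors.get))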

Recently in machine learning and statistics there has been renewed
interest in techniques for evaluating generalization error, for
optimizing generalization error, and for combining and selecting
models.  This is exemplified by recent work on structural risk
minimization, support vector machines, boosting, and bagging.  These
new approaches suggest that better generalization performance can be
obtained using new, broadly applicable procedures.  Progress in this
area has not only improved our understanding of how machine learning
algorithms can generalize effectively, but has already proven its
value in real applications of machine learning and data analysis.
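As a small illustration of model combination (again a sketch added
here, with assumed data and model choices), bagging fits predictors on
bootstrap resamples of the training set and averages their
predictions:

    import numpy as np

    # Hypothetical noisy regression data, as in the sketch above.
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 100)
    y = np.sin(3 * x) + rng.normal(0, 0.3, x.size)

    # Fit one polynomial per bootstrap resample of the data.
    models = []
    for _ in range(25):
        idx = rng.integers(0, x.size, x.size)  # sample with replacement
        models.append(np.polyfit(x[idx], y[idx], 5))

    def bagged_predict(x_new):
        # Combine the models by averaging their predictions.
        return np.mean([np.polyval(m, x_new) for m in models], axis=0)

    print(bagged_predict(np.array([0.0, 0.5])))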
 
We seek submissions that cover any of these new areas of predictive
model selection and combination.  We are particularly interested in
papers that present current work on boosting, bagging, and Bayesian
model combination techniques, as well as work on model selection,
regularization, and other automated complexity control methods.
Papers can be either theoretical or empirical in nature; our primary
goal is to collect papers that shed new light on existing algorithms
or propose new algorithms that can be shown to exhibit superior
performance under identifiable conditions.  The key evaluation
criteria will be insight and novelty.
 
This special issue of Machine Learning follows from a successful
workshop held on the same topic at the Université de Montréal in
April 2000.
This workshop brought together several key researchers in the fields
of machine learning and statistics to discuss current research issues
on boosting algorithms, support vector machines, and model selection
and regularization techniques.  Further details about the workshop can
be found at www.iro.umontreal.ca/~bengioy/crmworkshop2000.


SUBMISSION INSTRUCTIONS:

Papers should be sent by email to dale@cs.uwaterloo.ca by July 31,
2000.  The preferred format for submission is PDF or PostScript.
(Please be sure to embed any special fonts.)  If electronic submission
is not possible, then a hard copy can be sent to:

  Dale Schuurmans
  Department of Computer Science
  University of Waterloo
  200 University Avenue West
  Waterloo, Ontario N2L 3G1
  Canada
  (519) 888-4567 x6769 (for courier delivery)




