NIPS program committee notes
Michael Jordan
jordan at psyche.mit.edu
Wed Sep 25 16:56:32 EDT 1996
Dear Connectionists colleagues,
I enclose below some notes on this year's NIPS reviewing and
decision process. These notes will hopefully be of interest
not only to contributors to NIPS*96, but to anyone else who
has an ongoing interest in the conference.
Note also that there is a "feedback session with the NIPS
board" scheduled for Wednesday, December 4th at the conference
venue; this would be a good opportunity for public discussion
of NIPS reviewing and decision policies. In my experience NIPS
has worked hard to earn its role as a flagship conference serving
a diverse technical community, particularly through its revolving
program committees, and further public discussion of NIPS decision-
making procedures can only help to improve the conference.
The notes include lists of all of this year's area chairs and
reviewers.
Mike Jordan
NIPS*96 program chair
-----------------------------------------------------------
The area chairs for NIPS*96 were as follows:
Algorithms and Architectures
Chris Bishop, Aston University
Steve Omohundro, NEC Research Institute
Rob Tibshirani, University of Toronto
Theory
Michael Kearns, AT&T Research
Sara Solla, AT&T Research
Vision
David Mumford, Harvard University
Control
Andrew Moore, Carnegie Mellon University
Applications
Anders Krogh, The Sanger Centre
Speech and Signals
Eric Wan, Oregon Graduate Institute
Neuroscience
Bill Bialek, NEC Research Institute
Artificial Intelligence/Cognitive Science
Stuart Russell, University of California, Berkeley
Implementations
Fernando Pineda, Johns Hopkins University
The area chairs were responsible for recruiting reviewers.
All told, 160 reviewers were recruited, from 17 countries.
104 reviewers were from institutions in the US, and 56
reviewers were from institutions outside the US.
The breakdown of the submissions by areas was as follows:
                        1995   1996
----------------------------------------
Alg & Arch               133    173
Theory                    89     79
Neuroscience              43     61
Control & Nav             40     43
Applications              36     42
Vision                    46     40
Speech & Sig Proc         20     25
Implementations           25     24
AI & Cog Sci              30     22
----------------------------------------
Total                    462    509
Area chairs assigned papers to reviewers. For cases in which
an area chair was an author of a paper the program chair made
the selection of reviewers. For cases in which the program chair
was an author of a submission the appropriate area chair made the
selection of reviewers. Code letters were used for all such
reviewers, and neither the area chairs nor the program chair
knew (or know) who reviewed their papers.
Each paper was reviewed by three reviewers. In most cases
all three reviewers were from the same area, but some papers
that were particularly interdisciplinary in flavor were
reviewed by reviewers from different areas.
After the reviews were received and processed the program committee
met at MIT in August to make decisions. A few comments on the
way the meeting was run:
(1) It was agreed that the overriding goal of the program committee's
decision process should be to select the best papers, i.e., those
exhibiting the most significant thinking and the most thorough development
of ideas. All other issues were considered secondary.
(2) To achieve (1), the program committee agreed that one of its
principal roles was to help eliminate bias in the reviewing process.
This took several forms: (a) Close attention was paid to cases in
which the reviewers disagreed among themselves. In such cases the
area chair often read the paper him/herself to help come to a decision.
(b) The area chairs studied histograms of scores to help identify
cases where reviewers seemed to be using different scales. (c) The
committee tried to identify reviewers who were not as strong or as
devoted as others and tried to weight their reviews accordingly.
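The scale-mismatch check in (2b) amounts to comparing per-reviewer score
distributions. As a minimal sketch of how such a comparison might be
automated (the reviewer labels and scores below are invented for
illustration, not actual NIPS*96 review data):

```python
from statistics import mean, stdev

# Hypothetical review scores on a 1..10 scale; names and numbers
# are invented, not actual NIPS*96 data.
scores = {
    "reviewer_A": [8, 7, 9, 8, 7],   # clustered at the high end
    "reviewer_B": [4, 3, 5, 4, 2],   # clustered at the low end
    "reviewer_C": [2, 9, 5, 7, 3],   # wide spread
}

def scale_summary(scores):
    """Per-reviewer mean and spread -- the quantities a histogram reveals."""
    return {r: (mean(s), stdev(s)) for r, s in scores.items()}

def standardize(scores):
    """Z-score each reviewer's scores so different scales become comparable."""
    out = {}
    for r, s in scores.items():
        m, sd = mean(s), stdev(s)
        out[r] = [(x - m) / sd for x in s]
    return out

summary = scale_summary(scores)
normalized = standardize(scores)
```

In this sketch, reviewer_A and reviewer_B rank the same papers in nearly
the same order but their mean scores differ by roughly four points;
standardizing removes the offset so the scores can be pooled.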
(3) It was agreed that authors who were members of the program
committee would be held to higher standards than other authors.
That is, if a paper by a program committee author was near a borderline
(for acceptance, spotlight, or oral presentation), it was demoted. This was
considered to be another form of bias minimization, given that
the committee was aware that some reviewers might favor program
committee members. Also, program committee members who were authors
of a paper left the room when their paper was being discussed;
they played no role in the decision-making process for their own
papers.
(4) Other criteria that were utilized in the decision-making process
included: junior status of authors (younger authors were favored),
new-to-NIPS criteria (outsiders were favored), novelty (new ideas
were favored). These criteria also figured in decisions for oral
presentations and spotlights, along with additional criteria that
favored authors who had not had an oral presentation in recent years
and favored presentations of general interest to the NIPS audience.
All such criteria, however, were considered secondary, in that they
were used to distinguish papers that were gauged to be of roughly
equal quality by the reviewers. As stated above, the primary criterion
was to select the best papers, and to give oral presentations to
papers receiving the most laudatory reviews.
(5) Generally speaking, it turned out that the program committee
decisions followed the reviewers' scores. A rough guess would be
that 1 paper in 10 was moved up or down from where the reviewers'
scores placed the paper.
(6) The entire program committee participated in the discussions
of individual papers for all of the areas.
(7) The decision making was seldom easy.
It was the overall sense of the program committee that the submissions
were exceptionally strong this year. There were many papers near
the borderline that were of NIPS quality, but could not be accepted
because of size constraints (the conference is limited in size by a
number of factors, including the scheduling and the size of the
proceedings volume). We hope that authors of these papers will
strengthen them a notch and resubmit next year.
The process was as fair and as intellectually rigorous as the
program committee could make it. It can of course stand improvement,
however, and I would hope that people with ideas in this regard
will attend the feedback session in Denver. One improvement that
I personally think is worth considering, having now seen the reviewing
process in such detail, is to allow reviewers to consult among
themselves. In this model, reviewers exchange their reviews and
discuss them before sending final reviews to the program chair.
I review for other conferences where this is done, and I think that
it has the substantial advantage of helping to reduce cases where
a reviewer just didn't understand something and thus gave a paper
an unreasonably low score. Such is my opinion, in any case. Perhaps
this idea and other such ideas could be discussed in Denver.
Mike Jordan
-------------------------------------------------------------------
Reviewers for NIPS*96:
---------------------
Larry Abbott
Naoki Abe
Subutai Ahmad
Ethem Alpaydin
Chuck Anderson
James Anderson
Chris Atkeson
Pierre Baldi
Naama Barkai
Etienne Barnard
Andy Barto
Francoise Beaufays
Sue Becker
Yoshua Bengio
Michael Biehl
Leon Bottou
Herve Bourlard
Timothy Brown
Nader Bshouty
Joachim Buhmann
Carmen Canavier
Claire Cardie
Ted Carnevale
Nestor Caticha
Gert Cauwenberghs
David Cohn
Greg Cooper
Corinna Cortes
Gary Cottrell
Marie Cottrell
Bob Crites
Christian Darken
Peter Dayan
Virginia de Sa
Alain Destexhe
Thomas Dietterich
Dawei Dong
Charles Elkan
Ralph Etienne-Cummings
Gary Flake
Paolo Frasconi
Bill Freeman
Yoav Freund
Jerry Friedman
Patrick Gallinari
Stuart Geman
Zoubin Ghahramani
Federico Girosi
Mirta Gordon
Russ Greiner
Vijaykumar Gullapalli
Isabelle Guyon
Lars Hansen
John Harris
Michael Hasselmo
Simon Haykin
David Heckerman
John Hertz
Andreas Herz
Tom Heskes
Geoffrey Hinton
Sean Holden
Don Hush
Nathan Intrator
Tommi Jaakkola
Marwan Jabri
Jeff Jackson
Robbie Jacobs
Chuanyi Ji
Ido Kanter
Bert Kappen
Dan Kersten
Ronny Kohavi
Alan Lapedes
John Lazzaro
Todd Leen
Zhaoping Li
Christiane Linster
Richard Lippmann
Michael Littman
David Lowe
David Madigan
Marina Meila
Bartlett Mel
David Miller
Kenneth Miller
Martin Moller
Read Montague
Tony Movshon
Klaus Mueller
Alan Murray
Ian Nabney
Jean-Pierre Nadal
Ken Nakayama
Ralph Neuneier
Mahesan Niranjan
Peter Norvig
Klaus Obermayer
Erkki Oja
Genevieve Orr
Art Owen
Barak Pearlmutter
Jing Peng
Fernando Pereira
Pietro Perona
Carsten Peterson
Jay Pittman
Tony Plate
John Platt
Jordan Pollack
Alexandre Pouget
Jose Principe
Adam Prugel-Bennett
Anand Rangarajan
Carl Rasmussen
Steve Renals
Barry Richmond
Peter Riegler
Brian Ripley
David Rohwer
David Saad
Philip Sabes
Lawrence Saul
Stefan Schaal
Jeff Schneider
Terrence Sejnowski
Robert Shapley
Patrice Simard
Tai Sing
Yoram Singer
Satinder Singh
Padhraic Smyth
Bill Softky
David Somers
Devika Subramanian
Richard Sutton
Josh Tenenbaum
Michael Thielscher
Sebastian Thrun
Mike Titterington
Geoffrey Towell
Todd Troyer
Ah Chung Tsoi
Michael Turmon
Joachim Utans
Benjamin VanRoy
Kelvin Wagner
Raymond Watrous
Yair Weiss
Christopher Williams
Ronald Williams
Robert Williamson
David Willshaw
Ole Winther
David Wolpert
Lei Xu
Alan Yuille
Tony Zador
Steven Zucker