No subject


Tue Jun 6 06:52:25 EDT 2006


The discussion has finally come around to the crux of the biggest problem with
the present reviewing system: not enough personal incentive for reviewers to do
a good job (or to do it quickly, for that matter).

While the discussion started out being mostly about speed to publication, the
issue of review quality has persistently resurfaced. And while many people
have proposed some sort of free-market solution to letting papers compete,
what I think would be more helpful would be turning some economic and
motivational scrutiny on the reviews themselves. Reviewing is hard, and it
should be rewarded, and the reward should ideally be somewhat proportional to
quality. Right now the reward is mostly altruism and personal pride in doing
good work. There is a little bit of reputation involved in that the editors
and sometimes the other reviewers see the results and can attach a name to
them, but this is a weak reward signal because of how narrow the audience is.

The economic currency of academia is reputation. (There was a short column or
article about this somewhere, maybe Science, but I don't remember.) The
major motivation for doing good papers in the first place is the effect it
has on your reputation. (These papers are part of your "professional voice"
as Phil Agre's Networking on the Network document puts it.) This in turn
affects funding, job hunting, tenure decisions, etc. so there is plenty of
motivation to do it well. It would be nice to create a stronger incentive
(reward signal) for review quality.

This is not as absurd as it might seem; it is only a slight jump away from
similar standard practices. Part of a review is quality assessment, but tied in with
that is advice on how to improve the work. Advice in some other contexts is
amply rewarded in reputational currency. Advisors are partly judged by the
accomplishments of students that they have advised. People who give advice on
how to improve a paper are often mentioned in an acknowledgements section.
Often the job they do is very similar to that of a reviewer; it just isn't
coordinated by an editor. Sometimes such people become co-authors and then
they get the full benefit of reputational reward for their efforts. Even
anonymous reviewers are thanked in acknowledgements sections though their
reputations are not aided by this. Sometimes the line between the
contributions of a reviewer and an author is somewhat blurry. Many people
probably know of examples where a particularly helpful anonymous reviewer
contributed more to a paper than someone who was, due to some obligation,
listed as a coauthor. But many reviews are quite unhelpful or are way off on
the quality assessment. Reviews would improve more consistently if the
reviewer got some academic reputational currency out of doing good reviews
(and faced a corresponding potential to look foolish for being very wrong). How best
to change the structure of the reviewing system to accomplish this is an open
question.

Someone mentioned a journal where reviews are published with the articles.
This has some benefits but also some problems. Reviews for articles that are
completely rejected are not published. We don't want people to only agree to
review articles they think will get published. Also, while publishing reviews
gives a little incentive not to screw up, to fully motivate quality, such
reviews would have to be regularly scrutinized in tenure and job decisions
as an integral part of the overall publication record. But the field would have
to be careful to separate out the quality of the review from the quality and
fame of the reviewed material itself, again to not encourage jockeying to
review only the papers that look to be the most influential.

Clearly I don't have all the answers, but I advocate looking at the problem
in terms of economic incentives, in the same way that economists look at
other incentive systems such as incentive stock options for corporate
employees, which serve a useful purpose but have well-understood drawbacks
from an incentive perspective.


Note that review quality is a somewhat separate issue from the also-important
problem of filtering and attention selection, addressed by things such as the
software that Geoff Hinton requested. Even a perfect personalized selection
mechanism would not
completely replace the benefits of a reviewing system. For example, reviews
still help authors to improve their work, and thereby the entire field. And
realistically no such perfect selection mechanism will ever exist, so selection
will always be greatly aided by quality improvement and filtering at the
source side. Thus we should be interested in structural mechanisms to improve
the quality of reviews (as well as in useful selection mechanisms to tell us
what to read).

-Karl

-------------------------------------------------------------------------------
Karl Pfleger  kpfleger at cs.stanford.edu  www-cs-students.stanford.edu/~kpfleger/
-------------------------------------------------------------------------------


> From: Bob Damper <rid at ecs.soton.ac.uk>
> 
> This shortage of good qualified referees is going to continue all the
> time there is no tangible reward (other than a warm altruistic feeling)
> for the onerous task of reviewing.  So, as many others have pointed
> out, parallel submissions will exacerbate this situation rather than
> improve it.  Not a good idea!
> 
> Bob.
> 
> On Tue, 27 Nov 2001, rinkus wrote:
> > 
> > In many instances a particular student may have particular knowledge and
> > insight relevant to a particular submission but the proper model here is
> > for the advertised reviewer (i.e., whose name appears on the editorial
> > board of the publication) to consult with the student about the
> > submission (and this should probably be in an indirect fashion so as to
> > protect the author's identity and ideas) and then write the review from
> > scratch himself. The scientific review process is undoubtedly worse off
> > to the extent this kind of accountability is not ensured. We end up
> > seeing far too much rehashing of old ideas and not enough new ideas.
> > 
> > Rod Rinkus
> > 



