reviewer comments about cognitive modeling

Chipman, Susan CHIPMAS at ONR.NAVY.MIL
Mon Jul 10 13:49:40 EDT 2000


    CS reviewers may have excessive confidence in the effectiveness of
machine learning.  One of the more interesting things to emerge from the ONR
"Hybrid Learning" program was that human learning of the so-called mine
evasion task, which had originally been built for work with genetic
algorithms at NRL, was so fast that the machine learning folks were blown
away.  Over a period of several years, machine learning approaches modeled
on the way that humans seemed to be learning the task gradually came to
approximate the speed of human learning.

Susan F. Chipman, Ph.D.
Office of Naval Research, Code 342
800 N. Quincy Street
Arlington, VA 22217-5660
phone:  703-696-4318
Fax:  703-696-1212


-----Original Message-----
From: Stephanie Doane [mailto:sdoane at wildthing.psychology.msstate.edu]
Sent: Monday, July 10, 2000 10:22 AM
To: Lynne Reder
Cc: Christian Schunn; act-r-users+ at andrew.cmu.edu;
gdell at s.psych.uiuc.edu; crosby at hawaii.edu
Subject: Re: reviewer comments about cognitive modeling


Chris & Lynne,

  I believe Gary Dell has an interesting response to the Roberts and
Pashler paper.

Chris,

        Your article sounds interesting! You may want to contact Martha
Crosby at crosby at hawaii.edu.  She and Chin are editing a special issue of
UMUAI on "Empirical Evaluations of User Models," and they may be tracking
down similar literature.

        The comments on my modeling submissions have addressed both
theoretical and pragmatic issues. Here are two representative reviewer
comments I just pulled from my files:
        98 submission - "General computational cognitive models of
operator behavior.....have for a long time been plagued by rather ad
hoc solutions to how knowledge is activated depending on the context."
        99 submission - "...it seems to me to leave several questions
unanswered, as do all the other AI programs. None of them deal with
perceptual motor aspects of high-bandwidth tasks, not SOAR, not ACT-R, etc.,
[....]   In other words, like so many AI groups, the authors have reduced
behavior to cognition, which it is not."


        Sometimes negative comments arise from confusion about the goals
of cognitive modeling. I find that submissions to CS-oriented journals usually
result in at least one review stating something like "a [machine learning
algorithm] would be much better."  I think this type of comment results
from confusing the goal of modeling human cognition with the goal of
developing intelligent algorithms that don't mimic human thinking.

        Submissions to empirically oriented psychological journals usually
result in one "so what" comment - and the best way to describe my take on
this comment is that the "magic" is gone once the reviewer sees the model
laid out in detail.  It is as if, once the "mystery" is gone and the
mechanics of the model are on view for all to see, the worth of the
model is diminished. (Ever read the book "Zen and the Art of Motorcycle
Maintenance"?  I think these comments come from folks who just want to ride
the bike - please ignore if you haven't read the book.)

        The comment I suppose I hear most is that descriptive models are
essentially "religion" and that "anyone can make a model do anything they
want it to do."  Models that descriptively match group-level performance
are most vulnerable to this criticism. My response has
been to test the predictive validity of my models.  I have a few references
to published complaints (e.g., Dreyfus) about computational cognitive
modeling in recent Cog Sci and User Modeling and User-Adapted Interaction
articles (both are in the first issues of 2000).

Stephanie Doane


At 7:16 AM -0500 7/8/00, Lynne Reder wrote:
>Chris,
>
>Have you seen the recent paper (just published) in Psych Review by
>Roberts & Pashler?
>That paper is definitely anti-modeling and is a perfect starting
>point of issues to
>address.  Much of what they say is reasonable--it is only the invited
>inference
>that is not, viz., that it is better not to model than to model.
>
>--Lynne
>
>At 10:56 AM +0200 7/8/00, Christian Schunn wrote:
>>Dieter Wallach and I are writing a paper addressing common
>>complaints about computational cognitive modeling (theoretical and
>>pragmatic). We would greatly appreciate any anecdotes or quotes that
>>you could provide from journal or conference reviews on this topic
>>(to demonstrate that these complaints exist in the world rather than
>>just in our heads and to document the relative frequency of the
>>complaint types). Obviously, the reviewer's (or editor's) identity
>>would have to be kept anonymous, but if you could identify the
>>journal (or conference) or at least the type of journal (psych, cog
>>sci, comp sci, etc), that would be great. To make things easier on
>>you, you can send us whole reviews, and we can find the relevant
>>bits.
>>
>>Thanks in advance,
>>
>>-Chris
>>-----------------------------
>>Christian Schunn
>>Assistant Professor of Psychology
>>George Mason University
>>
>>Currently visiting the University of Basel in Switzerland
>>
>>Best contact method: schunn at gmu.edu
>>Web: www.hfac.gmu.edu/~schunn
>>-----------------------------



Stephanie Doane, Ph.D.
Associate Professor
Department of Psychology
Box 6161
Mississippi State University
Mississippi State, MS 39762
(v) 662-325-4718
(f) 662-325-7212
sdoane at wildthing.psychology.msstate.edu
http://wildthing.psychology.msstate.edu
