From rijn at swi.psy.uva.nl Wed Jul 5 06:35:22 2000
From: rijn at swi.psy.uva.nl (Hedderik van Rijn)
Date: Wed, 5 Jul 2000 12:35:22 +0200 (CEST)
Subject: Code to plot activation growth & Mental lexicon/Lexical decision questions
Message-ID: 

In the last couple of weeks, I've been investigating how to implement a large mental lexicon in ACT-R. (Consisting of all 4-letter words in the CELEX (http://www.kun.nl/celex/) English orthographic word-forms database.)

** Code to plot activation growth

However, during tests with the model that uses this lexicon, I discovered to my shame that my knowledge of how the activation of chunks is calculated/approximated fell short in certain important areas. Because just reading formulas doesn't give me a "feeling" for the finer nuances, I wrote some R/S code to plot figures like Figure 4.1 in the Atomic Components of Thought book. For those interested, it is available at:

  http://swipc30.swi.psy.uva.nl/~rijn/actr-activations/

(Evaluate fig4.1() and fig4.1.2() to get plots like Fig 4.1.)

After playing around, a lot of the issues involving activation became much clearer to me. However, there is one issue still unsolved.

** Approximated activation with d=.9 is too high?

If the decay (d) is set to .5, as in Fig 4.1 of the book, the "real base-level activation equation" is closely approximated by the "optimized learning base-level activation equation". (As is argued on p. 124 of the book.) However, if d is set to a very high value, for example .9, the approximation equation seems to yield structurally higher base-level activations than the real equation. This is illustrated in the two plots shown on the same web page as above. (I didn't attach these graphs so as not to clutter the mailing list with binary attachments.) Can someone shed some light on this issue? Is the approximation indeed "better" for values of d close to .5? Or is there a bug somewhere in my interpretation/code?
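[Editor's note: the R/S code referred to above is not reproduced in this archive. For readers who want to reproduce the discrepancy directly, the following is a minimal Python sketch of the two equations from the book; the evenly spaced presentation history and the variable names are illustrative assumptions, not taken from the original code.]

```python
import math

def base_level_exact(ages, d):
    # "Real" base-level learning equation: B = ln( sum_j t_j^(-d) ),
    # where t_j is the time elapsed since the j-th presentation.
    return math.log(sum(t ** -d for t in ages))

def base_level_optimized(n, life, d):
    # Optimized-learning approximation: B = ln( n / (1 - d) ) - d * ln(L),
    # where n is the number of presentations and L the chunk's lifetime.
    return math.log(n / (1.0 - d)) - d * math.log(life)

# Hypothetical history: n presentations spread evenly over a lifetime L,
# which is the situation the approximation is derived for.
n, life = 10, 100.0
ages = [(j + 0.5) * life / n for j in range(n)]  # ages 5, 15, ..., 95

for d in (0.5, 0.9):
    gap = base_level_optimized(n, life, d) - base_level_exact(ages, d)
    print(f"d={d}: approximation - exact = {gap:+.3f}")
```

Under these assumptions the gap is small for d=.5 (roughly +0.10) but much larger for d=.9 (roughly +1.05), matching the pattern described above. One plausible reading: the optimized equation in effect replaces the discrete sum with an integral over a uniform presentation density, and for d close to 1 that integral is dominated by the region near t=0, where the actual discrete history contributes almost nothing; the approximation therefore overshoots systematically as d approaches 1.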
** Mental Lexicon/Lexical Decision task questions

The project that prompted all this is an attempt to model lexical decision data. I searched around a bit, but did not find any previous ACT-R (or related) modeling of this task. However, if someone can point me to relevant information, I would be very pleased.

Alternatively, less specifically, has someone already tried to model a large mental lexicon (the current model has about 2500 word entries) in ACT-R? I would like to discuss some issues involving, for example, the representations or activations of low-frequency words. As with the previous question, pointers to relevant modeling literature would be very welcome.

 - Hedderik.

From pirolli at parc.xerox.com Wed Jul 5 12:00:31 2000
From: pirolli at parc.xerox.com (Peter Pirolli)
Date: Wed, 5 Jul 2000 09:00:31 PDT
Subject: Code to plot activation growth & Mental lexicon/Lexical decision questions
Message-ID: 

I've used one of the large word indexes from the TREC text retrieval workshop to compute base strengths and inter-chunk association strengths for words. In principle, this provides a lexicon of 200 million lexical items and 55 million association links. Many of these items, however, have little practical use, since they are things like dates, numeric codes, etc. For a specific simulation I construct a network from the subset of items that I think are needed for the model. In practice, this has ranged from several hundred to tens of thousands of nodes. I do not, however, use the ACT-R activation mechanism, so I can't comment on specific problems in that regard.

--Pete

From schunn at gmu.edu Sat Jul 8 04:56:53 2000
From: schunn at gmu.edu (Christian Schunn)
Date: Sat, 08 Jul 2000 10:56:53 +0200
Subject: reviewer comments about cognitive modeling
Message-ID: 

Dieter Wallach and I are writing a paper addressing common complaints about computational cognitive modeling (theoretical and pragmatic). We would greatly appreciate any anecdotes or quotes that you could provide from journal or conference reviews on this topic (to demonstrate that these complaints exist in the world rather than just in our heads, and to document the relative frequency of the complaint types). Obviously, the reviewer's (or editor's) identity would have to be kept anonymous, but if you could identify the journal (or conference), or at least the type of journal (psych, cog sci, comp sci, etc.), that would be great. To make things easier on you, you can send us whole reviews, and we can find the relevant bits.

Thanks in advance,

-Chris
-----------------------------
Christian Schunn
Assistant Professor of Psychology
George Mason University

Currently visiting the University of Basel in Switzerland

Best contact method: schunn at gmu.edu
Web: www.hfac.gmu.edu/~schunn
-----------------------------

From reder at andrew.cmu.edu Sat Jul 8 08:16:23 2000
From: reder at andrew.cmu.edu (Lynne Reder)
Date: Sat, 8 Jul 2000 07:16:23 -0500
Subject: reviewer comments about cognitive modeling
Message-ID: 

Chris,

Have you seen the recent paper (just published) in Psych Review by Roberts & Pashler? That paper is definitely anti-modeling and is a perfect starting point for issues to address. Much of what they say is reasonable--it is only the invited inference that is not, viz., that it is better not to model than to model.
--Lynne

From sdoane at wildthing.psychology.msstate.edu Mon Jul 10 10:22:22 2000
From: sdoane at wildthing.psychology.msstate.edu (Stephanie Doane)
Date: Mon, 10 Jul 2000 10:22:22 -0400
Subject: reviewer comments about cognitive modeling
Message-ID: 

Chris & Lynne,

I believe Gary Dell has an interesting response to the Roberts and Pashler paper.

Chris,

Your article sounds interesting! You may want to contact Martha Crosby at crosby at hawaii.edu. She and Chin are editing a special edition of UMUAI on "Empirical Evaluations of User Models" and they may be tracking down similar literature.

The comments on my modeling submissions have addressed both theoretical and pragmatic issues. Here are two representative reviewer comments I just pulled from my files:

'98 submission - "General computational cognitive models of operator behavior.....have been for a long time been plagued by rather ad hoc solutions to how knowledge is activated depending on the context."

'99 submission - "...it seems to me to leave several questions unanswered, as do all the other AI programs. None of them deal with perceptual motor aspects of high-bandwith tasks, not SOAR, not ACT-R, etc., [....] In other words, like so many AI groups, the authors have reduced behavior to cognition, which it is not."

Sometimes negative comments arise from confusion about the goals of cognitive modeling. I find that submissions to CS-oriented journals usually result in at least one review stating something like "a [machine learning algorithm] would be much better." I think this type of comment results from conflating the goal of modeling human cognition with the goal of developing intelligent algorithms that don't mimic human thinking.

Submissions to empirically oriented psychological journals usually result in one "so what" comment - and the best way to describe my take on this comment is that the "magic" is gone once the reviewer sees the model laid out in detail. It is as if, once the "mystery" is gone and the mechanics of the model are on view for all to see, the worth of the model is diminished. (Ever read the book "Zen and the Art of Motorcycle Maintenance"? I think these comments come from folks who just want to ride the bike - please ignore if you haven't read the book.)

The comment I suppose I hear most is that descriptive models are essentially "religion" and that "anyone can make a model do anything they want it to do." Models that descriptively match group-level performance are most vulnerable to this criticism. My approach to this criticism has been to test the predictive validity of my models.

I have a few references to published complaints (e.g., Dreyfus) about computational cognitive modeling in recent Cog Sci and User Modeling and User Adapted Interaction articles (both are in the first issues of 2000).

Stephanie Doane

Stephanie Doane, Ph.D.
Associate Professor
Department of Psychology
Box 6161
Mississippi State University
Mississippi State, MS 39762
(v) 662-325-4718
(f) 662-325-7212
sdoane at wildthing.psychology.msstate.edu
http://wildthing.psychology.msstate.edu

From CHIPMAS at ONR.NAVY.MIL Mon Jul 10 13:49:40 2000
From: CHIPMAS at ONR.NAVY.MIL (Chipman, Susan)
Date: Mon, 10 Jul 2000 13:49:40 -0400
Subject: reviewer comments about cognitive modeling
Message-ID: 

CS reviewers may have excessive confidence in the effectiveness of machine learning. One of the more interesting things to emerge from the ONR "Hybrid Learning" program was that human learning of the so-called mine evasion task, which had originally been built for work with genetic algorithms at NRL, was so fast that the machine learning folks were blown away. Over a period of several years, machine learning approaches modeled on the way that humans seemed to be learning the task gradually came to approximate the speed of human learning.

Susan F. Chipman, Ph.D.
Office of Naval Research, Code 342
800 N. Quincy Street
Arlington, VA 22217-5660
phone: 703-696-4318
Fax: 703-696-1212

-----Original Message-----
From: Stephanie Doane [mailto:sdoane at wildthing.psychology.msstate.edu]
Sent: Monday, July 10, 2000 10:22 AM
To: Lynne Reder
Cc: Christian Schunn; act-r-users+ at andrew.cmu.edu; gdell at s.psych.uiuc.edu; crosby at hawaii.edu
Subject: Re: reviewer comments about cognitive modeling
From wschoppe at gmu.edu Mon Jul 10 15:16:00 2000
From: wschoppe at gmu.edu (Wolfgang Schoppek)
Date: Mon, 10 Jul 2000 15:16:00 -0400
Subject: reviewer comments about cognitive modeling
Message-ID: 

Chris and Dieter,

here are two reviews of conference papers about two different models (the second reviews were both positive, so I didn't cite them here - or do you want to hear models being praised, too?).

ICCM-2000: The basic logic of this model is identical to a well-known ACT-R model of sequence learning (Lebiere & Wallach). Moreover, it makes many adhoc, questionable assumptions, the most egregious being the assumption that fragmentary scans are present in the learning phase so they can be demonstrated later. ...

CogSci 2000: This paper makes a good contribution to modeling memory for continually changing information. The author also provides good empirical verification. It is not clear just how much more explanation is provided by this paper as compared to the paper by Venturino, although the present author does claim parsimony.

Comment on this: Venturino had speculated about two types of memory, one for static and one for dynamic information. The model demonstrates that standard memory assumptions (together with a probabilistic task analysis) are sufficient to explain the data.

-- Wolfgang

--------------------------------------------------------------------
Dr.
Wolfgang Schoppek <<< Tel.: +1 703-993-4663 <<<
HUMAN FACTORS & APPLIED COGNITION PROGRAM,
George Mason University, Fairfax, VA 22030-4444
http://www.uni-bayreuth.de/departments/psychologie/wolfgang.htm
--------------------------------------------------------------------

From klahr+ at andrew.cmu.edu Wed Jul 12 09:37:48 2000
From: klahr+ at andrew.cmu.edu (David Klahr)
Date: Wed, 12 Jul 2000 09:37:48 -0400 (EDT)
Subject: reviewer comments about cognitive modeling
Message-ID: 

Hi Chris,

I don't have any reviewer comments to add, but I have addressed some of the anti-computational criticisms in a couple of papers. The most recent one is:

Klahr, D., & MacWhinney, B. (1998). Information processing. In D. Kuhn & R. S. Siegler (Eds.), W. Damon (Series Ed.), Handbook of child psychology (5th ed.): Vol. 2. Cognition, perception, and language. New York: Wiley.

In there we address both production-system and PDP approaches to modelling. An earlier paper, with a broader treatment of "information processing approaches", is:

Klahr, D. (1992). Information processing approaches to cognitive development. In M. H. Bornstein & M. E. Lamb (Eds.), Developmental psychology: An advanced textbook (3rd ed., pp. 273-336). Hillsdale, NJ: Erlbaum.

I know you are familiar with both papers (you better be!), and I'm not sure what kind of argument you are trying to muster here, but I've attached a section from the Klahr & MacWhinney (1998) paper that might be useful.

dk

From Bruno.Emond at nrc.ca Wed Jul 12 09:58:09 2000
From: Bruno.Emond at nrc.ca (Bruno Emond)
Date: Wed, 12 Jul 2000 09:58:09 -0400
Subject: reviewer comments about cognitive modeling
Message-ID: 

Chris,

This is the worst comment I have ever received regarding the use of cognitive modelling as a research tool, and of ACT-R in particular. The comment was sent to me as an argument to reject a paper submitted to Cognitive Science 99 in Vancouver.
- ACT-R relies on the idea that the mind is a rule system, an idea that is rather indefensible;

Bruno.

From ema at msu.edu Wed Jul 12 16:15:28 2000
From: ema at msu.edu (Erik M. Altmann)
Date: Wed, 12 Jul 2000 15:15:28 -0500
Subject: reviewer comments about cognitive modeling
Message-ID: 

I'm enjoying the anecdotes, so am forwarding mine to the list.

Among recent attacks in the literature, there is the Cooper and Shallice article in Cognition (1995, v. 55, pp. 115-149), "Soar and the case for unified theories of cognition". Their argument is roughly that computational theories contain too many assumptions to test them all [so we should forgo precision altogether]. Apparently Nelson Cowan, in his book, also cites the need to make assumptions as a reason to avoid that kind of research (I'd check, but my copy is packed...).

Erik.
--

From reviews of "Functional decay in serial attention", submitted to JEP: HPP.

Action editor: "On the one hand, you have generated an empirical pattern that occurs regularly, in your studies, so long as bivalent stimuli were used (RT increases with increasing trials using the same rule, after the first trial using it) and you have generated a theoretical framework that appears to accommodate this pattern. On the other hand, your memory decay model was not regarded as plausible and your empirical pattern of increasing RTs was not regarded as typical for this literature. With regard to the first general criticism, I'm afraid I don't have specific advise for you on how to make your model more appealing."

Reviewer B vacillates between dismissing a model he seems not to understand very well, and toying with its implications: "Another big problem is that the decay hypothesis has the look and feel of a post-hoc attempt to account for the data. The theoretical justification is quite weak and doesn't make sense. It isn't clear why there is an appeal to memory retrieval. If the productions [sic] are already loaded into working memory, then why reload them again? And why would it be more difficult to reload them with repetition? Also, wouldn't the decay hypothesis suggest that there would be minimal switching cost at P1? What would the decay hypothesis predict for a continually increasing pattern on no-switch blocks?"

--

From reviews of "Preparing to forget: Memory and functional decay in serial attention", submitted to JEP: General. This was a slightly longer version of the paper above, and submitted first.

Reviewer A comments on an algebraic model that depends on several assumptions to make predictions. He seems to find the model post hoc unless it can accommodate the predictions he reads into it: "I think that the 10 encoding cycles and 100 ms cycle time estimate made by the authors in Appendix A are very arbitrary and post hoc. They derive from the implicit assumption that subjects adjust the number of encoding cycles to the average trials in a sequence (10 in Exp. 1, 2, and 4). If this assumption is correct, instruction and first trial costs should vary with average sequence length. In Experiment 3, average sequence length was 4 trials; were instruction times and switch costs reduced accordingly? If not, the Appendix A assumptions are hard to accept."

Reviewer B is more positive: "I found the model presented in Appendix A helpful. The model depends somewhat on the idea that the strength with which a new task set is imposed depends on the subjective anticipated run length. So it would seem to predict that for a run of fixed length, more errors should be expected in conditions where the subject has reason to expect runs of a shorter length. Although one can imagine other explanations for such a pattern, this would seem like a result that would be much easier to accommodate within the authors decay-based framework than within a more inertia-based account."

--

From reviews of "The anatomy of serial attention: An integrated model of set shifting and maintenance", submitted to Psych. Science.

The action editor, speaking for the reviewers, makes a fair point: "While all [three reviewers] agree that there is much of merit in the empirical work and in the modeling approach taken by the authors, they are also in strong agreement on the ways in which the present paper is not ready for publication.
I won't summarize all of these points here, but some of the most important are (a) the opinion that the behavioral task described represents a somewhat restricted range of task switching, (b) a general dissatisfaction with the level of detail that is possible for data and models of this kind in a short report, (c) serious doubt that the most important of the new findings are observed generally in other task switching situations, and (d) a lack of attention to or misreading of some relevant antecedent literature."

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Erik M. Altmann
Department of Psychology
Michigan State University
East Lansing, MI 48824
517-353-4406 (voice)
703-993-1326 (voice, through July 15, 2000)
517-353-1652 (fax)
ema at msu.edu
http://www.msu.edu/~ema
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

From ritter at ist.psu.edu Sun Jul 16 12:22:15 2000
From: ritter at ist.psu.edu (Frank E. Ritter)
Date: Sun, 16 Jul 2000 12:22:15 -0400
Subject: report on including behaviour moderators in Synthetic environments
Message-ID: 

[This was written while traveling a while ago, and now that I'm back and remembered the email, I'm sending it on.]

Steps Towards Including Behavior Moderators in Human Performance Models in Synthetic Environments
Frank E. Ritter and Marios N. Avraamides
Technical Report No. ACS 2000-1
19 May 2000

When my web server is up (it's been pounded by electrical outages and I'm not around to look after it), this paper is available at http://ritter.ist.psu.edu/papers

While it is not about Soar or ACT-R, it is about what could be put into Soar or ACT-R.

Cheers,

Frank

Frank Ritter at ist.psu.edu
School of Information Sciences and Technology
The Pennsylvania State University
University Park, PA 16801-3857
ph.
(814) 865-4453
fax (814) 865-5604
http://ritter.ist.psu.edu

From cl at andrew.cmu.edu Mon Jul 17 17:03:25 2000
From: cl at andrew.cmu.edu (Christian Lebiere)
Date: Mon, 17 Jul 2000 17:03:25 -0400
Subject: 2000 ACT-R Workshop
Message-ID: 

Final notice to act-r-users: the talk schedule will be finalized this Friday, so you should register this week if you intend to make a presentation.

SEVENTH ANNUAL ACT-R WORKSHOP
=============================
Carnegie Mellon University - August 5-7, 2000
=============================================

ACT-R is a cognitive theory and simulation system for developing cognitive models for tasks that vary from simple reaction time to air traffic control. The most recent advances of the ACT-R theory were detailed in the book "The Atomic Components of Thought" by John R. Anderson and Christian Lebiere, published in 1998 by Lawrence Erlbaum Associates. Each year, a workshop is held to present new developments and applications and to enable current users to exchange results and ideas.

The Seventh Annual ACT-R Workshop will be held at Carnegie Mellon University in Pittsburgh from August 5 to 7, 2000. The early registration deadline is JULY 1. The invited speaker is Herbert A. Simon.

The workshop will start at 9AM on Saturday, August 5, in Adamson Wing, Baker Hall, and will conclude in the early afternoon of Monday, August 7. The mornings of the workshop will be devoted to research presentations, each lasting about 20 minutes plus questions. Participants are invited to present their ACT-R research by submitting a one-page abstract with their registration by JULY 1. Papers submitted after that deadline will be accepted based on availability of slots. Informal contributions of up to 8 pages can be submitted by AUGUST 1 at the latest for inclusion in the workshop proceedings. The contributions can be in the form of a paper or presentation slides and must be submitted electronically to cl+ at cmu.edu.
Saturday afternoon will feature the invited speaker, Herbert A. Simon, who will present the latest developments of the EPAM and CaMeRa architectures. Dr. Simon and John Anderson will discuss a range of issues related to cognitive architectures. Sunday afternoon will feature discussion sessions and instructional tutorials, including a session on the proposed changes in ACT-R 5.0. Suggestions for topics of discussion and tutorials are welcome. Lunch times will be occupied by demonstration sessions during which participants can gain a more detailed knowledge of the models presented and engage in unstructured discussions.

Admission to the workshop is open to all. The early registration fee (before JULY 1) is $100 and the late registration fee (after JULY 1) is $125. Registration includes lunch on Saturday and Sunday, a dinner party on Saturday and a copy of the proceedings. A registration form is appended below. Specify the title of your talk (if applicable) and any suggestion for a session topic. Additional information, such as a detailed schedule, will appear on the ACT-R web site (http://act.psy.cmu.edu/) or can be requested at:

2000 ACT-R Summer School and Workshop
Psychology Department
Attn: Helen Borek
Baker Hall 345C
Carnegie Mellon University
Pittsburgh, PA 15213-3890
Fax: +1 (412) 268-2844
Tel: +1 (412) 268-3438
Email: helen+ at cmu.edu

________________________________________________________

Seventh Annual ACT-R Workshop
August 5 to August 7, 2000
at Carnegie Mellon University in Pittsburgh

REGISTRATION
============

Name:     ..................................................................
Address:  ..................................................................
          ..................................................................
          ..................................................................
Tel/Fax:  ..................................................................
Email:    ..................................................................
Presentation topic (optional - include one-page abstract with registration):

.........................................................................

Suggestion for topic of discussion or instructional tutorial (optional):

.........................................................................

Registration fee:   Before July 1: $100 ...   After July 1: $125 ...

The fee is due upon registration. Please send checks or money orders only. We cannot accept credit cards.

HOUSING
=======

Housing is available in Resnick House, a CMU dormitory that offers suite-style accommodations. Rooms include air-conditioning, a semi-private bathroom and a common living room for suite-mates. The rates are $33.75/night/person for single rooms and $25.00/night/person for double rooms. See http://www.housing.cmu.edu/conferences/ for further housing information. To reserve a room in Resnick House, fill in the dates and select one of the three room options:

I will stay from ................ to ................

1. ... I want a single room
2. ... I want a double room and I will room with ................
3. ... I want a double room. Please select a roommate of ....... gender

ROOM PAYMENT IS DUE UPON CHECK-IN. DO NOT SEND MONEY.

The recommended hotel this year is the Wyndham Garden Hotel, located on University Place, Oakland. This newly renovated hotel is in the heart of Oakland and offers free shuttle service (and easy walking distance) to CMU. A block of rooms has been reserved for the ACT-R Workshop until JULY 21. The CMU rate is $95, with a government rate of $79 per room with appropriate ID. All rates are per night plus tax. Please call the Wyndham directly as soon as possible to reserve your room at 412-683-2040 (fax 412-683-3934), stating that you are with the CMU-ACT-R Group to secure the special rate.
Send this form to:

2000 ACT-R Summer School and Workshop
Psychology Department          Attn: Helen Borek
Baker Hall 345C                Fax: +1 (412) 268-2844
Carnegie Mellon University     Tel: +1 (412) 268-3438
Pittsburgh, PA 15213-3890      Email: helen+ at cmu.edu

From cl at andrew.cmu.edu Mon Jul 24 16:16:49 2000
From: cl at andrew.cmu.edu (Christian Lebiere)
Date: Mon, 24 Jul 2000 16:16:49 -0400
Subject: ACT-R Workshop Schedule
Message-ID:

Below is a tentative schedule for the ACT-R workshop from August 5 to 7. Please let me know as soon as possible if you notice any error or omission.

Informal contributions of up to 8 pages must be submitted by AUGUST 1 at the latest for inclusion in the workshop proceedings. Contributions can be in the form of a paper, extended abstract or presentation slides and must be submitted electronically. Until Friday, July 28, they should be sent to cl+ at cmu.edu. AFTER JULY 28 THE CONTRIBUTIONS SHOULD BE SENT TO: db30+ at andrew.cmu.edu

2000 ACT-R Workshop Schedule
============================

All the lectures take place in Adamson Wing, Baker Hall 136A. The Saturday lunch demo session takes place in Baker Hall 332P. The Saturday evening dinner party takes place at 217 S. Dallas.

Saturday August 5
-----------------

9:00am   Presentation Session 1

         Wolfgang Schoppek      An ACT-R model of the interaction between trained
         Deborah Boehm-Davis    airline pilots and the flight management system

         John Stricker          Integrating visual and motor responses and visual
         Sandra Marshall        imagery in a simple dynamic environment

         Dario Salvucci         ACT-R and driving

10:15am  Break

10:30am  Presentation Session 2

         Lael Schooler          Does ACT-R's activation equation reflect the
                                environment of early hominids?

         Erik Altmann           Retrieval threshold adaptivity

         Alexander Petrov       ANCHOR: A memory-based model of category rating

         Marsha Lovett          (Not) just another model of the Stroop effect

12:10pm  Lunch Break

1:00pm   Demo Session

2:00pm   Invited Session

         Herbert A. Simon       Issues of methodology in using empirical data to
                                test computational theories of cognition

         John R. Anderson       Reply

         All                    Discussion

6:00pm   Dinner Party

Sunday August 6
---------------

9:00am   Presentation Session 3

         Michael Schoelles      Empirical test of the Argus Prime ACT-R/PM model
         Wayne Gray             at the unit task level

         Mike Byrne             Modeling search of computer displays in ACT-R/PM

         Wayne Gray             Captain Nemo: A software engineering approach to
         Susan Kirschenbaum     constructing a plausible model for Project Nemo

         Frank Lee              An ACT-R model of GT-ASP

10:40am  Break

11:00am  Summer School Research Projects

12:10pm  Lunch Break

1:00pm   Special Session 1

         Panel Discussion       Applications of cognitive architectures
                                Kevin Gluck, Dario Salvucci, Frank Ritter,
                                Steve Blessing, Christian Lebiere

3:00pm   Break

3:30pm   Special Session 2

         Christian Lebiere      ACT-R 5.0
         Mike Byrne             RPM 2.0
         Discussion             The future of ACT-R

Monday August 7
---------------

9:00am   Presentation Session 4

         Hedderik van Rijn      An ACT-R model of lexical decision
         Eric-Jan Wagenmakers

         Niels Taatgen          Why do children learn to say "broke"?
         John Anderson          A model of learning the past tense

         Raluca Budiu           An ACT-R model for judging metaphoric sentences
         John Anderson          and learning metaphors

         John Anderson          Learning from instructions

10:40am  Break

10:50am  Presentation Session 5

         Roman Belavkin         Adding a theory of motivation to ACT-R
         Frank Ritter

         Kenning Marchant       Legal rules as cognitive grammars in an ACT-R
                                framework

         Richard Young          A new rational framework for modelling exploratory
         Anna Cox               device learning ... but does it fit with ACT-R?

         Scott Sanner           Achieving efficient and cognitively plausible
         Christian Lebiere      learning in backgammon

12:30pm  Workshop ends