From cl at andrew.cmu.edu  Fri Jan  7 09:49:32 2000
From: cl at andrew.cmu.edu (Christian Lebiere)
Date: Fri, 07 Jan 2000 09:49:32 -0500
Subject: Fwd: funding opportunity
Message-ID:

Forwarded message from Susan Chipman (chipmas at onr.navy.mil):

Last summer at the workshop I announced an interest in seeing someone
develop an ACT-R representation of psychological space.  At that time I
was uncertain whether I would have any money available for this, and I
did not hear from anyone.  Right now it does look as if I have some money
available, although that money may disappear if I do not get a proposal
soon.

If I can locate the information in my files, I will attach some
references relevant to work on psychological space.  Currently, ONR has
been supporting some research in this area as related to virtual reality
and the understanding of space that results from VR experience.  However,
these researchers are not developing formal models, either mathematical
or computational.  Any ACT-R project should coordinate with these
efforts.

This could be a good opportunity for a young investigator who knows
ACT-R.  I can give grants to foreign investigators, although to do so I
need to argue that no one in the U.S. is available to do the work.  In
practice, this means that standards are a bit higher for foreign grants.
Anyone interested in doing such a project should get in touch with me by
email ASAP.  Selected abstracts of research about different aspects of
the psychological representation of space are available upon request.

Susan F. Chipman, Ph.D.
Office of Naval Research, Code 342
800 N. Quincy Street
Arlington, VA 22217-5660
Phone: 703-696-4318
Fax: 703-696-1212


From cl at andrew.cmu.edu  Fri Jan 14 14:01:06 2000
From: cl at andrew.cmu.edu (Christian Lebiere)
Date: Fri, 14 Jan 2000 14:01:06 -0500
Subject: 2000 ACT-R Summer School and Workshop
Message-ID:

To: chi-announcements at acm.org, announce at ppig.org,
    cedm-tg-l at msstate.edu

SEVENTH ANNUAL ACT-R SUMMER SCHOOL AND WORKSHOP
===============================================
Carnegie Mellon University - July/August 2000
=============================================

ACT-R is a cognitive theory and simulation system for developing
cognitive models of tasks that vary from simple reaction time to air
traffic control.  The most recent advances of the ACT-R theory are
detailed in the book "The Atomic Components of Thought" by John R.
Anderson and Christian Lebiere, published in 1998 by Lawrence Erlbaum
Associates.  Each year, a two-week summer school is held to train
researchers in the use of the ACT-R system, followed by a three-day
workshop at which new and current users exchange research results and
ideas.  The Seventh Annual ACT-R Summer School and Workshop will be held
at Carnegie Mellon University in Pittsburgh in July/August 2000.

SUMMER SCHOOL: The summer school will take place from Monday, July 24 to
Friday, August 4, with the intervening Sunday free.  This intensive
11-day course is designed to train researchers in the use of ACT-R for
cognitive modeling.  It is structured as a set of 8 units, each lasting a
day and involving a morning theory lecture, a web-based tutorial, an
afternoon discussion session and a homework assignment, which students
are expected to complete during the day and evening.  The final three
days of the summer school will be devoted to individual research
projects.  Computing facilities for the tutorials, assignments and
research projects will be provided.
Successful student projects will be presented at the workshop, which all
summer school students are expected to attend as part of their training.

To provide an optimal learning environment, admission is limited to a
dozen participants, who must submit by APRIL 1 an application consisting
of a curriculum vitae, a statement of purpose, and a one-page description
of the data set that they intend to model as their research project.  The
data set can be the applicant's own or can be taken from the published
literature.  Applicants will be notified of admission by APRIL 15.
Admission to the summer school is free.  A stipend of up to $750 is
available to graduate students for reimbursement of travel, housing and
meal expenses.  To qualify for the stipend, students must be US citizens
and must include with their application a letter of reference from a
faculty member.

WORKSHOP: The workshop will take place from the morning of Saturday,
August 5 to noon on Monday, August 7.  Mornings will be devoted to
research presentations, each lasting about 20 minutes plus questions.
Participants are invited to present their ACT-R research by submitting a
one-page abstract with their registration.  Informal contributions of up
to 8 pages can be submitted by August 1 for inclusion in the workshop
proceedings.  Afternoons will feature more research presentations as well
as discussion sessions and instructional tutorials.  Suggestions for the
topics of the tutorials and discussion sessions are welcome.  Evenings
will be occupied by demonstration sessions, during which participants can
gain a more detailed knowledge of the models presented and engage in
unstructured discussions.

Admission to the workshop is open to all.  The early registration fee
(before July 1) is $100 and the late registration fee (after July 1) is
$125.  A registration form is appended below.  Additional information
(detailed schedule, etc.) will appear on the ACT-R Web site
(http://act.psy.cmu.edu/) when available, or can be requested at:

   2000 ACT-R Summer School and Workshop
   Psychology Department            Attn: Helen Borek
   Baker Hall 345C                  Fax: +1 (412) 268-2844
   Carnegie Mellon University       Tel: +1 (412) 268-3438
   Pittsburgh, PA 15213-3890        Email: helen+ at cmu.edu

________________________________________________________

          Seventh Annual ACT-R Summer School and Workshop
  July 24 to August 7, 2000 at Carnegie Mellon University in Pittsburgh

                              REGISTRATION
                              ============

Name:     ..............................................................
Address:  ..............................................................
          ..............................................................
          ..............................................................
Tel/Fax:  ..............................................................
Email:    ..............................................................

Summer School (July 24 to August 4):  ........ (check here to apply)
====================================

Applications are due APRIL 1.  Acceptance will be notified by APRIL 15.
Applicants MUST include a curriculum vitae, a short statement of purpose,
and a one-page description of the data set that they intend to model.

A stipend of up to $750 is available for the reimbursement of travel,
lodging and meal expenses (receipts needed).  To qualify for the stipend,
the applicant must be a graduate student with US citizenship and must
include with the application a letter of reference from a faculty member.

Check here to apply for stipend:  ........

Workshop (August 5 to 7):  ........ (check here to register)
=========================
Presentation topic (optional - include a one-page abstract with
registration):
..........................................................................

Registration fee:  Before July 1: $100 ...   After July 1: $125 ...

The fee is due upon registration.  Please send checks or money orders
only.  We cannot accept credit cards.

HOUSING
=======

Housing is available in Resnick House, a CMU dormitory that offers
suite-style accommodations.  Rooms include air-conditioning, a
semi-private bathroom and a common living room for suite-mates.  Last
year's rates were $180.75/week/person or $32.60/night/person for single
rooms, and $134.25/week/person or $24.25/night/person for double rooms.
Housing reservations will be taken after acceptance to the summer school.
Do not send money.  See http://www.housing.cmu.edu/conferences/ for
further housing information.

To reserve a room in Resnick House, fill in the dates and select one of
the three room options:

I will stay from ................ to ................

1. ...  I want a single room
2. ...  I want a double room and I will room with ................
3. ...  I want a double room.  Please select a roommate of .......
        gender

ROOM PAYMENT IS DUE UPON CHECK-IN.  DO NOT SEND MONEY.

The recommended hotel is the Holiday Inn University Center, located on
the campus of the University of Pittsburgh within easy walking distance
of CMU.  Contact the Holiday Inn directly at +1 (412) 682-6200.

Send this form to:

   2000 ACT-R Summer School and Workshop
   Psychology Department            Attn: Helen Borek
   Baker Hall 345C                  Fax: +1 (412) 268-2844
   Carnegie Mellon University       Tel: +1 (412) 268-3438
   Pittsburgh, PA 15213-3890        Email: helen+ at cmu.edu


From r.m.young at herts.ac.uk  Tue Jan 25 13:23:22 2000
From: r.m.young at herts.ac.uk (Richard M Young)
Date: Tue, 25 Jan 2000 18:23:22 +0000
Subject: Learning facts in ACT
Message-ID:

ACTors:

I have an embarrassingly simple question, or set of related questions,
about fact-learning in ACT.  For the purpose of clarity, I'll pose the
question in the context of learning a set of paired associates, although
I think the point is more general.  I suspect the answer already exists
in a model somewhere, and I just need to be pointed to it.

Let's take as a starting point the (obviously over-simplified) model of
paired-associate learning and retrieval in the file "paired associate" in
Unit 6 of the ACT tutorial.  The crucial part is two rules (there are
only three anyway), one of which retrieves the "pair" if it can; if it
can't, the other comes into play and "studies" the pair as it is
presented.  As is pointed out in the tutorial, the retrieval rule serves
to reinforce the activation of the pair twice: once because it is
retrieved on the LHS of the rule, and once more when the pair is
re-formed on being popped from the goal stack on the RHS.  Notice that
"studying" boosts the activation of the pair only once, when it is formed
(or re-formed) on the RHS.

I got to wondering what would happen if the modelled S ever got into
its/his/her head an INCORRECT pair, i.e. a valid stimulus paired with an
incorrect response.  As the model stands, the error would never be
corrected, because the erroneous chunk would repeatedly be retrieved, and
would be reinforced (twice) each time.  However, it is probably
unrealistic to suppose that S doesn't read the feedback just because a
response has been retrieved, so there is the opportunity to notice that
the retrieved response is wrong and to "study" the correct response.
However, each time that happens, the erroneous chunk gets reinforced
twice but the correct chunk only once, as we have seen.  So, given that
the erroneous chunk starts off more active than the correct one, then
except for a vanishingly low-probability sequence of events, the correct
chunk would never get learned to the point of being retrieved.

OK, so it's a crazily over-simplified model, but it does raise the
question of how *would* ACT learn paired associates, given that it starts
off with, or at any stage acquires, erroneous pairs.  I've thought of a
couple of ways, but I'm not even sure they'd really work, and they
certainly don't seem like convincing stories:

(1) Because a retrieval is not guaranteed to be correct, it should not
automatically be popped on the RHS of a retrieval rule.  If the model
waits for feedback and makes sure it pops only a correct pair, then a
correct chunk will be reinforced (once) on each trial.  Unfortunately,
the erroneous chunk also gets reinforced once, by being retrieved on the
LHS.  Because the correct chunk is reinforced AFTER the erroneous one, it
profits from recency, and I suppose it's possible that with patience and
some luck with the noise, on some occasion the two chunks will be close
enough in activation that the correct pair gets retrieved and therefore
twice reinforced, and thereafter is likely to win.  But the story doesn't
sound convincing.  (And solutions which involve the repeated, deliberate,
multiple rehearsal of the correct chunk sound too contrived.)

(2) When an erroneous retrieval occurs, and the model discovers from the
feedback that it's wrong, then as well as learning a correct pair it
could also learn the incorrect pair with an additional annotation
(attribute) of "wrong".  The retrieval would need to become more
elaborate: after retrieving a pair in reply to a probe with the stimulus,
the model would check whether it could also retrieve an extended pair
marked "wrong" using the stimulus and the retrieved response.  If it
couldn't, OK.  If it could, then it would need to retrieve another pair,
with the previous response explicitly negated.  (I think that's
possible.)  Well, maybe, but again it seems rather contrived.

Can anyone tell me how this is done better?

-- Richard
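To make the race Richard describes concrete, here is a minimal simulation
sketch in Python, assuming ACT-R's base-level learning equation
B = ln(sum_j (now - t_j)^(-d)) with the default decay d = 0.5 and
logistic activation noise.  The trial schedule (one test every 10
seconds), the noise scale, and the head start given to the erroneous pair
are illustrative assumptions, not values taken from the tutorial model.

import math
import random

D = 0.5    # base-level decay, the ACT-R default
S = 0.1    # activation noise scale (an illustrative assumption)

def base_level(uses, now, d=D):
    """Base-level learning: B = ln(sum over past uses of (now - t)^-d)."""
    return math.log(sum((now - t) ** -d for t in uses))

def noisy(b, s=S):
    """Add logistic activation noise of scale s, as in ACT-R."""
    u = random.random()
    return b + s * math.log(u / (1.0 - u))

# The erroneous pair got in first and starts out stronger; the correct
# pair has been seen once but is weaker.
wrong, right = [0.0, 0.1], [2.0]
wrong_wins = 0
for trial in range(1, 101):
    now = 10.0 * trial
    if noisy(base_level(wrong, now)) > noisy(base_level(right, now)):
        wrong_wins += 1
        wrong += [now, now + 0.1]  # retrieved on LHS, re-formed on RHS
        right.append(now + 0.2)    # feedback studied: one boost only
    else:
        right += [now, now + 0.1]  # correct retrieval: double boost at last
print(f"erroneous pair retrieved on {wrong_wins} of 100 trials")

Because the erroneous pair collects two boosts per trial to the correct
pair's one, it holds a roughly constant base-level lead of about ln 2,
and with modest noise the correct pair wins only the rare trial -- which
is the trap described above.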
From cschunn at gmu.edu  Tue Jan 25 14:28:31 2000
From: cschunn at gmu.edu (Christian Schunn)
Date: Tue, 25 Jan 2000 14:28:31 -0500
Subject: Learning facts in ACT
Message-ID:

Learning addition (or multiplication) facts is a nice example where these
issues come up.  What one observes empirically is that children continue
to recompute answers even after they have started sometimes to retrieve
the answer to a problem.  The result of this conservative approach is
that they don't get locked into overlearning incorrect facts.

I think the data from this domain provide evidence that ACT-R's "retrieve
if you can" approach to strategy selection is likely to be wrong.
However, I suppose you might address some of these issues by setting the
retrieval threshold quite high.

See the Schunn, Reder, et al. (1997) JEP:LMC paper on the SAC model, or
Bob Siegler's (1995) ASCM model, for how an alternative model of this
process might work.

-Chris

======================================================
 Christian Schunn            Applied Cognitive Program
 Psychology 3F5              cschunn at gmu.edu
 George Mason University     (703)-993-1744 Voice
 Fairfax, VA 22030-4444      (703)-993-1330 Fax
 http://www.hfac.gmu.edu/~schunn
======================================================
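Schunn's retrieval-threshold remark can be quantified with ACT-R's
standard recall-probability equation, P = 1/(1 + exp(-(A - tau)/s)).  The
sketch below shows how raising tau delays retrieval until a fact is well
practiced; the activation values and the two threshold settings are
illustrative assumptions, not fitted parameters.

import math

def p_retrieve(a, tau, s=0.25):
    """ACT-R probability that a chunk of activation a beats threshold tau."""
    return 1.0 / (1.0 + math.exp(-(a - tau) / s))

# A fact's activation grows with practice; under a high threshold the
# model keeps failing retrieval (and so keeps recomputing) until the fact
# is well learned.  All values here are illustrative assumptions.
for a in (0.0, 0.5, 1.0, 1.5):
    print(f"A = {a:3.1f}   P(retrieve | tau=0.0) = {p_retrieve(a, 0.0):.2f}"
          f"   P(retrieve | tau=1.2) = {p_retrieve(a, 1.2):.2f}")

On this reading, a high threshold keeps weakly learned (and possibly
erroneous) facts below the retrieval cutoff, so the model falls back on
recomputation -- matching the conservative behavior Schunn reports in
children's arithmetic.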
From gray at gmu.edu  Tue Jan 25 18:09:34 2000
From: gray at gmu.edu (Wayne Gray)
Date: Tue, 25 Jan 2000 18:09:34 -0500
Subject: Learning facts in ACT
Message-ID:

Interesting question.  When I was a graduate student, my mentor Leo
Postman was a verbal learner.  Problems such as Richard raises were the
meat and potatoes of verbal learning.  But the verbal learners quit the
field during the 80's and no one took up their puzzles -- until now,
maybe.  Clearly, what was lacking in those old theories was a mechanistic
account that could be rigorously specified.

So, first of all, most paired associates (PAs) used in experiments have
low pre-experimental association.  Hence, the first mystery is how the
new and usually arbitrary associations get learned and used over the
older and more established associations -- especially given the fact that
learning pairs such as "locomotive-nurse" interferes minimally with
pre-experimental associations such as "locomotive-black."

I believe that Richard's essential insight is correct: the simple ACT-R
theories need to be regarded as teaching tools, not as theories of PA
learning.

An answer to Richard's question is contained in the models of switch cost
for serial attention that Erik Altmann has been pursuing.  Follow this
link:

http://www.hfac.gmu.edu/People/altmann/pubs.html

Your best bet would be to download the three "manuscript submitted"
papers.  You might also be interested in glancing at the two Altmann and
Gray (1999; 1998) Cognitive Science Conference papers as well.

If the PA is A-B, when a S sees "A" what gets retrieved is not "B" but an
episodic instruction of what to do with "A" NOW.  This instruction might
contain the pair A-B, but whatever it contains, essentially it tells you
to respond "B" NOW.  It is this instruction (let's call it AB) that gets
retrieved and strengthened.  If you accidentally retrieve an AC
instruction, this will cause you to respond "C."  However, you can
overcome this response by allowing AC to decay while you massively
rehearse AB.  In the Altmann and Gray models we estimated that one
rehearsal can occur every 100 msec.  Hence, when the A-B pair is on the
screen and the S has just said "C", the S can rehearse A-B about 10 times
a second while ignoring C.  The strength of C can decay while that of B
increases.  As memory returns the most active trace, for the most part it
should return the most recently rehearsed trace, in this case the
instruction AB that results in the S responding "B."

Note that this approach leaves a lot of room for proactive interference.
After you stop actively rehearsing B, its strength decays rapidly, with
the consequence that the difference between its strength and C's strength
lessens.  With a few assumptions (built into ACT-R) regarding noise, over
time you are likely to falsely retrieve C.
Indeed, if C were on a first list of PAs and had been well learned,
whereas B was on a second list of PAs and had been rehearsed only a few
times, ACT-R might well predict that the strength of the AB instruction
would decay to less than the strength of the AC instruction, with the
result that (after an appropriate retention interval) AC would be
returned rather than AB.  I believe this approach can just as easily be
applied to model retroactive interference.

Anyone interested in pursuing this should read Postman's masterful
summary of the verbal learning literature:

Postman, L. (1971). Transfer, interference, and forgetting. In J. W.
Kling & L. A. Riggs (Eds.), Woodworth & Schlosberg's Experimental
Psychology (3rd ed., pp. 1019-1132). New York: Holt, Rinehart, and
Winston, Inc.

I will also throw in the following reference, as I used to be quite fond
of it:

Postman, L., & Gray, W. D. (1977). Maintenance of prior associations and
proactive inhibition. Journal of Experimental Psychology: Human Learning
and Memory, 3, 255-263.

Cheers,
Wayne

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
 Wayne D. Gray          HUMAN FACTORS & APPLIED COGNITIVE PROGRAM
 SNAIL-MAIL ADDRESS (FedX et al)     VOICE: +1 (703) 993-1357
 George Mason University             FAX:   +1 (703) 993-1330
 ARCH Lab/HFAC Program       *********************
 MSN 3f5                     * Work is infinite,  *
 Fairfax, VA 22030-4444      * time is finite,    *
 http://hfac.gmu.edu         * plan accordingly.  *
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
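Gray's decay-plus-rehearsal account, including the proactive-interference
crossover he predicts above, can be sketched with the same base-level
equation.  The schedules below (twenty early A-C presentations, then ten
rapid A-B rehearsals at the one-per-100-msec rate estimated in the
Altmann and Gray models) are made-up illustrations rather than the
parameters of their actual models.

import math

def base_level(uses, now, d=0.5):
    """ACT-R base-level activation: ln(sum of (now - t)^-d over uses)."""
    return math.log(sum((now - t) ** -d for t in uses))

# First-list instruction A-C: well learned, 20 presentations early on.
ac = [5.0 * i for i in range(20)]           # t = 0 .. 95 s
# Second-list instruction A-B: ten rehearsals, 100 ms apart, at t = 200 s.
ab = [200.0 + 0.1 * i for i in range(10)]

for delay in (1, 10, 100, 1000, 10000):     # retention interval (seconds)
    now = 201.0 + delay
    print(f"+{delay:>5} s:  B(AB) = {base_level(ab, now):+.2f}   "
          f"B(AC) = {base_level(ac, now):+.2f}")

The printout shows the recency-driven advantage of AB immediately after
rehearsal, and AC overtaking it at longer retention intervals -- the
pattern Gray describes.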
From kevin_gluck at hotmail.com  Mon Jan 31 21:26:46 2000
From: kevin_gluck at hotmail.com (Kevin Gluck)
Date: Mon, 31 Jan 2000 18:26:46 PST
Subject: My apologies
Message-ID:

Dear Friends,

Yesterday I forwarded an email requesting support for public
broadcasting.  Turns out it is a hoax chain letter.  I should have known
better.  Please accept my sincere apologies for wasting your time.

- Kevin