From taatgen+ at andrew.cmu.edu  Mon Sep 13 10:06:12 1999
From: taatgen+ at andrew.cmu.edu (Niels Taatgen)
Date: Mon, 13 Sep 1999 10:06:12 -0400
Subject: Schema-based models?
Message-ID:

Benjamin A Maclaren wrote:

> Are there papers on schema-based models in ACT-R 4.0, preferably with a
> model I could check out? I am assuming that these involve a "schema
> activation" phase and then a goal driven phase to try to instantiate a
> retrieved schema.

Ben, you might want to look at some modeling I recently did for my thesis. It involves a model that uses a representation similar to schemas, which are retrieved from memory, modified for different purposes, and eventually proceduralized. It is described in my thesis (chapter 7), which you can retrieve from its webpage:

http://tcw2.ppsw.rug.nl/~niels/thesis

This page also has the models themselves on it, although they are fairly large... Alternatively, a summary of the model is in the paper in the proceedings of the last cognitive science conference, although it lacks technical details. (To be retrieved from: http://tcw2.ppsw.rug.nl/prepublications/prepubsTCW-99-4.pdf)

Niels Taatgen

From altmann at gmu.edu  Fri Sep 17 11:02:46 1999
From: altmann at gmu.edu (ERIK M. ALTMANN)
Date: Fri, 17 Sep 1999 11:02:46 -0400 (EDT)
Subject: Activation noise vs. number of chunks
Message-ID:

Is there a way to express the number of chunks in memory in terms of activation noise in the chunk choice equation?

The question comes up if you assume no retrieval threshold -- that memory always returns something, for example in a forced-choice task. Error is then governed by the chunk choice equation, which has multiple parameters. Noise is one of them, but number of chunks is another. If you assume that chunks are added to memory at a fairly constant rate and each is used about the same number of times, so that base-level activations are roughly comparable, then only noise and chunk number are left.

I'm using noise to stand for chunk number when the retention interval spans many hours and there would be hundreds of thousands of chunks to compute over. As far as I've worked it out, linear increases in noise and chunk number both decrease retrieval probability by negatively-accelerating amounts, but I wonder if one actually reduces to the other in a convenient way.

Erik.

-----------------------
Erik M. Altmann
Psychology 2E5
George Mason University
Fairfax, VA 22030
703-993-1326
altmann at gmu.edu
hfac.gmu.edu/~altmann
-----------------------
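To make the tradeoff concrete under one reading of the question: take the chunk choice equation in its usual Boltzmann form, P(i) = exp(A_i/t) / sum_j exp(A_j/t) with t = sqrt(2)*s, and treat the extra chunks as a crowd of identical competitors. The sketch below is purely illustrative; the p_retrieve helper, the activation values, and the crowd sizes are all invented for the example.

import math

def p_retrieve(target_act, n_competitors, competitor_act, s):
    """Chunk-choice (Boltzmann) probability of retrieving the target
    against n identical competitors; s is the activation noise parameter."""
    t = math.sqrt(2) * s
    num = math.exp(target_act / t)
    return num / (num + n_competitors * math.exp(competitor_act / t))

# Invented illustration values: a target sitting 1.0 activation units above
# a crowd of identical competitors, for a few noise levels and crowd sizes.
for s in (0.25, 0.5, 1.0):
    for n in (10, 100, 1000, 10000):
        print("s=%.2f  n=%6d  P=%.3f" % (s, n, p_retrieve(1.0, n, 0.0, s)))

Whether a linear increase in s formally reduces to a growing chunk count (or the other way around) is exactly the open question; the sketch only shows the two knobs moving the same probability in the same direction.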
From bcm144 at email.psu.edu  Sun Sep 19 22:42:48 1999
From: bcm144 at email.psu.edu (Bianca Moravec Sumutka)
Date: Sun, 19 Sep 1999 22:42:48 -0400
Subject: Lexical Decision
Message-ID:

Are there any existing ACT-R models of monolingual lexical decision?

Thanks,
-Bianca

From db30+ at andrew.cmu.edu  Wed Sep 22 11:29:10 1999
From: db30+ at andrew.cmu.edu (Daniel J Bothell)
Date: Wed, 22 Sep 1999 11:29:10 -0400 (EDT)
Subject: ACT-R environment on the web
Message-ID:

ACT-R users,

We are testing out a new way to make ACT-R available for modeling and want to know what you think of it. We are working on implementing the ACT-R environment as a web-based utility. The environment runs on our server and all you need is a web browser to use it. This is a very early test of the system, so it is still pretty rough, but with your feedback, we will work to make it a useful tool for ACT-R modeling.

The server can be accessed from a link on the ACT-R web page (http://act.psy.cmu.edu) or directly at: http://128.2.248.57/. The first time you connect you need to create your username and password. All of the models you create will be associated with that username, so if you want to return to a model you need to use the same name. After you log on, you will get a web page with the link to the environment at the top and some brief directions on how to use it, as well as a list of some of the current bugs.

If you have any questions or problems using it, please let me know. If you try it out, please let us know what you think about it, so that we can work to make it better for everyone.

Thank you,
Dan

From cl at andrew.cmu.edu  Thu Sep 23 15:22:02 1999
From: cl at andrew.cmu.edu (Christian Lebiere)
Date: Thu, 23 Sep 1999 15:22:02 -0400
Subject: PGH PA CMU Research Programmer
Message-ID:

A programmer is needed to assist in the cognitive modeling of human performance in complex real-time dynamic simulation environments. Responsibilities include programming interface components between the simulation and the modeling environment, assisting in the development of cognitive models of human performance, performing data analysis, and reporting research results. This position can be either full-time or part-time.

BS or equivalent experience in cognitive science or computer science and programming experience in Lisp are required. Experience with production system (especially ACT-R) programming and cognitive modeling research is desirable.

For further information on the ACT-R project, see the ACT web site at http://act.psy.cmu.edu

Contact:
Christian Lebiere
Psychology Department
Carnegie Mellon University
Pittsburgh, PA 15213
Tel: +1 (412) 268-2815
Fax: +1 (412) 268-2844
Email: cl+ at cmu.edu

From gray at gmu.edu  Fri Sep 24 17:30:04 1999
From: gray at gmu.edu (Wayne Gray)
Date: Fri, 24 Sep 1999 17:30:04 -0400
Subject: ACT-R 99
Message-ID:

Greetings. After some delay, we have the ACT-R '99 Workshop site online. Go to http://hfac.gmu.edu/actr99/picture.

Cheers,
Wayne

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Wayne D. Gray                    HUMAN FACTORS & APPLIED COGNITIVE PROGRAM
SNAIL-MAIL ADDRESS (FedX et al)  VOICE: +1 (703) 993-1357
George Mason University         FAX: +1 (703) 993-1330
ARCH Lab/HFAC Program            *********************
MSN 3f5                          * Work is infinite,  *
Fairfax, VA 22030-4444           * time is finite,    *
http://hfac.gmu.edu              * plan accordingly.  *
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

From altmann at gmu.edu  Tue Sep 28 12:15:22 1999
From: altmann at gmu.edu (ERIK M. ALTMANN)
Date: Tue, 28 Sep 1999 12:15:22 -0400 (EDT)
Subject: Error gradients through associative learning
Message-ID:

At the workshop this summer I was intrigued by how partial matching crept kudzu-like into so many conversations, and this led me to think some more about errors and how they might occur as a function of experience with the environment. Below is an introduction to a positional-confusion model that I hope to present more formally soon. Comments, criticisms, bomb threats, all are welcome.

Erik.

-----------------------
Erik M. Altmann
Psychology 2E5
George Mason University
Fairfax, VA 22030
703-993-1326
altmann at gmu.edu
hfac.gmu.edu/~altmann
-----------------------

Motivations

(1) Account for error gradients within the ACT-R theory. By "within the theory" I exclude ACT-R's partial matching mechanism, which is less an explanation than a representation or re-description of the effect, and is isolated from other theoretical constraints (in particular, learning).
(2) Improve on Estes's perturbation model, another popular model of error gradients that fails to address the construction of the underlying memory representation. The perturbation model assumes that memory is organized as an array along the dimension of distortion, presupposing a higher-order memory structure without explaining how it might arise. This is misleading in the sense that it implicitly suggests that multi-dimensional arrays are a natural way to organize human memory (if that were so, perhaps we would have had ARRP instead of LISP).

(3) Test some of my serial-attention assumptions, including frequent refresh of the goal chunk from declarative memory, a limited goal stack, and no effective retrieval threshold (meaning that memory always returns something, so that interference becomes as important as decay in determining error).

(4) Link error gradients, as one particular class of error, back to patterns of interaction with the environment, with the hope of making predictions about errors in interactive behavior.

Current results

(1) An associative-learning (AL) model that reproduces positional gradients and serial position curves in memory for order. A study by Nairne (1992) tested implicit memory for order with retention intervals ranging from 30 sec to 24 hours. The model fits Nairne's data with R^2 = .96 and RMSE = 3.7% over 75 data points, improving both on his application of the perturbation model and on the ACT-R partial matching model reported in Anderson & Matessa (1997).

(2) The AL model acquires the memory representation underlying the error pattern. That is, it does the whole task, from presentation through retention to test. Perturbation and partial matching don't address how the underlying representations might be acquired in the first place.

(3) The AL model is distinguished from the perturbation model by predictions about primacy and recency. Perturbation seems to predict primacy equal to recency, though at one point it predicted primacy less than recency (Lee & Estes, 1977). In contrast, the AL model predicts primacy greater than recency, because items are linked associatively in the forward direction, and later items are more likely to be preceded by bad links. It turns out that primacy is greater than recency in Nairne's conditions and elsewhere (e.g., Healy, 1971, cited in Estes, 1997). However, the difference isn't discussed and no statistical tests are reported.

(4) It's not clear that the partial-matching mechanism explains much variance in the cognitive arithmetic data, either. The Siegler model in Chapter 3 actually improves slightly if one turns off partial matching and removes the production syntax that constrains chunk retrieval. (See Files, below.) I haven't tested this change in the more complex models in Chapter 9.

Summary of processing in the AL model

At presentation

1. "Move attention" to a new item. This adds two chunks to memory, a cue and a target. The cue can be interpreted as a positional code and the target as the item itself, but this is somewhat arbitrary; the point is that these two chunks make up the current item.

2. Retrieve the cue from memory, with the previous target in the focus of attention. This creates a link from the previous target to the current cue.

3. Retrieve the target from memory, with the cue in the focus of attention. This creates a link from the current cue to the current target.

4. Go to 1.
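As a very rough illustration of how this loop can seed the near-neighbor association errors discussed below, here is a toy sketch in Python; the chunk tuples, the flat slip probability, and the noisy_retrieve helper are inventions for illustration, not the model in nairne.txt.

import random

def noisy_retrieve(intended, competitors, p_slip=0.1):
    """Return the intended chunk, or occasionally let a recent (hence more
    active) competitor intrude as a retrieval error."""
    if competitors and random.random() < p_slip:
        return random.choice(competitors[-2:])  # near neighbors intrude most
    return intended

def present(items):
    """Steps 1-4 above: build the forward chain previous target -> cue ->
    target, with occasional mislinks from the two noisy retrievals."""
    links = []                       # (chunk in focus, chunk retrieved)
    prev_target = None
    for pos, item in enumerate(items):
        cue, target = ("cue", pos), ("item", item)                # step 1
        old_cues = [("cue", p) for p in range(pos)]
        old_targets = [("item", it) for it in items[:pos]]
        if prev_target is not None:                               # step 2
            links.append((prev_target, noisy_retrieve(cue, old_cues)))
        links.append((cue, noisy_retrieve(target, old_targets)))  # step 3
        prev_target = target                                      # step 4
    return links

By construction the slips land mostly on the last chunk or two encoded, so the mislinks pile up on near neighbors, which is the seed of the near-miss pattern described under "Emergence of effects" below.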
At recall

1. Retrieve an unretrieved cue, with the previous target in the focus of attention as an activation source. People know what's "unretrieved" because all presented items and all retrieved items are visible.

2. Retrieve an unretrieved target, with the cue in the focus of attention as an activation source.

3. Go to 1.

Emergence of effects in the AL model

Positional gradients are a function of retrieval errors on the cue or target, on steps 2 or 3, respectively, during presentation. A retrieval error creates an association error, or a link from a cue to the wrong target (or from a target to the wrong cue). More recent chunks are more active, so more likely to intrude as retrieval errors. The resulting pattern of association errors, in which near neighbors are more often incorrectly linked than far neighbors, causes more near misses than far misses at recall time.

Reduced accuracy over time is a function of retrieval errors at recall that are due to decay and to increased interference from other chunks. Thus the gradients themselves don't decay, which they don't seem to in people (Nairne's 24-hour condition).

The bow in the serial position curve arises because elements in the middle of the list have more near neighbors than elements at the ends of the list, so there is more opportunity for incorrect associations to govern retrieval at recall.

Primacy is greater than recency because chunks are linked in a forward direction, and the probability of error is cumulative as one moves through the list, at presentation and at recall. The effect is to rotate the serial position curve clockwise slightly.

Other assumptions

1. Something is always retrieved at recall time, reflecting the forced-choice nature of the task. If an "episodic" chunk for a cue or target is not retrieved, then a "semantic" chunk for that cue or target is retrieved instead. The difference is that the episodic chunk is linked into the associative chain and the semantic chunk is not. If a semantic chunk for target T (or cue C) is retrieved, then T (or C) is "used up," so the episodic chunk will not be retrieved (on that trial).

2. Cognition produces chunks at some relatively constant rate, so retention should be represented both in terms of decay (of cue and target chunks) and in terms of increased interference from other chunks. Each chunk adds a term to the denominator of the chunk-choice equation. However, that many chunks are expensive to compute over, and one gets essentially the same effect by increasing noise linearly over time.

Files

hfac.gmu.edu/people/altmann/nairne.txt        Associative learning model
hfac.gmu.edu/people/altmann/nairne.xl         Model fits
hfac.gmu.edu/people/altmann/siegler-ema.txt   Modified Siegler model
hfac.gmu.edu/people/altmann/siegler-ema.xl    Model fits
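Continuing the toy sketch from the presentation section (it reuses present() and the import from there), a read-out loop in the same spirit might look as follows; again this is invented for illustration only, with the "semantic" fallback of assumption 1 reduced to "report some still-unreported item."

def recall_order(links, items):
    """Toy read-out of the chain built by present() above: follow the
    previous-target -> cue -> target links (recall steps 1-3); if a link is
    missing or points at an already-reported item, fall back to some
    unreported item (a crude stand-in for the 'semantic' fallback)."""
    linkmap = dict(links)
    targets = [("item", it) for it in items]
    reported, prev_target = [], None
    for pos in range(len(items)):
        cue = linkmap.get(prev_target, ("cue", pos))          # recall step 1
        target = linkmap.get(cue)                             # recall step 2
        if target is None or target in reported:
            target = next(t for t in targets if t not in reported)
        reported.append(target)
        prev_target = target                                  # recall step 3
    return [t[1] for t in reported]

# e.g., error tallies over many simulated lists:
# runs = [recall_order(present(list("ABCDE")), list("ABCDE")) for _ in range(1000)]

Tabulating reported item against true position over many simulated lists gives the confusion matrix from which positional gradients and a serial position curve would be read off; none of this is the model in nairne.txt, just the shape of the argument.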