From tkelley at mail.arl.mil Fri Oct 1 10:33:28 1999
From: tkelley at mail.arl.mil (tkelley at mail.arl.mil)
Date: Fri, 1 Oct 1999 10:33:28 -0400
Subject: Modeling the Brain
Message-ID:

Hello,

We have a unique opportunity here at the Army Research Laboratory to begin a program that will model the dynamics and interactions of the human brain on our high-speed computers. We have spent a large amount of money over the past few years on supercomputers, and we now have some of the best computer facilities in the world. To utilize these computing resources to their full capability, we are looking for researchers who have proposals or ideas for modeling the interactions of the human brain on a high-speed computer. This program is free to researchers interested in this area; we are essentially advertising our facilities to any researcher working in it. We are generally looking for academic participation (i.e., no contractors at this time). Please feel free to forward this e-mail to any researchers who you think might be interested. If you are at all interested in using our supercomputers for brain modeling, please contact me. At this point all you need to say is that you are interested; no proposals are needed at this time.

Thanks,
Troy Kelley
410-278-5859

From altmann at gmu.edu Thu Oct 7 11:58:56 1999
From: altmann at gmu.edu (ERIK M. ALTMANN)
Date: Thu, 7 Oct 1999 11:58:56 -0400 (EDT)
Subject: Error gradients through associative learning in R/PM
Message-ID:

Last week I introduced an associative-learning model that accounted for positional uncertainty and unpacked partial matching. In that model I made a dual-code assumption, in which an element was represented by a positional code as well as an item code. There's independent empirical evidence for this distinction, but it wasn't directly constrained by ACT theory itself.
It turns out that the distinction maps directly onto the dual-code representation used in ACT-R/PM's vision module. In preparation for moving visual attention, the vision module finds a new location pre-attentively and represents it in DM as a chunk. Cognition takes this chunk and cycles it back to vision as the target for attention. Vision then outputs a chunk representing the object at that location.

In current R/PM, the visual-location and visual-object chunks are linked to each other symbolically. That is, the name of one chunk is a slot value in the other chunk, in both directions. There are good reasons to do this, but it's not clear that perfect symbolic links are the most accurate assumption. If memory is subject to noise, then incorrect associations should be possible here as in any other memory representation. Moreover, at least some of Pylyshyn's studies indicate that visual indexes are subject to noise and can become re-bound to incorrect objects.

If one relaxes the assumption that visual locations and visual objects are perfectly linked, and requires instead that links be formed by ACT-R's associative learning mechanism, then positional uncertainty falls out, kerplunk. In the resulting representation, a visual location generally maps to the corresponding visual object and vice versa, but pointers can go awry. Given the dynamics of base-level activation, locations or objects retrieved in error will be near neighbors (temporally) more often than they will be far neighbors, so association errors (and hence recall errors) will be near misses more often than far misses. The other wrinkle in this representation is that visual objects don't point to their own visual locations, but to the visual location of the *next* element.
This provides an efficient episodic representation: there is some redundancy between location and object codes, in that locations point to objects and objects point to neighboring locations, but there is also a way to trace forward through sequences of events, something we seem to do quite naturally when tracing episodes in our mind's eye.

Files:
hfac.gmu.edu/people/altmann/nairne-rpm.txt   Model code and R/PM mods
hfac.gmu.edu/people/altmann/nairne-rpm.xl    Model fits

-----------------------
Erik M. Altmann
Psychology 2E5
George Mason University
Fairfax, VA 22030
703-993-1326
altmann at gmu.edu
hfac.gmu.edu/~altmann
-----------------------

From gaj at psychology.nottingham.ac.uk Fri Oct 22 10:49:04 1999
From: gaj at psychology.nottingham.ac.uk (Gary Jones)
Date: Fri, 22 Oct 1999 15:49:04 +0100
Subject: skipping steps
Message-ID:

Hello,

I remember some years ago somebody did a model of skipping steps (I think it was a math task, but I'm not sure). If anyone can remember this and can point me to a paper on it, I'd be most grateful.

Thanks,
Gary.

Gary Jones
Psychology Department
University of Nottingham
Nottingham NG7 2RD
England
E-mail: gaj at Psychology.Nottingham.AC.UK
Web: http://www.psychology.nottingham.ac.uk/staff/Gary.Jones/

From blessing at carnegielearning.com Fri Oct 22 11:23:51 1999
From: blessing at carnegielearning.com (Stephen Blessing)
Date: Fri, 22 Oct 1999 11:23:51 -0400
Subject: skipping steps
Message-ID:

on 10/22/99 10:49 AM, Gary Jones at gaj at psychology.nottingham.ac.uk wrote:

> I remember some years ago somebody did a model of skipping steps (I think
> it was a math task but I'm not sure). If anyone can remember this and can
> point me to a paper of it, I'd be most grateful.

Well, since I did this, hopefully I can remember it. Here's its reference:

Blessing, S. B., & Anderson, J. R. (1996). How people learn to skip steps. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 576-598.
The task was an isomorph of simple algebra. If I remember correctly, this was done under ACT-R 3.0, with a slightly modified analogy mechanism (which may or may not now correspond to what's in 4.0). Unfortunately, I haven't updated the actual model since that time. The paper will hopefully give you all the info you need if you want to build on it in the current version of ACT-R. Let me know if you have any further questions, though (if prompted, I may be able to find the code, but again, I'm not sure how useful it may be at this point).

Steve

*******************************************************************
Dr. Stephen Blessing
Carnegie Learning, Inc.
372 N. Craig St., Suite 101
Pittsburgh, PA 15213
Voice: (412) 683-6284, Fax: (412) 683-0544

From koedinger at CMU.EDU Sun Oct 24 08:58:30 1999
From: koedinger at CMU.EDU (Ken Koedinger)
Date: Sun, 24 Oct 1999 08:58:30 -0400
Subject: skipping steps
Message-ID:

At 11:23 AM -0400 10/22/99, Stephen Blessing wrote:
>on 10/22/99 10:49 AM, Gary Jones at gaj at psychology.nottingham.ac.uk wrote:
>
>> I remember some years ago somebody did a model of skipping steps (I think
>> it was a math task but I'm not sure). If anyone can remember this and can
>> point me to a paper of it, I'd be most grateful.
>
>Well, since I did this, hopefully I can remember it. Here's its reference:
>
>Blessing, S. B. & Anderson, J. R. (1996). How people learn to skip steps.
>Journal of Experimental Psychology: Learning, Memory, and Cognition, 22,
>576-598.
>
>The task was an isomorph of simple algebra. If I remember correctly, this
>was done under ACT-R 3.0, and a slightly modified analogy mechanism (which
>may or may not now correspond to what's in 4.0). Unfortunately, I haven't
>updated the actual model since that time. The paper will hopefully give you
>all the info you need if you want to build on it in the current version of
>ACT-R.
>Let me know if you have any further questions, though (if prompted, I
>may be able to find the code, but again, I'm not sure how useful it may be
>at this point).
>
>Steve
>
>*******************************************************************
>Dr. Stephen Blessing
>Carnegie Learning, Inc.
>372 N. Craig St., Suite 101
>Pittsburgh, PA 15213
>Voice: (412) 683-6284, Fax: (412) 683-0544

Gary,

I also did a model of step skipping:

Koedinger, K. R., & Anderson, J. R. (1990). Abstract planning and perceptual chunks: Elements of expertise in geometry. Cognitive Science, 14, 511-550.

This was a model of how diagrammatic knowledge is used to guide effective planning. The model was schema-based, not in ACT-R, but there is a discussion of the possibilities and challenges of implementing it in an ACT production system.

Cheers,
Ken

________________________________________________________________
Kenneth R. Koedinger
Human-Computer Interaction Institute
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213-3891
Phone: 412-268-7667
Fax: 412-268-1266
koedinger at cs.cmu.edu
Home page: http://act.psy.cmu.edu/ACT/people/koedinger.html
________________________________________________________________

From wschoppe at mason2.gmu.edu Tue Oct 26 11:34:28 1999
From: wschoppe at mason2.gmu.edu (wschoppe at mason2.gmu.edu)
Date: Tue, 26 Oct 1999 11:34:28 -0400
Subject: general concepts
Message-ID:

In my development of a generic ACT-R model that enacts GOMS models, I encountered the following problem, which seems general enough to be discussed here. It is about the relation between specific instances and general concepts, as illustrated by the following chunks:

(specific-instance
   isa state-of-the-world
   slot1 A
   slot2 B
   slot3 C)

(general-concept
   isa state-of-the-world
   slot1 A
   slot2 any-chunk
   slot3 any-chunk)

"any-chunk" stands for the notion that any chunk should match here.
What I would like to model is that the general concept matches when the specific instance is on the goal stack. The following example of a production that tries to retrieve a selection rule may illustrate that (for several reasons, I would like to represent the selection rules declaratively).

(p retrieve-selection-rule
   =goal>
      isa generic-goal
      world =world      ; =world is bound to specific-instance
      method nil
      ...
   =s-rule>
      isa selection-rule
      world =world      ; suppose there is a selection rule with general-concept in its world slot
      method =method
==>
   =goal>
      method =method
   ...
)

One solution to the problem would be to use partial matching and set high similarities between specific instances and general concepts. But because specific instances are created at runtime, this would be quite a hack.

Another thing I would like to have in this model is that specific-instance spreads activation to general-concept, because it seems reasonable that seeing a specific instance in the environment should enhance retrieval of the corresponding general concept. Again, that could be accomplished by writing a function that sets IAs between newly created state-of-the-world chunks and the corresponding general concepts.

All of this could be avoided if there were a special chunk, "any-chunk", which literally matched any chunk. Generally, the described problem should occur in any model of object hierarchies. Has anybody thought about similar things, come up with a solution, or even thought about including such a feature in the architecture?

Wolfgang

P.S. Another example for the use of "any-chunk" is the concept of "multiplication by zero":

(mult-zero
   isa ...
   fact1 0
   fact2 any-chunk
   prod 0)

From card at parc.xerox.com Wed Oct 27 20:08:37 1999
From: card at parc.xerox.com (Stuart Card)
Date: Wed, 27 Oct 1999 17:08:37 PDT
Subject: general concepts
Message-ID:

I can't quite resist commenting on the irony of this, since GOMS was developed specifically as a way of simplifying what was originally a production-system model. I'm interested in the model when you get it finished.

--Stuart Card

At 08:34 AM 10/26/1999, wschoppe at mason2.gmu.edu wrote:
>In my development of a generic ACT-R model that enacts GOMS models, I
>encountered the following problem which seems to be general enough to be
>discussed here.
>It is about the relation between specific instances and general concepts
>as illustrated by the following chunks:
>
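[A toy illustration, in Python rather than ACT-R, of the spreading-activation idea in Wolfgang's message above: a function sets an interassociation (IA) weight from a newly created specific instance to its general concept, so that having the instance in context boosts retrieval of the concept. The weight value and all names here are hypothetical.]

```python
def activation(base_level, ia, sources):
    """Total activation of a chunk: base level plus summed IA weights
    from the chunks currently serving as activation sources."""
    return base_level + sum(ia.get(src, 0.0) for src in sources)

# Hypothetical IA weight installed by a function when the specific
# instance chunk was created, as the message suggests.
ia_into_general_concept = {"specific-instance": 1.5}

# With the specific instance in context, the general concept gets a boost.
boosted = activation(0.0, ia_into_general_concept, ["specific-instance"])
baseline = activation(0.0, ia_into_general_concept, [])
print(boosted, baseline)   # 1.5 0.0
```

[The point is only that the bookkeeping is straightforward; the hack Wolfgang objects to is having to install these weights by hand for every instance created at runtime.]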