From tkelley at arl.army.mil Thu Jan 4 16:38:39 2001
From: tkelley at arl.army.mil (Troy Kelley)
Date: Thu, 4 Jan 2001 16:38:39 -0500
Subject: ACT-R PM Window Drawing
Message-ID:

I am attempting to load a picture into a window for ACT-R PM to search, but I cannot find any examples of how to do such a thing. The examples Mike Byrne supplies are mainly for drawing buttons or small objects; instead, I would like to load a bitmap image into the window ACT-R PM is using. Any ideas?

Thanks,
Troy Kelley

From dario at cbr.com Thu Jan 4 17:09:20 2001
From: dario at cbr.com (Dario Salvucci)
Date: Thu, 4 Jan 2001 17:09:20 -0500
Subject: how can ACT-R models age?
Message-ID:

Might anyone know the current status of work relating ACT-R and aging? In particular, I'm wondering if anyone has done work toward the following question: Given a "young expert" ACT-R model, is there a general (domain-independent) way of making it an "elderly" model simply by changing appropriate parameters? For instance, one might imagine that cycle time increases by some percentage, causing general slowdown (there seems to be EPIC work suggesting this), or that W decreases, and/or that the latency of certain perceptual-motor parameters increases. I'm specifically interested in modeling elderly drivers using an existing model of younger drivers, but I'm hoping to carry over any related results / parameter changes from other domains if at all possible.

Thanks, and best wishes for the new year,
Dario

--------------------------------
Dario Salvucci
Cambridge Basic Research
Email: dario at cbr.com
Info: http://www.cbr.com/~dario

From ema at msu.edu Thu Jan 4 21:51:27 2001
From: ema at msu.edu (Erik M. Altmann)
Date: Thu, 4 Jan 2001 18:51:27 -0800
Subject: how can ACT-R models age?
Message-ID:

At 5:09 PM -0500 1/4/01, Dario Salvucci wrote:
>Might anyone know the current status of work relating ACT-R and
>aging? [...]

I've been thinking about a representation of age in which the brain fills up with chunks that aren't completely decayed. For this to have typical aging effects, I believe you have to dispense with the retrieval threshold, and particularly with indexed retrieval (in which you force the retrieval of a specific chunk, and give up if that specific chunk isn't above the retrieval threshold). That is, you have to be willing to let activation play the role it's meant to play under rational analysis, and let it predict the need for a chunk based on history and context.

If you do this, then on a given retrieval cycle the accuracy of the retrieval (in terms of the probability of retrieving the target chunk) is determined by the chunk choice equation, which factors in the activation of all old chunks in memory. The more old chunks there are, the lower the probability of retrieving the correct one, and the greater the probability of going off on a tangent as a result of a mis-retrieval. This captures, at an abstract level, the general aging phenomenon of decreasing ability to inhibit irrelevant information.
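[Erik's interference argument can be sketched numerically. The base-level equation B = ln(sum of t^-d) and the softmax chunk-choice rule below follow standard ACT-R formulations, but the temperature `t_noise`, the chunk ages, and the chunk counts are illustrative assumptions, not values from the thread:]

```python
import math

def base_level(ages, d=0.5):
    """Base-level activation: B = ln(sum over presentations of t^-d),
    where `ages` are times (s) since each presentation and d is decay."""
    return math.log(sum(t ** -d for t in ages))

def p_target(n_old, target_age=10.0, old_age=1e6, d=0.5, t_noise=0.5):
    """Chunk-choice (softmax) probability of retrieving a recent target
    chunk against n_old heavily decayed distractor chunks."""
    a_target = base_level([target_age], d)   # seen 10 s ago
    a_old = base_level([old_age], d)         # very old, heavily decayed
    num = math.exp(a_target / t_noise)
    denom = num + n_old * math.exp(a_old / t_noise)
    return num / denom

for n in (0, 10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} old chunks -> P(target) = {p_target(n):.3f}")
```

[Each ancient chunk contributes almost nothing to the denominator on its own, yet millions of them together swamp the target -- the "background noise in the head" reading of aging.]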
This kind of forgetting model (in which the main factors are interference and decay, and not so much the retrieval threshold) also seems quite general, with successful applications to the Tower of Hanoi, task switching, order memory, Brown-Peterson, and the Waugh and Norman probe digit task. Some of these applications involve long-term as well as short-term memory.

Some other pieces to the puzzle. First, if cognitive aging is represented this way, the existence of vast numbers of old chunks is counterbalanced by the fact that each individual one is highly decayed and so contributes little to the denominator of the chunk choice equation. So the basic idea seems structurally plausible -- background noise in the head increases gradually over a lifetime. Second, to approximate the effect of tens or hundreds of millions of chunks in memory, one can simply increase activation noise (s). The decline in accuracy (as governed by the chunk choice equation) follows a slightly different curve, but is still curvilinear and makes for a much more efficient simulation. In my order memory model, I use this trick to account for order reconstruction accuracy at a retention interval of 24 hours, in which chunks would ordinarily be added to memory at a rate of several per second (assuming that people don't turn their brains off when they're not thinking about your task). Third, the advantage of representing aging in terms of declarative memory filling up is that it actually specifies an aging mechanism, unlike twiddling W, for example.

Erik.

--
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Erik M. Altmann
Department of Psychology
Michigan State University
East Lansing, MI 48824
517-353-4406 (voice)
517-353-1652 (fax)
ema at msu.edu
http://www.msu.edu/~ema
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

From reder at andrew.cmu.edu Thu Jan 4 19:30:43 2001
From: reder at andrew.cmu.edu (Lynne)
Date: Thu, 4 Jan 2001 19:30:43 -0500
Subject: how can ACT-R models age?
Message-ID:

The work that Marsha, Christian, Larry Daily and I have been doing on modeling individual differences involves varying just W. Although we have not tried modeling aging differences, I've often thought about it and asked myself whether W might not explain much of the difference between younger and older people. I've thought that it would (adding the notion that poorer sensory processes mean that more cognition has to be allocated to encoding as well). Imagining that it will work and its really doing the job are different things, of course. So I'd be quite interested in hearing of your success, or lack thereof, in modeling these differences in performance by varying only W.

--Lynne

--
__________________________________________________________
Lynne M.
Reder, Professor
Department of Psychology
Carnegie Mellon University
Pittsburgh, PA 15213
phone: (412) 268-3792
fax: (412) 268-2844
email: reder at cmu.edu
URL: http://www.andrew.cmu.edu/~reder/reder.html

From pshefler at salsgiver.com Thu Jan 4 19:37:51 2001
From: pshefler at salsgiver.com (Peter Shefler)
Date: Thu, 4 Jan 2001 19:37:51 -0500
Subject: Need help in unsubscribing
Message-ID:

Could someone please tell me how to be removed from the list? I am unable to find the information anywhere.

Thanks in advance,
Peter Shefler

From reder at andrew.cmu.edu Thu Jan 4 21:04:03 2001
From: reder at andrew.cmu.edu (Lynne)
Date: Thu, 4 Jan 2001 21:04:03 -0500
Subject: how can ACT-R models age?
Message-ID:

[...] yet arrived in my mailbox. I might have been inhibited from mentioning it given his comments; however, given his cavalier dismissal of my view (I would not be surprised if CL, ML and LD agree, but I cannot speak for them), I am motivated to comment further.

My impression of aging effects is that the *biggest* liability of aging is in acquiring new information, not retrieving old information -- this occurs with poor encoding of simple facts, but is most pronounced in learning new skills and concepts. It seems that older individuals are most handicapped in domains where their prior knowledge is of least value, e.g., learning new technologies or video games. It is not obvious to me how Erik's filling up the brain with too many chunks would predict that general pattern/problem with aging. My hunch (and again, since it has not been simulated, it is only a hunch) is that W would do a better job of explaining that aspect of the demise of intellect with age.

Erik's other remark was that his proposal was a natural consequence of aging, while W is a free parameter. Well, it would at least be constrained to go down with age, not up.
Moreover, it is a single parameter, while my reading of Erik's proposal involved twiddling several parameters -- but perhaps I'm mistaken (low W and lots of decayed chunks due to my advanced age; or at least, that's my excuse).

--L.

p.s. Since Bush's coronation by the Supreme Court, I've wanted to change the name of the parameter to something other than W.

--
__________________________________________________________
Lynne M. Reder, Professor
Department of Psychology
Carnegie Mellon University
Pittsburgh, PA 15213
phone: (412) 268-3792
fax: (412) 268-2844
email: reder at cmu.edu
URL: http://www.andrew.cmu.edu/~reder/reder.html

From gray at gmu.edu Fri Jan 5 09:42:37 2001
From: gray at gmu.edu (Wayne Gray)
Date: Fri, 5 Jan 2001 09:42:37 -0500
Subject: how can ACT-R models age?
Message-ID:

Interesting comments.

Given the "filling up" (Altmann) versus "slowing down" (AKA "dubya") dichotomy, it sounds to me as if we need to hear from someone who has some data on what actually does happen with age. This is an area that I do NOT follow closely. From a distance it appears that not too many cognitive scientists are actually involved in the cognition of aging. (The area seems to be largely dominated by the "individual differences" crowd, where ID is determined not by careful analysis of individual performance, but by administering standardized tests of dubious construct validity to large numbers of people and then statistically torturing the data until some significant correlation is found.)

One of the nicest treatments of the interactions of the ACT-R mind is the one provided by Marsha, Lynne, and Christian in their excellent chapter in the Miyake & Shah volume:

Lovett, M. C., Reder, L. M., & Lebiere, C. (1999). Modeling working memory in a unified architecture: An ACT-R perspective. In A. Miyake & P.
Shah (Eds.), Models of working memory: Mechanisms of active maintenance & executive control (pp. 135-182). New York: Cambridge University Press.

I believe we could get the effect of decreasing W AND an increasing number of DMEs from adding more slots to the goal chunk. That would account for retrieval difficulties and, I think, learning difficulties. I also believe that the various interactions that LRL'99 discuss would be sensitive to an increasing number of DMEs clogging up the system.

On the other hand, I find the notion that there is a set of general system parameters that change with age to have intuitive appeal. On the physical level it certainly appears that children are more physically active than twenty-somethings, who are in turn more physically active than fifty-somethings, who are in turn more physically active than 80-somethings. But, on the third hand, I like to keep in mind the statement that Dan Dennett likes to repeat: "For every complex problem, there is a simple answer, and it is wrong."

Cheers,

Wayne

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Wayne D. Gray, Program Director
HUMAN FACTORS & APPLIED COGNITIVE PROGRAM
SNAIL-MAIL ADDRESS (FedX et al)     VOICE: +1 (703) 993-1357
George Mason University             FAX: +1 (703) 993-1330
ARCH Lab/HFAC Program
MSN 3f5          "Work is infinite, time is finite, plan accordingly."
Fairfax, VA 22030-4444
http://hfac.gmu.edu
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
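[Wayne's goal-slot suggestion trades on ACT-R's convention of dividing the source activation W equally among the slots of the goal chunk, so adding slots dilutes the boost each source cue can give. A toy sketch of that dilution; `activation` is my own illustrative helper with made-up base-level and S_ji values, not an ACT-R API:]

```python
def activation(base, s_ji, total_w=1.0, n_slots=1):
    """Spreading activation with W shared equally across n_slots goal
    sources: A = B + sum of (W/n) * S_ji over the relevant sources."""
    w_per_slot = total_w / n_slots
    return base + sum(w_per_slot * s for s in s_ji)

# One relevant source cue (S_ji = 2.0); extra goal slots dilute its boost.
for n in (1, 2, 4, 8):
    a = activation(base=-1.0, s_ji=[2.0], n_slots=n)
    print(f"{n} goal slot(s) -> A = {a:+.2f}")
```

[With more slots the same cue raises the target's activation less, mimicking a lower effective W without touching the W parameter itself.]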
From ema at msu.edu Fri Jan 5 16:13:23 2001
From: ema at msu.edu (Erik M. Altmann)
Date: Fri, 5 Jan 2001 13:13:23 -0800
Subject: how can ACT-R models age?
Message-ID:

At 9:04 PM -0500 1/4/01, Lynne wrote:
>My impression of aging effects is that the *biggest* liability of
>aging is in acquiring new information not retrieving old
>information--this occurs with poor encoding of simple facts, but
>is most pronounced in learning new skills and concepts. It seems
>that older individuals are most handicapped in domains where their
>prior knowledge is of least value, e.g., learning new technologies
>or video games. It is not obvious to me how Erik's filling up the
>brain with too many chunks would predict that general pattern/problem
>with aging. My hunch (and again, since it has not been simulated it
>is only a hunch) is that W would do a better job of explaining that
>aspect of the demise of intellect with age.

I expect there are many liabilities to cognitive aging, and I can't claim to know which is the "biggest", but there is evidence that cognitive aging involves a decreased ability to inhibit irrelevant information -- that is, an increased susceptibility to interference. See May, Hasher, and Kane (1999) for some recent examples, and Kane and Hasher (1995) for a review.

>Erik's other remark was that his proposal was a natural consequence
>of aging, while W is a free parameter. Well, it would at least be
>constrained to go down with age, not up. Moreover, it is a single
>parameter, while my reading of Erik's proposal involved twiddling
>several parameters, but perhaps I'm mistaken (low W and lots of
>decayed chunks due to my advanced age, or at least, that's my excuse).

This comment raises a question about how to interpret model parameters theoretically. I'm not sure that the number of parameters by itself is the best criterion for evaluating the success of a model.
In the limit, a model with a single parameter set to "do the task" would not be particularly interesting. My concern about W, and also about partial matching, relates specifically to this point. As I see W and partial matching often used, they redescribe the data rather than explaining the data in terms of underlying or more complex processes.

I have little invested in my aging proposal (as yet!) and don't care to be dogmatic about it, but it does serve as a counterexample in that it involves multiple parameters. Of course it involves multiple parameters, not to mention multiple assumptions -- it's a process account, and any complex process will involve many degrees of freedom. This is not by itself a drawback. The more interesting question is how well those degrees of freedom are constrained, both externally by other data and theory and internally by mutual constraint among mechanisms, and what questions they raise to drive further research.

In searching for constraints on this model, I would start by fixing the decay rate at d = 0.5, which has worked well now in many models. The big question would probably be the rate at which chunks are added to memory. There's evidence from many sources, including the ACT-R Argus models being developed at GMU, a slew of models in Soar, Logan's instance theory, etc., that elements are added to memory at a relatively constant rate -- the "fecundity" of learning mechanisms leads to pervasive acquisition of episodic memory. So, pick a constant rate. But then, what's the effect of consolidation, say during sleep? Is this what causes the decay rate to "slow down", as John et al. recently documented? More generally, what's the physiology of decay? And what's the interaction with cues spreading source activation to really old chunks, reviving those old memories and feeding back into their retrieval history? These questions come up because the account is a process model with lots of parameters and assumptions.
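[Erik's "pick a constant rate" suggestion combines neatly with the fixed decay rate: if chunks arrive at a constant rate r and each contributes t^-d to the chunk-choice denominator, the summed contribution over a lifetime T is approximately the integral r*T^(1-d)/(1-d), which for d = 0.5 grows as the square root of T. A sketch; the closed form is a standard integral approximation of the sum, and the rate and absolute numbers are purely illustrative:]

```python
def interference_mass(years, rate_per_s=2.0, d=0.5):
    """Summed decayed strength (sum of t^-d) of chunks laid down at a
    constant rate over `years`, approximated by the integral
    rate * T^(1-d) / (1-d), valid for d < 1."""
    T = years * 365.25 * 24 * 3600.0   # lifetime in seconds
    return rate_per_s * T ** (1 - d) / (1 - d)

for age in (20, 40, 60, 80):
    print(f"age {age}: summed decayed strength ~ {interference_mass(age):,.0f}")
```

[The mass grows without bound but sublinearly: an 80-year-old carries only twice the accumulated interference of a 20-year-old under d = 0.5, consistent with gradual rather than catastrophic decline.]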
Stopping with W, like explaining semantic gradients with partial matching, cuts off discussion of how the history and context of the system might produce or moderate the effect of interest.

Cheers,

Erik.

May, Hasher, & Kane (1999). The role of interference in memory span. Memory & Cognition, 27, 759-767.

Kane, & Hasher (1995). Interference. In Maddox (Ed.), Encyclopedia of aging (pp. 514-516).

--
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Erik M. Altmann
Department of Psychology
Michigan State University
East Lansing, MI 48824
517-353-4406 (voice)
517-353-1652 (fax)
ema at msu.edu
http://www.msu.edu/~ema
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

From card at parc.xerox.com Fri Jan 5 14:17:38 2001
From: card at parc.xerox.com (Card, Stuart K)
Date: Fri, 5 Jan 2001 11:17:38 PST
Subject: how can ACT-R models age?
Message-ID:

[...] memory with more chunks improves some performance at the same ages that we could also detect decline in some kinds of performance with age.

An aside: Now I think it is an interesting challenge to models like ACT-R or SOAR about the interference you could get if you left knowledge from previous experiments in them. It has always seemed unfair that these models are evaluated starting from zeroed-out memory (unlike humans). Suppose you had to use the same ACT-R model for the next experiment that you used in the last experiment, and you weren't allowed to remove any chunks (of course, any natural decay process in them could continue to work). Maybe the models would "age" drastically in this way, whereas the people modeled wouldn't.

-Stu Card

From carol.raye at yale.edu Fri Jan 5 15:25:10 2001
From: carol.raye at yale.edu (Carol L. Raye)
Date: Fri, 05 Jan 2001 15:25:10 -0500
Subject: how can ACT-R models age?
Message-ID:

[...] episodic memory, so I will throw in 2 cents. In response to what Erik wrote, I have to say I don't think "the fecundity of learning mechanisms leads to pervasive acquisition of episodic memory" is a description of older adults. Episodic memory is event memory and requires that one bind together in memory the features that make up the event. We have evidence that older adults have a binding deficit in encoding episodic memories. For example, in a working memory task they are less able than young adults to bind the features of an event together into an episodic memory, even when they recognize single features at the same level as young adults. Our fMRI data for this task are also consistent with an encoding deficit -- less anterior hippocampal activation in older adults in the feature binding condition. Older adults also show deficits in many source memory tasks (e.g., which of two speakers said what), and again we have evidence of poorer encoding. In addition to changes in underlying brain mechanisms, we have some evidence that older adults may attend to different information, e.g., attend less to perceptual features and more to emotional information than young adults -- and they show some benefit from instructions to focus on perceptual information. In short, I agree with Lynne Reder that acquiring new information is a significant problem with aging.
With respect to Lynn Hasher and colleagues' proposal that older adults have a general failure in inhibitory mechanisms, I found it appealing but when I last read the work, I did not find the evidence as compelling as I would have liked. Below are two recent references from our lab on binding and aging. Mitchell, K. J., Johnson, M. K., Raye, C. L., Mather, M., & D'Esposito, M. (2000). Aging and reflective processes of working memory: Binding and test load deficits. Psychology and Aging, 15, 527-541. Mitchell, K.J., Johnson, M.K., Raye, C.L., & D'Esposito, M. (2000). fMRI evidence of age-related hippocampal dysfunction in feature binding in working memory. Cognitive Brain Research, 10, 197-206. Erik M. Altmann wrote: > > At 9:04 PM -0500 1/4/01, Lynne wrote: > > >My impression of aging effects is that the *biggest* liability of > >aging is in acquiring new information not retrieving old > >information--this occurs with poor encoding of simple facts, but > >is most pronounced in learning new skills and concepts. It seems > >that older individuals are most handicapped in domains where their > >prior knowledge is of least value, e.g., learning new technologies > >or video games. It is not obvious to me how Erik's filling up the > >brain with too many chunks would predict that general > >pattern/problem with > >aging. My hunch (and again, since it has not been simulated it is only a > >hunch) is that W would do a better job of explaining that aspect of > >the demise of intellect with age. > > I expect there are many liabilities to cognitive aging, and I can't > claim to know which is the "biggest", but there is evidence that > cognitive aging involves a decreased ability to inhibit irrelevant > information -- that is, an increased susceptibility to interference. > See May, Hasher and Kane (1999), for some recent examples, and Kane > and Hasher (1995) for a review. 
> > >Erik's other remark was that his proposal was a natural consequence > >of aging, while W is a free parameter. Well, it would at least be > >constrained to go down with age, not up. Moreover, it is a single parameter, > >while my reading of Erik's proposal involved twiddling several parameters, > >but perhaps I'm mistaken (low W and lots of decayed chunks due to my > >advanced age, or at least, that's my excuse). > > This comment raises a question about how to interpret model > parameters theoretically. I'm not sure that number of parameters by > itself is the best criterion for evaluating the success of a model. > In the limit, a model with a single parameter set to "do the task" > would not be particularly interesting. My concern about W, and also > about partial matching, relates specifically to this point. As I see > W and partial matching often used, they redescribe the data rather > than explaining the data in terms of underlying or more complex > processes. > > I have little invested in my aging proposal (as yet!) and don't care > to be dogmatic about it, but it does serve as a counterexample in > that it involves multiple parameters. Of course it involves multiple > parameters, not to mention multiple assumptions -- it's a process > account, and any complex process will involve many degrees of > freedom. This is not by itself a drawback. The more interesting > question is how well those degrees of freedom are constrained, both > externally by other data and theory and internally by mutual > constraint among mechanisms, and what questions they raise to drive > further research. > > In searching for constraints on this model, I would start by fixing > the decay rate at d =0.5, which has worked well now in many models. > The big question would probably be the rate at which chunks are added > to memory. 
There's evidence from many sources, including the ACT-R > Argus models being developed at GMU, a slew of models in Soar, > Logan's instance theory, etc., that elements are added to memory at a > relatively constant rate -- the "fecundity" of learning mechanisms > leads to pervasive acquisition of episodic memory. So, pick a > constant rate. But then, what's the effect of consolidation, say > during sleep? Is this what causes the decay rate to "slow down" as > John et al. recently documented? More generally, what's the > physiology of decay? And what's the interaction with cues spreading > source activation to really old chunks, reviving those old memories > and feeding back into their retrieval history? These questions come > up because the account is a process model with lots of parameters and > assumptions. Stopping with W, like explaining semantic gradients > with partial matching, cuts off discussion of how the history and > context of the system might produce or moderate the effect of > interest. > > Cheers, > > Erik. > > May, Hasher, & Kane (1999). The role of interference in memory span. > Memory & Cognition, 27, 759-767. > > Kane & Hasher (1995). Interference. In Maddox (ed.), Encyclopedia > of aging (pp 514-516). > -- > > ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ > Erik M. Altmann > Department of Psychology > Michigan State University > East Lansing, MI 48824 > 517-353-4406 (voice) > 517-353-1652 (fax) > ema at msu.edu > http://www.msu.edu/~ema > ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ -- Carol L. Raye Sr. Research Scientist Dept. of Psychology Yale University P.O. Box 208205 New Haven, CT 06520 Phone: 203.432.6762 Email: carol.raye at yale.edu From bej at cs.cmu.edu Fri Jan 5 16:18:46 2001 From: bej at cs.cmu.edu (Bonnie E. John) Date: Fri, 05 Jan 2001 16:18:46 -0500 Subject: how can ACT-R models age?
Message-ID: >An aside: Now I think it is an interesting challenge to models like ACT-R >or SOAR about the interference you could get if you left knowledge from >previous experiments in them. It has always seemed unfair that these >models are evaluated starting from zeroed out memory (unlike >humans). Suppose you had to use the same ACT-R model for the next >experiment that you used in the last experiment and you weren't allowed to >remove any chunks (of course, any natural decay process in them could >continue to work). I agree with this, Stu, and have made it part of my research program. It's exactly what Yannick Lallement and I did with the ATC-Soar model -- it had the hand-written and learned productions from the Wickens task loaded before it started the ATC task. Much of that knowledge didn't get in the way because Wickens-task-specific knowledge had a test for being in the Wickens task, so it never fired in the ATC task. Some may say that is cheating, but boy those screens sure looked soooooo different that it wasn't a stretch for me to believe that enough environmental cues would have been associated with any Wickens rule to differentiate it from the ATC environment, so that it would never fire there -- and isn't that what underlies the difficulty that people have with transferring knowledge from context to context? We have a new paper submitted on this work: Experiences building a zero-parameter model that learns to perform a complex, dynamic, computer-based task, Bonnie E. John & Yannick Lallement We also did that years ago with NL-Soar and the NASA Test Director model (NTD-Soar). Boy was that model bloated -- having all that NL code in with the NTD stuff. There was no NTD test, because NL was needed in that task and we always thought of it as a general capability that shouldn't be task-related. But the words were so different in the NTD task that all that knowledge about other words didn't get in the way. You can read about that in Nelson, G.
H., Lehman, J. F., & John, B. E. (1994) Integrating cognitive capabilities in a real-time task. Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, August 1994. pp. 353-358. Nelson, G., Lehman, J. F., John, B. E. (1994) Experiences in interruptible language processing, In Proceedings of the 1994 AAAI Spring Symposium on Active Natural Language Processing, 1994. From ja+ at cmu.edu Sun Jan 7 20:02:01 2001 From: ja+ at cmu.edu (John Anderson) Date: Sun, 7 Jan 2001 20:02:01 -0500 Subject: post-doctoral position Message-ID: We have a post-doctoral position in the ACT-R project. ACT-R is a cognitive architecture used to model a wide range of cognitive functions. Current research in the laboratory includes: (a) Modeling complex skill acquisition, particularly as this involves learning from instruction. (b) Instruction of dynamic problem solving, particularly as this involves use of eye movements to infer learning state. (c) fMRI studies of complex tasks, relating activation patterns to components of the ACT-R architecture. If interested, please contact John Anderson (ja at cmu.edu) with a vita and a statement of interests. -- ========================================================== John R. Anderson Carnegie Mellon University Pittsburgh, PA 15213 Phone: 412-268-2788 Fax: 412-268-2844 email: ja at cmu.edu URL: http://act.psy.cmu.edu/
From r.m.young at herts.ac.uk Mon Jan 8 06:24:40 2001 From: r.m.young at herts.ac.uk (Richard M Young) Date: Mon, 8 Jan 2001 11:24:40 +0000 Subject: how can ACT-R models age? Message-ID: modelling of cognitive ageing effects was the topic of Mike Byrne's PhD, I'm pretty sure, done under Tim Salthouse's supervision. Mike may be currently incommunicado and otherwise occupied, but it would be interesting to seek his comments in due course. Also, at the other end of the age scale, Gary Jones at Nottingham did his PhD on ACT modelling of children-vs-adult performance on a cognitive task. I'm not in a position to summarise the main mechanisms he used, but I think it did centrally involve the noise parameter(s). Again, his comments might be of help. Happy new year, -- Richard From Hongbin.Wang at uth.tmc.edu Tue Jan 9 00:42:54 2001 From: Hongbin.Wang at uth.tmc.edu (Hongbin Wang) Date: Mon, 8 Jan 2001 23:42:54 -0600 Subject: stroop effect Message-ID: I do not recall whether anyone has done an ACT-R model of the Stroop effect (or other similar tasks that involve inhibition, such as schizophrenia). Clues and comments are welcome. Related to the aging topic, evidence has shown that young adults have better executive control and inhibition ability than infants and older adults do. On the other hand, while people show individual differences, twin studies have suggested that the difference between people in executive control is highly heritable. Thanks. Hongbin Wang Hongbin.wang at uth.tmc.edu From magerko at umich.edu Tue Jan 9 09:53:51 2001 From: magerko at umich.edu (Fisherman) Date: Tue, 9 Jan 2001 09:53:51 -0500 (EST) Subject: stroop effect Message-ID: I actually did a Stroop model for John's class as an undergrad. If you'd like the paper I wrote on it, I can try and dig it up for you. _brian On Mon, 8 Jan 2001, Hongbin Wang wrote: > Greetings.
> > I do not recall whether anyone has done an ACT-R model of the Stroop effect > (or other similar tasks that involve inhibition, such as schizophrenia). > Clues and comments are welcome. > > Related to the aging topic, evidence has shown that young adults have better > executive control and inhibition ability than infants and older adults do. > On the other hand, while people show individual differences, twin studies > have suggested that the difference between people in executive control is > highly heritable. > > Thanks. > > Hongbin Wang > Hongbin.wang at uth.tmc.edu > From dario at cbr.com Wed Jan 10 11:55:45 2001 From: dario at cbr.com (Dario Salvucci) Date: Wed, 10 Jan 2001 11:55:45 -0500 (EST) Subject: how can ACT-R models age? Message-ID: It seems that if one subscribes to the view that we can "age" models by varying parameters (ignoring issues of changes in qualitative / strategic knowledge), the game is really to identify a parsimonious minimal set of parameters that change and to specify how they change over time. Given the nice work on W, this seems to be the best candidate so far for inclusion in this set. Several parameters for noise and perceptual-motor behavior would also be likely candidates (e.g., time to move the hands, or ability to encode visual objects in the periphery). As always, we'd expect to run into tradeoffs between specifying a smaller set of more general parameters (e.g., cycle time, noise) and specifying a larger set of less general parameters that provides greater complexity but also greater explanatory power. By the way, the same approach could apply to other aspects of physical state, mood, etc. -- for instance, can we make our models tired? drunk? aggressive? Plenty of research opportunities here...
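Dario's idea of "aging" a model through a small, domain-independent transformation of its parameter set can be sketched concretely. In the toy Python sketch below, the parameter names (W, activation noise, a relative perceptual-motor latency) and all scaling factors are illustrative assumptions, not values from any published ACT-R aging model:

```python
# Illustrative "young adult" parameter set. The names are loosely modeled on
# ACT-R-style parameters, but the values are assumptions for this sketch only.
YOUNG_ADULT = {
    "W": 1.0,              # source activation (the W discussed in this thread)
    "ans": 0.2,            # activation noise
    "motor_latency": 1.0,  # relative perceptual-motor latency
}

def age_model(params, w_decline=0.3, noise_growth=0.5, slowdown=0.2):
    """Return an 'elderly' parameter set: lower W, more noise, slower
    perceptual-motor action. The default factors are arbitrary."""
    return {
        "W": params["W"] * (1.0 - w_decline),
        "ans": params["ans"] * (1.0 + noise_growth),
        "motor_latency": params["motor_latency"] * (1.0 + slowdown),
    }

elderly = age_model(YOUNG_ADULT)
```

The empirical question Dario raises is then whether one such transformation, estimated in one domain, transfers to others -- for example, whether factors fit to memory tasks also reproduce data from elderly drivers.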
Dario -------------------------------- Dario Salvucci Cambridge Basic Research Email: dario at cbr.com Info: http://www.cbr.com/~dario From G.Jones at derby.ac.uk Thu Jan 11 06:31:54 2001 From: G.Jones at derby.ac.uk (Gary Jones) Date: Thu, 11 Jan 2001 11:31:54 +0000 Subject: how can ACT-R models age? Message-ID: > >It seems that if one subscribes to the view that we can "age" models >by varying parameters (ignoring issues of changes in qualitative / >strategic knowledge), the game is really to identify a parsimonious >minimal set of parameters that change and to specify how they change >over time. hello Dario, sorry I'm joining this discussion a bit late in the day, but as Richard has mentioned I have looked at ageing in children. I implemented several developmental mechanisms within an ACT-R model, some of which mapped onto ACT-R parameters. In particular, the one that gave the best results was EGN (this is ACTv3.0). Starting with a model of adults with no noise, and increasing the level of noise (thereby making it more difficult for the model to select the *right* strategy), the model then fit the data of 7yo's quite well. This mechanism (strategy choice efficiency) has been suggested by Siegler in numerous papers. You could look at them or alternatively look at our March 2000 Psychological Science paper. Gary. From reder at andrew.cmu.edu Thu Jan 11 17:27:53 2001 From: reder at andrew.cmu.edu (Lynne Reder) Date: Thu, 11 Jan 2001 17:27:53 -0500 Subject: how can ACT-R models age? Message-ID: "aging effects," it almost certainly is at its maximum in one's youth and probably would not start to diminish until a person's mid 20s... At 11:31 AM +0000 1/11/01, Gary Jones wrote: >>It seems that if one subscribes to the view that we can "age" models >>by varying parameters (ignoring issues of changes in qualitative / >>strategic knowledge), the game is really to identify a parsimonious >>minimal set of parameters that change and to specify how they change >>over time. 
> > >hello Dario, > >sorry I'm joining this discussion a bit late in the day, but as >Richard has mentioned I have looked at ageing in children. I >implemented several developmental mechanisms within an ACT-R model, >some of which mapped onto ACT-R parameters. In particular, the one >that gave the best results was EGN (this is ACTv3.0). Starting with >a model of adults with no noise, and increasing the level of noise >(thereby making it more difficult for the model to select the >*right* strategy), the model then fit the data of 7yo's quite well. >This mechanism (strategy choice efficiency) has been suggested by >Siegler in numerous papers. You could look at them or alternatively >look at our March 2000 Psychological Science paper. > >Gary. From M.R.Dekker at ppsw.rug.nl Tue Jan 16 04:31:46 2001 From: M.R.Dekker at ppsw.rug.nl (Mark Dekker) Date: Tue, 16 Jan 2001 10:31:46 +0100 Subject: how can ACT-R models age? Message-ID: Very interesting discussion, Dario and respondents! After the 'a bit late' responses, now 'a very late' response. Maybe this is an open door I am kicking in, but for me this issue is not so clear yet. I think that when comparing age groups and manipulating a model of a 'task A' accordingly, those manipulations should be theoretically plausible in their representation of what happens as a consequence of aging. So an assumption is made about how aging is represented in the model. When subsequently (the same) age groups are the topic of modeling on another 'task B', we should use those manipulations that were earlier assumed to represent the age difference. If we were correct, in this way we should be able to capture the age differences or similarities in behavior on this task. I think this idea relates in a general way to the work on individual differences done by Marsha, Christian, Larry Daily and Lynne Reder, in the sense that the modeling of one task is being used to explain or even predict the data on another task. My question would be: Is it possible to capture with this approach (by means of changes in parameter settings) the more strategic differences that are found in behavior between different age groups? A recent example of these qualitative/strategic differences has been found (Nieuwenhuis et al., 2000, in Psychology and Aging, vol. 15(4)) using an antisaccade task. Elderly adults, compared to younger adults, were found to rely more on external cues when the task at hand provides that possibility (in contrast to an endogenous manner of control, which is another possibility to rely on). But when the possibility to rely on external cues is removed, the voluntary endogenous manner of control is then used consistently by the elderly. Although this may not be the hardest finding, other studies have shown qualitative differences as well. This paper is, though, also interesting with respect to the issue of inhibition and aging which popped up. A more general discussion of this appears in the Journals of Gerontology: Series B (1997, vol. 52B(6)) in papers by McDowd, by Burke and by Zacks & Hasher. The idea that age groups can differ in how a task is performed is in my opinion also reinforced by neuroimaging studies that find differences between age groups in patterns of activation when performing working memory tasks. Again, I wonder whether manipulating the continuous parameters can result in the qualitative/strategic differences that are found between age groups. Naturally this question applies not only to aging but also to other sources of inter-individual differences and intra-individual differences (e.g. fatigue), as Dario was pointing out. Mark Dekker ---------------------------------- drs. Mark R.
Dekker Experimentele & arbeidspsychologie Rijksuniversiteit Groningen Grote Kruisstraat 2/1 9712 TS Groningen The Netherlands tel: +31 (0)50 3636346 (work) +31 (0)50 5733932 (home) fax: +31 (0)20 7778112 e-mail: m.r.dekker at ppsw.rug.nl ---------------------------------- From reder at andrew.cmu.edu Tue Jan 16 07:41:35 2001 From: reder at andrew.cmu.edu (Lynne) Date: Tue, 16 Jan 2001 07:41:35 -0500 Subject: how can ACT-R models age? Message-ID: Mark, A number of us (Marsha Lovett, Chris Schunn and I) have shown that strategy choice will vary as a function of success with a strategy, both within and across individuals. Because a parameter change may affect probability of success with a strategy, it is plausible to assume that strategy choice will also change. Reder (1982 and 1986; Psych Rev and Cog Psych, respectively) showed changes in strategy choice as a function of delay (that caused success of retrieval to shift) and base rate manipulations; Reder, Wible and Martin (1987?? in JEP:LMC) showed that older subjects were more prone to use plausibility than retrieval regardless of the delay. More recent work by various combinations of these authors has shown strategy choice variations within individuals, and the tendency to adapt also varies across individuals. So maybe this can be done. --Lynne At 10:31 AM +0100 1/16/01, Mark Dekker wrote: >Very interesting discussion, Dario and respondents! >After the 'a bit late' responses, now 'a very late' response. Maybe >this is an open door I am kicking in, but for me this issue is not so >clear yet. > >I think that when comparing age groups and manipulating a model of a >'task A' accordingly, those manipulations should be >theoretically plausible in their representation of what happens as a >consequence of aging. So an assumption is made about how aging >is represented in the model.
When subsequently (the same) age groups >are the topic of modeling on another 'task B', we should use those >manipulations that were earlier assumed to represent the age >difference. If we were correct, in this way we should be able to >capture the age differences or similarities in behavior on this task. >I think this idea relates in a general way to the work on individual >differences done by Marsha, Christian, Larry Daily and Lynne Reder, >in the sense that the modeling of one task is being used to explain >or even predict the data on another task. > >My question would be: >Is it possible to capture with this approach (by means of changes in >parameter settings) the more strategic differences that are found in >behavior between different age groups? >A recent example of these qualitative/strategic differences has been >found (Nieuwenhuis et al., 2000, in Psychology and Aging, vol. 15(4)) >using an antisaccade task. Elderly adults, compared to younger adults, were >found to rely more on external cues when the task at hand provides >that possibility (in contrast to an endogenous manner of control, >which is another possibility to rely on). But when the possibility >to rely on external cues is removed, the voluntary endogenous manner >of control is then used consistently by the elderly. >Although this may not be the hardest finding, other studies have >shown qualitative differences as well. This paper is, though, also >interesting with respect to the issue of inhibition and aging which >popped up. A more general discussion of this appears in the Journals of >Gerontology: Series B (1997, vol. 52B(6)) in papers by McDowd, by >Burke and by Zacks & Hasher. > >The idea that age groups can differ in how a task is performed is >in my opinion also reinforced by neuroimaging studies that find >differences between age groups in patterns of activation when >performing working memory tasks.
>Again, I wonder whether manipulating the continuous parameters can >result in the qualitative/strategic differences that are found between age >groups. Naturally this question applies not only to aging but also >to other sources of inter-individual differences and >intra-individual differences (e.g. fatigue), as Dario was pointing out. > >Mark Dekker > > >---------------------------------- >drs. Mark R. Dekker >Experimentele & arbeidspsychologie >Rijksuniversiteit Groningen >Grote Kruisstraat 2/1 >9712 TS Groningen >The Netherlands > >tel: +31 (0)50 3636346 (work) > +31 (0)50 5733932 (home) >fax: +31 (0)20 7778112 >e-mail: m.r.dekker at ppsw.rug.nl >---------------------------------- -- __________________________________________________________ Lynne M. Reder, Professor Department of Psychology Carnegie Mellon University Pittsburgh, PA 15213 phone: (412)268-3792 fax: (412) 268-2844 email: reder at cmu.edu URL: http://www.andrew.cmu.edu/~reder/reder.html
From seifert at umich.edu Tue Jan 16 11:45:29 2001 From: seifert at umich.edu (Colleen Seifert) Date: Tue, 16 Jan 2001 11:45:29 -0500 Subject: FINAL CALL FOR PAPERS Message-ID: August 1 - 4, 2001 at the University of Edinburgh, Scotland http://www.hcrc.ed.ac.uk/cogsci2001 Cognitive Science pursues a scientific understanding of the mind through all available methodologies, notably those of anthropology, artificial intelligence, computer science, education, linguistics, logic, neuroscience, philosophy and psychology, in whatever combinations are most appropriate to the topic at hand. The focus of this year's conference will be to represent the full breadth of research in the cognitive sciences, in ways that will lead to useful mutual interaction. All contributions should be addressed to an interdisciplinary audience. - STANDARD SPOKEN PAPERS: 20-minute spoken presentations, which (if accepted) will be published as 6-page papers in the Proceedings; - STANDARD POSTERS: poster presentations, which (if accepted) will be published as 6-page papers in the Proceedings; - ABSTRACT POSTERS: poster presentations, which (if accepted) will be published in the Proceedings as one-page abstracts, but can be submitted only by members of the Cognitive Science Society. For information about membership, see http://www.cognitivesciencesociety.org Submissions will be reviewed by an international panel of experts according to the following criteria: Significance; Relevance to a Broad Audience of Cognitive Science Researchers; Originality; Technical Merit; and Clarity of Presentation. SUBMISSION SPECIFICATIONS: ALL submissions for standard spoken papers, standard posters, and abstract posters, should be submitted according to instructions at http://www.hcrc.ed.ac.uk/cogsci2001. Electronic templates are provided for a number of word processing programs.
Papers must be submitted in PDF format, and instructions for converting LaTeX, Word and WordPerfect files into PDF format are provided at the conference website. PROPOSALS THAT DO NOT FIT THESE SPECIFICATIONS WILL NOT BE CONSIDERED. SYMPOSIA: 90-minute spoken presentations, including three or more well-integrated talks on a common topic and possibly a discussant, will be published as one-page abstracts in the Proceedings. Symposia proposals can be emailed to: cogscipcc at cogsci.ed.ac.uk TUTORIALS: Full and half-day sessions providing tools, techniques, and results to use in teaching and research will be offered. Tutorial Proposals can be emailed to frank.ritter at nottingham.ac.uk DEADLINE FOR RECEIPT OF ALL SUBMISSIONS is 7th FEB. 2001. From r.m.young at herts.ac.uk Tue Jan 16 19:00:31 2001 From: r.m.young at herts.ac.uk (Richard M Young) Date: Wed, 17 Jan 2001 00:00:31 +0000 Subject: 2-parameter search in ACT-R (e.g. Slow Kendler) Message-ID: I have some observations about the process of "2D parameter search", and a fairly dramatic example of the landscape being searched. I'll try and keep this note fairly short. Its purpose is just to give a summary, and to ask if it rings any bells for anyone and see if anyone has any comments or further examples, or can point me to existing discussion. This investigation came about in the following way. For my undergraduate class in Cognitive Modelling (final-year honours students in Cognitive Science), for coursework I ask them, in part, to adapt the model for the Fast Kendler Ss given in Chapter 4 of the 1998 book, to be a model of the Slow Kendler Ss -- that's basically the exercise in Section 8 of the ACT-R on-line tutorial. The exercise includes a search to optimise the values of two parameters in order to minimise the deviation (RMSD) of the model predictions from the empirical data.
The parameters are (1) the :egs noise setting, and (2) the number of :eventual-successes on newly acquired rules, which for brevity I'll refer to as the ":rule" parameter. I noticed that students were finding apparently optimal fits in very different regions of the parameter space. For example, some were finding minimal values of around 1.0-1.5 near the point (:rule=20, :egs=0.60), while others were apparently finding values of 0.5-1.0 near (:rule=90, :egs=0.15). To understand what was going on, I undertook a careful search, and sketched some contour maps to show the structure of the space. The result surprised me. There is no single optimum (i.e. "lowest point"). Instead, the low parts of the parameter space form a long, narrow valley with a level floor. For much of its length, the valley is very narrow -- indeed, it's more like a gorge, with very steep sides. For higher values of the :rule parameter -- say, above 50 -- the valley is virtually straight and almost parallel to the :rule axis, just narrowing further and converging slightly with the axis at higher values. Below a :rule of 50 (down to 1), the valley curves round to be more parallel to the :egs axis. This valley has a couple of features which I find remarkable (and would like to understand better). One is how level its floor is. The lowest-level contour I could consistently trace is for RMSD value of 1.8, and that holds for almost the entire length. The contour is clearly still there at :rule=200, though it perhaps disappears by around :rule=500, with the valley floor rising to around 2.0. At the "near" end, for values of :rule of 5 or less, there are hints of a 1.5-level contour -- so I suppose, strictly speaking, this is the "best fitting" part of the space -- and the valley floor does rise slightly, to around 2.5, at its very end for :rule=1. But the dominant finding is how level it is. It would be like finding a gorge you could walk along the bottom of for tens of miles without any change of height. 
The second feature is how narrow the valley is, especially for higher :rule values. For example, at :rule=200, the 1.8 contour is only 0.005 of an :egs unit wide. This landscape has implications, both practical and theoretical, for 2D parameter search. For instance: (a) The usual informal method for 2-parameter search -- certainly what I teach in class, and I think what is done in the ACT summer school -- works fine if the landscape takes the form of a nicely-shaped basin, but it is not adequate for searching a structure such as the one I've described. (b) With much of the valley so narrow, it can be difficult to find without a "map" or some other guide. (c) The lack of a reasonable minimum fit falsifies the main assumption behind parameter-search -- namely, that the data determine a set of parameter values. In this case, they don't. Furthermore, it could be argued that it also undermines the case for the model as a plausible account of the cognitive processes in the task, since there is no set of parameter values which can be justified as being "correct". Comments are welcome. I'll try to write up a report on this investigation and make it available, though it's hard to see how it could be published since it would be of interest to hardly anyone outside the ACT community. -- Richard From rvb at cs.nottingham.ac.uk Tue Jan 16 21:43:11 2001 From: rvb at cs.nottingham.ac.uk (Roman Belavkin) Date: Wed, 17 Jan 2001 02:43:11 -0000 Subject: 2-parameter search in ACT-R (e.g. Slow Kendler) Message-ID: > ACTors, > > I have some observations about the process of "2D parameter search", and a > fairly dramatic example of the landscape being searched. I'll try and keep > this note fairly short. Its purpose is just to give a summary, and to ask > if it rings any bells for anyone and see if anyone has any comments or > further examples, or can point me to existing discussion. I did some study of ACT-R's conflict resolution mechanism with respect to the parameters affecting it 
(which in your case are :rule and :egs, though in fact it is a bit more complicated, since :rule consists of :p and :c, plus the global :g). This study became very important for my research and I think it can explain the phenomena you are referring to. I also thought about finding the optimal solution for these parameters, and the technique turned out to be very similar to simulated annealing. These properties were mentioned in our talk with Frank Ritter at the ACT-R Workshop 2000 (also in my 1st year report at http://www.cs.nott.ac.uk/~rvb/), but I hope these results will be in a proper paper soon (plus, this is an important part of my PhD). Hope it helps, Roman From niels at tcw2.ppsw.rug.nl Wed Jan 17 02:51:43 2001 From: niels at tcw2.ppsw.rug.nl (Niels Taatgen) Date: Wed, 17 Jan 2001 08:51:43 +0100 Subject: 2-parameter search in ACT-R (e.g. Slow Kendler) Message-ID: I think you address an interesting aspect of parameter fitting: if one tries to fit four datapoints using two variables, one can expect trouble. It also points at what many people consider a weakness of ACT-R: there are so many parameters that you can fit any data with it. So to defend the model, I will briefly sketch some of its background. The first version of the Kendler model I made was truly a zero-parameter model, in the sense that it was actually a model I had made for a completely different task, namely a variant of the balanced-beam task. In that model I attempted to make the model as independent of the task as possible, by writing productions that could be applicable to any task that somehow resembled the balanced-beam. That is how I started with the Kendler task: by reusing the model of the balanced-beam. That model behaved as the "older child/adult" version. So the next question I asked myself was: if I just remove a number of the critical production rules in the model, will it change its behavior to the "younger child" version? This turned out to be the case. 
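An annealing-style search of the kind Roman alludes to can be sketched generically. The code below is an editorial illustration, not Roman's actual technique; the objective, step sizes, and cooling schedule are all placeholders:

```python
import math
import random

def anneal(objective, start, step=0.5, temp=1.0, cooling=0.95,
           iters=500, seed=0):
    """Minimal simulated-annealing sketch for a 2-parameter fit.
    Uphill moves are accepted with probability exp(-delta/temp), which
    lets the search slide along a narrow valley instead of stalling
    against its steep walls."""
    rng = random.Random(seed)            # seeded for reproducibility
    x = list(start)
    fx = objective(*x)
    best_f, best_x = fx, x[:]
    for _ in range(iters):
        cand = [x[0] + rng.uniform(-step, step),
                x[1] + rng.uniform(-step, step)]
        fc = objective(*cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc             # accept the move
        if fx < best_f:
            best_f, best_x = fx, x[:]    # remember the best point seen
        temp *= cooling                  # cool: fewer uphill moves later
    return best_f, best_x
```

On a toy objective such as lambda a, b: (a - 1) ** 2 + (b - 2) ** 2, started from (5, 5), the search typically ends close to (1, 2).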
Up to then I hadn't been bothered by parameter fitting, and indeed the absolute fit between the Kendler data and my model wasn't great. But the qualitative result (adults are better at reversal shifts but worse at extra-dimensional shifts, whereas children show the opposite pattern) was clearly there. The version that ended up in the book was much more readable than my original version, as it was simplified to just do the Kendler task, and the parameters were fitted to the data. Although these two changes improve the presentation, they also give rise to the "one can fit any set of data you like" criticism. Niels From ruml at eecs.harvard.edu Wed Jan 17 17:16:14 2001 From: ruml at eecs.harvard.edu (Wheeler Ruml) Date: Wed, 17 Jan 2001 17:16:14 -0500 Subject: 2-parameter search in ACT-R (e.g. Slow Kendler) Message-ID: > [...] if it rings any bells for anyone and see if anyone has any > comments or further examples, or can point me to existing > discussion. [...] Instead, the low parts of the parameter space > form a long, narrow valley with a level floor. This reminds me of the explorations I did of Gary Dell et al.'s model of lexical access during speech production (Psych Review, 2000, 107(3), pp. 609-634; see especially pp. 620-623, figures 9-11). I found a long, narrow valley of parameter settings that gave similar fits. In this case, this was a problem because some of the good fits occurred at settings that should have produced poor behavior. > (a) The usual informal method for 2-parameter search [...] is not > adequate for searching a structure such as the one I've described. Like Roman, I found it important to use an automated technique when optimizing the fit - it would be quite tedious and difficult to detect the gradient by hand. I'm not familiar with Roman's method, but the one I used is described briefly in the paper (and I've also used it successfully in subsequent work). 
I suspect it will scale better to higher-dimensional searches than an annealing-based approach, since it tries to adapt somewhat to the valley structure. It is also careful to minimize the number of model simulations needed to estimate the fit at a particular setting. (I can try to make Common Lisp code available if that would be useful for anyone.) I would love to hear from folks about automated methods they use to fit simulation models to data, or studies of such methods - someday I'd like to do a comprehensive study if one hasn't already been done! > (c) The lack of a reasonable minimum fit falsifies the main > assumption behind parameter-search -- namely, that the data > determine a set of parameter values. In this case, they don't. Couldn't you just say that the data determine a set of settings? Since there is sampling error in the human data (and perhaps the model too), at best the data are just defining a distribution over the settings anyway. > Furthermore, it could be argued that it also undermines the case for > the model as a plausible account of the cognitive processes in the > task, since there is no set of parameter values which can be > justified as being "correct". Hmmm - I'm not following this. Why do the parameter values have to be unique, as long as some exist that match the data? If the model can match anything, that's a different problem (a la Roberts & Pashler), and if it produces the same human-like behavior at many settings, that just means that there are fewer real degrees of freedom in the model than there are parameters, which seems harmless. And in practice, as Niels points out, there are surely other constraints that could be used to help constrain the choice. 
Best wishes, Wheeler -- Wheeler Ruml, http://www.eecs.harvard.edu/~ruml/, 617/495-2081 voice ruml at eecs.harvard.edu, Maxwell Dworkin Lab 219 617/496-1066 fax From cl at andrew.cmu.edu Sat Jan 20 10:21:27 2001 From: cl at andrew.cmu.edu (Christian Lebiere) Date: Sat, 20 Jan 2001 10:21:27 -0500 Subject: 2-parameter search in ACT-R (e.g. Slow Kendler) Message-ID: The steep ravine with gently sloping floor has been a cherished part of connectionist lore since at least 1985 in the early days of backpropagation. A number of numerical optimization techniques have been used to try to speed up the weight learning (a.k.a. parameter tweaking), including the momentum method, quickprop, and various second order methods, all with various degrees of success. But poorly configured search spaces are a fundamental computational problem for which no magic bullet is likely to exist. Otherwise we would already have a trillion-unit backprop net with the capacities of the human brain. The ravine results from tightly coupled parameters, in which the value of one (or more) strongly determines the optimal value of the other(s). In the case of connectionist networks, for example, the value of the weights from the input units to the hidden units will strongly determine the value of the weights from the hidden units to the output units, because the former determine the meaning of the latter. That is likely to result in any system with multiple parameters, unless those parameters are independent from each other. The basic problem in this case is the lack of data, as Niels suggested. The impact of the :rule parameter is particularly strong initially but will fade with experience because its influence will be reduced in the Bayesian weighting, whereas :egs is a constant architectural parameter. Therefore one would expect that having the learning curve data in addition to the aggregate performance data would more strongly determine a single parameter set. 
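Christian's point about tightly coupled parameters producing ravines can be made concrete with a deliberately trivial toy (purely illustrative, not ACT-R code): when the data constrain only some combination of the parameters, that combination is pinned down but the individual values are not, and the set of good fits is a curve rather than a point.

```python
def coupled_loss(a, b, target=1.0):
    """Toy objective in which only the product a*b is constrained by
    the 'data'.  Every point on the hyperbola a*b == target fits
    equally well, so the set of best fits is a curve, not a point."""
    return (a * b - target) ** 2
```

Here coupled_loss(0.5, 2.0) and coupled_loss(2.0, 0.5) are both exactly zero: the "valley floor" is the whole hyperbola a*b = 1, a one-dimensional structure in the two-dimensional space, in miniature.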
For example, in my work on cognitive arithmetic (Lebiere, 1998; Lebiere, 1999), I found that the level of (activation) noise will fundamentally determine the slope of the learning curve, whereas other parameters will only shift it up and down by a constant factor. Other parameter explorations for a model of implicit learning can be found in (Lebiere & Wallach, 2000). This suggests an advantage of an architecture like ACT-R over neural networks, namely that the parameters are readily interpretable (and generally fewer). This (sometimes) allows one to set them by hand through careful analysis of their effect on model behavior rather than through brute-force search. Not that we don't sometimes have to resort to that as well. The parameter optimizer available on the ACT-R web site tries to deal with the valley problem by resetting the direction of search according to the conjugate-gradient technique. Richard, I would be interested to know how well it does on your example. Roman and Wheeler, could you please make your parameter search programs available on the ACT-R web site by emailing them to db30+ at andrew.cmu.edu? Different techniques perform best on different problems, so it is important to have a wide assortment available. In and of itself there is nothing wrong with parameter tuning. But of course it is not predictive, and therefore fits to the data that result from parameter tuning cannot be taken as support for the model or theory. That is why we try to determine constant values (or ranges of values) for architectural parameters (e.g. :egs [though take note of Werner Tack's arguments regarding that parameter at the 2000 workshop]) and rules and constraints for setting initial values of knowledge parameters (e.g. :rule). Christian Lebiere, C. (1998). The dynamics of cognition: An ACT-R model of cognitive arithmetic. Ph.D. Dissertation. CMU Computer Science Dept Technical Report CMU-CS-98-186. Pittsburgh, PA. Available at http://reports-archive.adm.cs.cmu.edu/. 
Lebiere, C. (1999). The dynamics of cognitive arithmetic. Kognitionswissenschaft [Journal of the German Cognitive Science Society], special issue on cognitive modelling and cognitive architectures, D. Wallach & H. A. Simon (eds.), 8(1), 5-19. Lebiere, C., & Wallach, D. (2000). Sequence learning in the ACT-R cognitive architecture: Empirical analysis of a hybrid model. In Sun, R. & Giles, L. (Eds.), Sequence Learning: Paradigms, Algorithms, and Applications. Springer LNCS/LNAI, Germany. From apetrov at andrew.cmu.edu Sat Jan 20 12:38:57 2001 From: apetrov at andrew.cmu.edu (Alexander Petrov) Date: Sat, 20 Jan 2001 12:38:57 -0500 Subject: 2-parameter search in ACT-R Message-ID: > This suggests an advantage of an architecture like ACT-R over neural > networks, namely that the parameters are readily interpretable (and > generally fewer). Models have two complementary sets of parameters -- (a) global parameters, which are few in number and apply throughout the system, and (b) local parameters associated with individual processing elements. In ACT-R, the global parameters include the decay rate, activation noise, W, etc., as well as the specific form of knowledge representation (e.g. chunks with three slots vs. chunks with four). The local parameters in ACT-R include the base-level activations of the chunks, the similarities between them, associative strengths, and the various utility parameters of productions. In backprop networks the local parameters are the weights, but there are also global, interpretable parameters just as in ACT-R. For instance, global parameters may include the decay rate, the gain of the sigmoid function, the number of units in the hidden layer (which incidentally plays a similar role to that of ACT-R's W), the pattern of connectivity between layers, etc. In both paradigms the global parameters are generally set by the human modeler, interpreted, reported in publications, and so on. 
In contrast, the local parameters are constrained by some learning algorithm and the human modeler cannot "tweak" them at will. The backpropagation algorithm minimizes the "error" defined as some sum of squares, while ACT-R learning algorithms maximize some Bayesian posterior probability. This, in my view, is not a principled difference. For example, the value of each individual weight is just as sharply nailed down by backpropagation within the context of the surrounding weights as the utility of an ACT-R production is nailed down by the PG-C formula within the context of the surrounding productions. It is not fair to count the local parameters in one model, not count them in another, and pretend that the second has fewer parameters than the first. I agree with Christian that one advantage of ACT-R is that even its local parameters are interpretable. Another advantage is that, due to rational analysis, the effect of many of its learning algorithms is available in closed form. Therefore, no search is involved -- one can just update the base-level activation of a chunk according to the closed-form (though approximate) activation equation from the book. Alex ----------------------------------------------------------- Alexander Alexandrov Petrov apetrov+ at andrew.cmu.edu http://www.andrew.cmu.edu/~apetrov Post-doctoral associate Department of Psychology Baker Hall 345B, (412)268-3498 Carnegie Mellon University Pittsburgh, PA 15213, USA ----------------------------------------------------------- From hahaha at sexyfun.net Sun Jan 21 09:55:15 2001 From: hahaha at sexyfun.net (Hahaha) Date: Sun, 21 Jan 2001 09:55:15 -0500 (EST) Subject: Snowhite and the Seven Dwarfs - The REAL story! Message-ID: Content-Type: text/plain; charset="us-ascii" Today, Snowhite was turning 18. The 7 Dwarfs always where very educated and polite with Snowhite. When they go out work at mornign, they promissed a *huge* surprise. Snowhite was anxious. 
Suddlently, the door open, and the Seven Dwarfs enter... There was a virus here, but it has been removed. help at cs.cmu.edu 19-Aug-2013 From iccm at gmu.edu Wed Jan 24 14:10:34 2001 From: iccm at gmu.edu (ICCM-2001) Date: Wed, 24 Jan 2001 14:10:34 -0500 Subject: ICCM-2001 March 1st Deadline Message-ID: Fourth International Conference of Cognitive Modeling ICCM-2001 http://www.hfac.gmu.edu/~iccm/ To be held July 26 - 28, 2001, at George Mason University, Fairfax, Virginia, USA. DEADLINE FOR SUBMISSIONS: MARCH 1st 2001 THEME Computational modeling has emerged as a central, but complex and sometimes fractionated theme in research on cognition. ICCM provides a worldwide forum for cognitive scientists who build such computational cognitive models and test them against empirical cognitive data. The goal of ICCM-2001 is to bring researchers from diverse backgrounds together to compare cognitive models, to evaluate models using human data, and to further the development, accumulation, and integration of cognitive theory. SUBMISSION CATEGORIES -- http://www.hfac.gmu.edu/~iccm/ Doctoral Consortium Full-day session one day prior to the main conference for doctoral students to present dissertation proposal ideas to one another and receive feedback from experts from a variety of modeling approaches. Student participants receive complimentary conference registration as well as lodging and travel reimbursement; maximum amounts will be determined at a later date. Newell Prize for Best Student Paper Award given to the paper first-authored by a student that provides the most innovative or complete account of cognition in a particular domain. 
The winner of the award will receive full reimbursement for the conference fees, lodging costs, and a $1,000 stipend. The Best Applied Research Paper Award To be eligible, 1) the paper should capture behavioral data not gathered in the psychology lab, OR should capture behavioral data in a task that has high external validity; 2) the best paper is the one from this category that provides the most innovative or complete solution to a real-world, practical problem. Competitive symposia Three to six participants submit a symposium in which they all present models relating to the same domain or phenomenon. The participants must agree upon a set of fundamental issues in their domain that all participants must address or discuss. Papers and Posters Papers and posters will follow the 6-page, 10-point, double-column, single-spaced, US-letter format used by the Annual Cognitive Science Society Meeting. Formatting templates and examples will be made available in February 2001. ICCM-2001 http://www.hfac.gmu.edu/~iccm/ DEADLINE FOR SUBMISSIONS: MARCH 1st 2001 -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Wayne D. Gray, Program Director HUMAN FACTORS & APPLIED COGNITIVE PROGRAM George Mason University, ARCH Lab/HFAC Program, MSN 3f5, Fairfax, VA 22030-4444 VOICE: +1 (703) 993-1357 FAX: +1 (703) 993-1330 http://hfac.gmu.edu/~gray "Work is infinite, time is finite, plan accordingly." _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
From mark_mitchell at kmug.org Fri Jan 26 10:02:06 2001 From: mark_mitchell at kmug.org (Mark Mitchell) Date: Sat, 27 Jan 2001 00:02:06 +0900 Subject: Can Dec handle abstract? Message-ID: I am grappling with some basic concepts. If any list members could help me out, I would be much obliged. My question concerns the degree to which declarative knowledge might handle abstraction (within the context of ACT theory). In my experiments with children in a foreign language learning task, I can unravel two modes of learning, one of which appears to be item-based (direct retrieval) and the other whose defining characteristic is a level of abstraction; that is, it can be applied to novel (albeit similar) stimuli. In language experiments using adult subjects, others have reported an asymmetry in learning (production training does not transfer well to comprehension, and vice versa), and have interpreted this to mean that the knowledge gained was therefore procedural. However, I have seen both; in some tasks, the learned rule or schema transfers OK, but in others it does not. But it seems to me that negative data in this case is essentially NO data. If I see an asymmetry, I can say that this is consistent with production learning. But if I don't see an asymmetry, given the "atomic components" concept, it seems possible that the two tasks could 'share' many of the newly learned productions, so long as the subgoals match. Indeed, this seems to be what Pinker argues when he claims that it is more parsimonious for there to be one knowledge base tapped by many different processes (comprehension, production, grammaticality judgment). It would also be consistent with identical-elements theory, no? My question again: since the knowledge gained is abstract, is that good evidence that it is NOT declarative? 
Since I am dealing with children, they are certainly learning more implicitly than the adults in the previous studies, which would lead me to believe that declarative learning should be more difficult for them. Sorry if these questions are a little naive! mark mitchell Mie, Japan From gray at gmu.edu Fri Jan 26 15:33:49 2001 From: gray at gmu.edu (Wayne Gray) Date: Fri, 26 Jan 2001 15:33:49 -0500 Subject: Student submissions for ICCM-2001 Message-ID: Update: http://www.hfac.gmu.edu/~iccm/ Students who have papers or posters accepted for ICCM will receive complimentary conference registration as well as lodging and travel reimbursement; maximum amounts will be determined at a later date. http://www.hfac.gmu.edu/~iccm/ From rsun at cecs.missouri.edu Sat Jan 27 17:14:51 2001 From: rsun at cecs.missouri.edu (Ron Sun) Date: Sat, 27 Jan 2001 16:14:51 -0600 Subject: new book on sequence learning Message-ID: Book announcement: SEQUENCE LEARNING: PARADIGMS, ALGORITHMS, AND APPLICATIONS edited by: Ron Sun and C. L. Giles published by Springer-Verlag: LNAI 1828 This book is intended for use by scientists, engineers, and students interested in sequence learning in artificial intelligence, neural networks, and cognitive science. The book will introduce essential algorithms and methods of sequence learning and further develop them in various ways. With the help of these concepts, a variety of applications will be examined. This book will allow the reader to acquire an appreciation of the breadth and variety of sequence learning and its potential as an interesting area of research and application. The reader is assumed to have basic knowledge of neural networks and AI concepts. Sequential behavior is essential to intelligence and a fundamental part of human activities ranging from reasoning to language, and from everyday skills to complex problem solving. 
Sequence learning is an important component of learning in many task domains --- planning, reasoning, robotics, natural language processing, speech recognition, adaptive control, time series prediction, and so on. Naturally, there are many different approaches to sequence learning. These approaches deal with somewhat differently formulated sequential learning problems and/or different aspects of sequence learning. This book will provide an overall framework for this field of study.

---------------------------------------------------------
Table of Contents

Introduction to Sequence Learning by Ron Sun

Part 1: Sequence Clustering and Learning with Markov Models
Sequence Learning via Bayesian Clustering by Dynamics by Paola Sebastiani, Marco Ramoni, Paul Cohen
Using Dynamic Time Warping to Bootstrap HMM-Based Clustering of Time Series by Tim Oates, Laura Firoiu, Paul Cohen

Part 2: Sequence Prediction and Recognition with Neural Networks
Anticipation Model for Sequential Learning of Complex Sequences by DeLiang Wang
Bidirectional Dynamics for Protein Secondary Structure Prediction by Pierre Baldi, Soren Brunak, Paolo Frasconi, Gianluca Pollastri, Giovanni Soda
Time in Connectionist Models by Jean-Cedric Chappelier, Marco Gori, Alain Grumbach
On the Need for a Neural Abstract Machine by Diego Sona, Alessandro Sperduti

Part 3: Sequence Discovery with Symbolic Methods
Sequence Mining in Categorical Domains: Algorithms and Applications by Mohammed J. Zaki
Sequence Learning in the ACT-R Cognitive Architecture: Empirical Analysis of a Hybrid Model by Christian Lebiere, Dieter Wallach

Part 4: Sequential Decision Making
Sequential Decision Making Based on Direct Search by Jurgen Schmidhuber
Automatic Segmentation of Sequences through Hierarchical Reinforcement Learning by Ron Sun, Chad Sessions
Hidden-Mode Markov Decision Processes for Nonstationary Sequential Decision Making by Samuel P. M. Choi, Dit-Yan Yeung, Nevin L. Zhang
Pricing in Agent Economies Using Neural Networks and Multi-agent Q-learning by Gerald Tesauro

Part 5: Biologically Inspired Sequence Learning Models
Multiple Forward Model Architecture for Sequence Processing by Raju S. Bapi, Kenji Doya
Integration of Biologically Inspired Temporal Mechanisms into a Cortical Framework for Sequence Processing by Herve Frezza-Buet, Nicolas Rougier, Frederic Alexandre
Attentive Learning of Sequential Handwriting Movements: A Neural Network Model by Stephen Grossberg, Rainer Paine

About the Editors
Author Index
------------------------------------------------------------

To order, go to http://www.cecs.missouri.edu/~rsun/book5-ann.html or http://www.springer.de/cgi-bin/search_book.pl?isbn=3-540-41597-1 (see also http://www.springer.de/comp/lncs/ and http://www.cecs.missouri.edu/~rsun/). 2001. XII, 391 pp. Softcover. ISBN 3-540-41597-1. DM 82, recommended list price. Contact: http://www.springer.de/contact.html, phone +49 6221 487 0.

===========================================================================
Prof. Ron Sun                     http://www.cecs.missouri.edu/~rsun
CECS Department                   phone: (573) 884-7662
University of Missouri-Columbia   fax: (573) 882-8318
201 Engineering Building West
Columbia, MO 65211-2060           email: rsun at cecs.missouri.edu

http://www.cecs.missouri.edu/~rsun
http://www.cecs.missouri.edu/~rsun/journal.html
http://www.elsevier.com/locate/cogsys
===========================================================================

From hahaha at sexyfun.net Sat Jan 27 12:07:19 2001 From: hahaha at sexyfun.net (Hahaha) Date: Sat, 27 Jan 2001 12:07:19 -0500 (EST) Subject: Snowhite and the Seven Dwarfs - The REAL story! Message-ID: Today, Snowhite was turning 18. The 7 Dwarfs always where very educated and polite with Snowhite. When they go out work at mornign, they promissed a *huge* surprise. Snowhite was anxious. Suddlently, the door open, and the Seven Dwarfs enter...
There was a virus here, but it has been removed. Help at cs.cmu.edu 19-Aug-2013

From elly at cs.vu.nl Wed Jan 31 15:02:47 2001 From: elly at cs.vu.nl (by way of Peter Brusilovsky) Date: Wed, 31 Jan 2001 15:02:47 -0500 Subject: Positions at University of UTRECHT - NL Message-ID:

UTRECHT UNIVERSITY / TNO HUMAN FACTORS

PROJECT 1: DEVELOPMENT OF A MODEL FOR NAVIGATING ON THE WEB
Institute of Information and Computing Sciences, Utrecht University, Utrecht

We are looking, within the Department for Information Science, for a candidate who will develop and test a cognitive model that explains the role of cognitive, motivational and emotional factors, alongside contextual and interface factors, in navigating on the Web and searching within a website. The first phase of the project aims at formulating a cognitive model indicating which factors are most important, under which circumstances, for the effectiveness and efficiency of search behavior and the satisfaction of users. The second phase of the project concentrates on designing means or tools for computer support in relation to the factors that appeared to be important in the model. Sitemaps and landmarks, for instance, will compensate for spatial problems; therefore, users with low mental spatial abilities should benefit from using such a tool. The efficacy of these means of support will be studied experimentally. By means of empirical research, individual capacities of users will be determined with selected and specifically constructed tests. These users will perform specific search tasks in a Web environment, and the effectiveness and efficiency of their search behavior and their satisfaction will be recorded. Larger-scale surveys, embedded in websites, will also be presented and analysed.
Statistical data analysis techniques will include, among others, multiple linear regression and LISREL analyses. This project is part of the research programme 'The Design and Use of Digital Information' (www.cs.uu.nl/groups/IK/index/index.htm) conducted by the Department for Information Science, and will be carried out in close cooperation with TNO Human Factors (Soesterberg); see also project 2. The project should result after 4 years in a dissertation. Information specifically on this project is available from Dr. Herre van Oostendorp (herre at cs.uu.nl, phone +31 30 2538357), Dept. for Information Science, Utrecht University, Padualaan 4, 3584 CH Utrecht, The Netherlands. We seek a candidate with a recent master's degree in Psychology (Experimental Psychology or Cognitive Ergonomics) with a strong interest in Human-Computer Interaction and in statistical data analysis techniques such as regression analysis and LISREL.

PROJECT 2: PERSONAL ASSISTANT FOR MOBILE WEB-SERVICES

We are looking for a candidate who will develop a general, usage-centered design solution for high-grade accessibility of future Web-based services. He or she will further develop and test theory on personal assistants, available at home (e.g. via a PC) and on the move (e.g. via a handheld), that realize user- and context-tailored interaction. Via the World Wide Web, more and more services are available to an increasing group of users, who will interact with these services from various locations and with various devices. For a successful interaction, the content and structure of the user interface should be attuned to the diversity of user needs, dialogue means and use contexts. It will not be possible to develop a universal, static user interface that can cope with this diversity. Therefore, various personalization concepts for (adaptive) user interfaces are currently being developed.
However, a theoretical and empirical foundation for predicting which type of personalization will be effective and attractive for future mobile services is still lacking. Central questions are: What kind of information (such as user profiles, information about other users, the user's role, history, Web profile, location and current device) does a personal assistant need for optimal selection and presentation of the Web services' content to the individual user? How should the personal assistant present itself in order to optimize its effectiveness and efficiency, and user satisfaction? For example, when, and which type of, animation or anthropomorphization results in improved accessibility? This project is part of the research programs 'Human Interfaces' of TNO Human Factors (Soesterberg, The Netherlands) and 'The Design and Use of Digital Information' of the University of Utrecht (Department of Information Science); see also project 1. Project 2 will be carried out mainly at TNO-HF. The project should result in a dissertation. Information on this project is available from Dr. Mark Neerincx (neerincx at tm.tno.nl, phone +31 346 356 298), TNO Human Factors, Kampweg 5, P.O. Box 23, 3769 ZG Soesterberg, The Netherlands. We are looking for candidates with an interest in the above-mentioned research themes and an academic background in Psychology, Cognitive Science or a related field with a substantial focus on human-computer interaction. Affinity with Internet technology is desirable. For both projects we offer a full-time appointment of 4 years. The salary before tax is approx. Dfl 3000,- during the first year, increasing to approx. Dfl 4200,- in the fourth year, plus a comfortable budget for conference visits, excellent computer facilities, a contribution to promotion costs, and a customised package of additional courses. Information for both projects is available from Prof. dr.
Jürgen van den Berg (jurgen at cs.uu.nl, phone +31 30 2536415), Institute of Information and Computing Sciences, Faculty of Mathematics and Computing Science, Utrecht University, Padualaan 14, 3584 CH Utrecht, The Netherlands. www.cs.uu.nl/groups/IK