From s.k.mehlhorn at rug.nl Sat Dec 10 04:52:13 2016 From: s.k.mehlhorn at rug.nl (Katja Mehlhorn) Date: Sat, 10 Dec 2016 10:52:13 +0100 Subject: [ACT-R-users] Groningen Spring School on Cognitive Modeling Message-ID: Groningen Spring School on Cognitive Modeling - ACT-R, Nengo, PRIMs, & Accumulator Models - Date: April 3-7, 2017 Location: Groningen, the Netherlands Fee: € 250 (late fee € 50 after February 15) More information and registration: www.ai.rug.nl/springschool We would like to invite you to the 2017 Groningen Spring School on Cognitive Modeling. As last year, the Spring School will cover four different modeling paradigms: ACT-R, Nengo, PRIMs, and Accumulator models. It thereby offers a unique opportunity to learn the relative strengths and weaknesses of these approaches. Each day will consist of four theory lectures, one on each paradigm. Each modeling paradigm also includes hands-on assignments. Although students are free to choose the number of lectures they attend, we recommend signing up for lectures on two of the modeling paradigms and completing the tutorial units for one of them. At the end of each day there will be a plenary research talk showing how these different approaches to modeling are applied. The Spring School will conclude with a keynote lecture and a conference dinner. We are excited to announce that Sander Bohte has accepted our invitation and will be the keynote speaker. Admission is limited, so register soon! Please feel free to forward this email and the attached flyer to others who might be interested! ACT-R Teachers: Jelmer Borst, Hedderik van Rijn, Katja Mehlhorn (University of Groningen) Website: http://act-r.psy.cmu.edu. ACT-R is a high-level cognitive theory and simulation system for developing cognitive models of tasks that range from simple reaction time experiments to driving a car, learning algebra, and air traffic control. ACT-R can be used to develop process models of a task at a symbolic level. Participants will follow a compressed five-day version of the traditional summer school curriculum. We will also cover the connection between ACT-R and fMRI. Nengo Teacher: Terry Stewart (University of Waterloo) Website: http://www.nengo.ca Nengo is a toolkit for converting high-level cognitive theories into low-level spiking neuron implementations. In this way, aspects of model performance such as response accuracy and reaction times emerge as a consequence of neural parameters such as the neurotransmitter time constants. It has been used to model adaptive motor control, visual attention, serial list memory, reinforcement learning, the Tower of Hanoi, and fluid intelligence. Participants will learn to construct these kinds of models, starting with generic tasks like representing values and positions, and ending with full production-like systems. There will also be special emphasis on extracting various forms of data out of a model, so that they can be compared to experimental data. PRIMs Teacher: Niels Taatgen (University of Groningen) Website: http://www.ai.rug.nl/~niels/actransfer.html How do people handle and prioritize multiple tasks? How can we learn something in the context of one task and partially benefit from it in another task? The goal of PRIMs is to cross the artificial boundary that most cognitive architectures have imposed on themselves by studying single tasks. It has mechanisms to model transfer of cognitive skills and the competition between multiple goals.
In the tutorial we will look at how PRIMs can model phenomena of cognitive transfer and cognitive training, and how multiple goals compete for priority in models of distraction. Accumulator Models Teachers: Marieke van Vugt, Don van Ravenzwaaij (University of Groningen), & Martijn Mulder (University of Amsterdam) Decisions can be described in terms of a process of evidence accumulation, modeled with a drift diffusion mechanism. The advantage of redescribing behavioral data with an accumulator model is that the data can be decomposed into more easily interpretable cognitive mechanisms such as speed-accuracy trade-off or quality of attention. In this course, you will learn about the basic mechanisms of drift diffusion models and apply them to your own dataset (if you bring one). You will also see some applications of accumulator models in the context of neuroscience and individual differences. ------------------------------------------ Katja Mehlhorn, Lecturer University of Groningen http://www.ai.rug.nl/~katja/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: groningen_springschool.pdf Type: application/pdf Size: 223874 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From db30 at andrew.cmu.edu Wed Dec 21 10:18:55 2016 From: db30 at andrew.cmu.edu (db30 at andrew.cmu.edu) Date: Wed, 21 Dec 2016 10:18:55 -0500 Subject: [ACT-R-users] New ACT-R software release available Message-ID: <451C49CDDB5DF6374FC2A287@actr6b.psy.cmu.edu> A new version of the ACT-R 7 software is now available from the ACT-R website: . The current version is now 7.3-<2102:2016-12-20>. A few of the notable changes are listed below. More details can be found in the commit log on the ACT-R website at: , but only the changes in the actr7 branch are relevant to the current software. Production compilation has been updated to better handle buffers which are strict harvested. It is now safer because it will not compose productions which have a stuffed chunk between them for goal and imaginal type buffers in cases where strict harvesting was the reason the buffer cleared in the first production. It also produces better composed productions in situations where strict harvesting clears the buffer and the second production tests that it is empty. Previously the 'buffer empty' query from the second production was added to the composed production, but that prevents the composed production from competing with the first parent. Now that query is dropped from the new production. The mod-focus command now makes the modification during the goal-modification event instead of directly. This allows one to use goal-focus followed by a mod-focus without it complaining that the buffer is empty. Added an extra which implements a modification to the activation calculation for declarative memory. The purpose of the modification is to decrease the effect of noise on the activation of chunks as their activation increases with practice. The extras/adaptive-noise directory contains the code which adds a parameter called :uan (use adaptive-noise) that can be set to t to enable the mechanism. The documentation included with the extra describes how it works, and there are some sample graphs of the results of some very simple test cases showing the change when :uan is enabled. If you have any questions or problems with the new version please let me know.
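For anyone who wants to try it: once the code from the extras/adaptive-noise directory has been loaded, turning the mechanism on is just a matter of setting the new parameter along with the usual activation noise. A minimal illustration (the model name and parameter values below are arbitrary examples, not recommendations):

(define-model adaptive-noise-demo
  (sgp :esc t        ; enable the subsymbolic computations
       :ans 0.25     ; the usual transient activation noise s (example value)
       :uan t))      ; use adaptive noise, the parameter added by the extra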
Dan From wkennedy at gmu.edu Wed Dec 21 12:10:52 2016 From: wkennedy at gmu.edu (William G Kennedy) Date: Wed, 21 Dec 2016 17:10:52 +0000 Subject: [ACT-R-users] ACT-R-users Digest, Vol 45, Issue 2 In-Reply-To: References: Message-ID: Dan, Memorable date to release a new version. I hope all's well with you and the program. I will be teaching a cognitive modeling with ACT-R course next spring. May I request the tutorial answer file? (No immediate rush.) I'm familiar with the restrictions on letting it leak out. Thanks and have a wonderful break! // Bill On 12/21/16, 12:00 PM, "ACT-R-users on behalf of act-r-users-request at actr-server.hpc1.cs.cmu.edu" wrote: >Send ACT-R-users mailing list submissions to > act-r-users at act-r.psy.cmu.edu > >To subscribe or unsubscribe via the World Wide Web, visit > https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users >or, via email, send a message with subject or body 'help' to > act-r-users-request at act-r.psy.cmu.edu > >You can reach the person managing the list at > act-r-users-owner at act-r.psy.cmu.edu > >When replying, please edit your Subject line so it is more specific >than "Re: Contents of ACT-R-users digest..." > > >Today's Topics: > > 1. New ACT-R software release available (db30 at andrew.cmu.edu) > > >---------------------------------------------------------------------- > >Message: 1 >Date: Wed, 21 Dec 2016 10:18:55 -0500 >From: db30 at andrew.cmu.edu >To: act-r-users at ACTR-SERVER.HPC1.CS.cmu.edu >Subject: [ACT-R-users] New ACT-R software release available >Message-ID: <451C49CDDB5DF6374FC2A287 at actr6b.psy.cmu.edu> >Content-Type: text/plain; charset=us-ascii; FORMAT=flowed > > >A new version of the ACT-R 7 software is now available from the >ACT-R website: . The current >version is now 7.3-<2102:2016-12-20>. > >A few of the notable changes are listed below. More details >can be found in the commit log found on the ACT-R website at: >, but only the changes >in the actr7 branch are relevant to the current software. > >Production compilation has been updated to better handle buffers >which are strict harvested. It is now safer because it will not >compose productions which have a stuffed chunk between them for >goal and imaginal type buffers in cases where strict harvesting >was the reason the buffer cleared in the first production. It >also produces better composed productions in situations where >strict harvesting clears the buffer and the second production >tests that it is empty. Previously the 'buffer empty' query from >the second production was added to the composed production, but >that prevents the composed production from competing with the >first parent. Now that query is dropped from the new production. > >The mod-focus command now makes the modification during the >goal-modification event instead of directly. This allows one to use >goal-focus followed by a mod-focus without it complaining that the >buffer is empty. > >Added an extra which implements a modification to the activation >calculation for declarative memory. The purpose of the modification >is to decrease the effect of noise on the activation of chunks as >their activation increases with practice. The extras/adaptive-noise >directory contains the code which adds a parameter called :uan >(use adaptive-noise) that can be set to t to enable the mechanism. >The documentation included with the extra describes how it works, >and there are some sample graphs of the results of some very simple >test cases showing the change when :uan is enabled.
> > >If you have any questions or problems with the new version please let >me know. > >Dan > > >------------------------------ > >Subject: Digest Footer > >_______________________________________________ >ACT-R-users mailing list >ACT-R-users at act-r.psy.cmu.edu >https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users > > >------------------------------ > >End of ACT-R-users Digest, Vol 45, Issue 2 >****************************************** From kevin.gluck at us.af.mil Thu Dec 22 13:28:07 2016 From: kevin.gluck at us.af.mil (GLUCK, KEVIN A DR-04 USAF AFMC 711 HPW/RHAC) Date: Thu, 22 Dec 2016 18:28:07 +0000 Subject: [ACT-R-users] software engineer position available Message-ID: <358C5929AD0E82468006A37A29D5811B60158E49@52ZHTX-D06-03C.area52.afnoapps.usaf.mil> The position at the link below is a contract position through Leidos, working on an exciting new line of applied research and development with our Cognitive Models and Agents branch at Wright-Patterson AFB. If you have the required knowledge, skills, experience, and interest, please apply at their website. If you know someone else who does, please forward for their awareness. Best regards and happy holidays. - Kevin - - - - - - - - - - - Kevin Gluck, PhD Principal Cognitive Scientist From kevin.gluck at us.af.mil Thu Dec 22 14:50:12 2016 From: kevin.gluck at us.af.mil (GLUCK, KEVIN A DR-04 USAF AFMC 711 HPW/RHAC) Date: Thu, 22 Dec 2016 19:50:12 +0000 Subject: [ACT-R-users] software engineer position available Message-ID: <358C5929AD0E82468006A37A29D5811B60158F25@52ZHTX-D06-03C.area52.afnoapps.usaf.mil> Sorry, all. Resending. This time with the link included! http://jobs.leidos.com/ShowJob/Id/989233/Software-Engineer/ -----Original Message----- From: GLUCK, KEVIN A DR-04 USAF AFMC 711 HPW/RHAC Sent: Thursday, December 22, 2016 1:28 PM To: ACT-R Users (act-r-users at act-r.psy.cmu.edu) ; Soar Users (soar-group at lists.sourceforge.net) Subject: software engineer position available The position at the link below is a contract position through Leidos, working on an exciting new line of applied research and development with our Cognitive Models and Agents branch at Wright-Patterson AFB. If you have the required knowledge, skills, experience, and interest, please apply at their website. If you know someone else who does, please forward for their awareness. Best regards and happy holidays. - Kevin - - - - - - - - - - - Kevin Gluck, PhD Principal Cognitive Scientist From tabuwalda at gmail.com Thu Dec 22 15:00:56 2016 From: tabuwalda at gmail.com (Trudy Buwalda) Date: Thu, 22 Dec 2016 21:00:56 +0100 Subject: [ACT-R-users] software engineer position available In-Reply-To: <358C5929AD0E82468006A37A29D5811B60158F25@52ZHTX-D06-03C.area52.afnoapps.usaf.mil> References: <358C5929AD0E82468006A37A29D5811B60158F25@52ZHTX-D06-03C.area52.afnoapps.usaf.mil> Message-ID: - Trudy On Thu, Dec 22, 2016 at 8:50 PM, GLUCK, KEVIN A DR-04 USAF AFMC 711 HPW/RHAC wrote: > Sorry, all. > > Resending. This time with the link included! > > http://jobs.leidos.com/ShowJob/Id/989233/Software-Engineer/ > > -----Original Message----- > From: GLUCK, KEVIN A DR-04 USAF AFMC 711 HPW/RHAC > Sent: Thursday, December 22, 2016 1:28 PM > To: ACT-R Users (act-r-users at act-r.psy.cmu.edu) < > act-r-users at act-r.psy.cmu.edu>; Soar Users (soar-group at lists.sourceforge. 
> net) > Subject: software engineer position available > > The position at the link below is a contract position through Leidos, > working on an exciting new line of applied research and development with > our Cognitive Models and Agents branch at Wright-Patterson AFB. If you > have the required knowledge, skills, experience, and interest, please apply > at their website. If you know someone else who does, please forward for > their awareness. > > Best regards and happy holidays. > > - Kevin > > - - - - - - - - - - - > Kevin Gluck, PhD > Principal Cognitive Scientist > > > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ion.juvina at wright.edu Tue Dec 27 11:22:26 2016 From: ion.juvina at wright.edu (Ion Juvina) Date: Tue, 27 Dec 2016 11:22:26 -0500 Subject: [ACT-R-users] New ACT-R software release available In-Reply-To: <451C49CDDB5DF6374FC2A287@actr6b.psy.cmu.edu> References: <451C49CDDB5DF6374FC2A287@actr6b.psy.cmu.edu> Message-ID: <8268198F-D2FD-4147-A07A-361EC3782B70@wright.edu> Hi Dan, Thanks for the update. I have a question: what is the theoretical rationale for adding the adaptive noise to declarative memories? As I understand it now, I think this mechanism is (at best) unnecessary. It is unnecessary because the effect of noise DOES already decrease with practice. Practice increases the difference in activation between relevant and irrelevant chunks, which decreases the confusion between the two kinds of chunks at retrieval. If noise is set in the appropriate range, its effect will diminish and approach zero with practice. Adding an explicit parameter to achieve something that emerges from existing mechanisms may even be detrimental considering the general recommendation to keep the number of parameters to a minimum. I may be wrong, which is why I want to hear the rationale and the data that would suggest such an addition to the architecture. I'd appreciate comments from the community as well. Thanks, ~ ion > On Dec 21, 2016, at 10:18 AM, db30 at andrew.cmu.edu wrote: > > > A new version of the ACT-R 7 software is now available from the > ACT-R website: . The current > version is now 7.3-<2102:2016-12-20>. > > A few of the notable changes are listed below. More details > can be found in the commit log found on the ACT-R website at: > , but only the changes > in the actr7 branch are relevant to the current software. > > Production compilation has been updated to better handle buffers > which are strict harvested. It is now safer because it will not > compose productions which have a stuffed chunk between them for > goal and imaginal type buffers in cases where strict harvesting > was the reason the buffer cleared in the first production. It > also produces better composed productions in situations where > strict harvesting clears the buffer and the second production > tests that it is empty. Previously the 'buffer empty' query from > the second production was added to the composed production, but > that prevents the composed production from competing with the > first parent. Now that query is dropped from the new production. > > The mod-focus command now makes the modification during the > goal-modification event instead of directly. This allows one to use > goal-focus followed by a mod-focus without it complaining that the > buffer is empty.
> > Added an extra which implements a modification to the activation > calculation for declarative memory. The purpose of the modification > is to decrease the effect of noise on the activation of chunks as > their activation increases with practice. The extras/adaptive-noise > directory contains the code which adds a parameter called :uan > (use adaptive-noise) that can be set to t to enable the mechanism. > The documentation included with the extra describes how it works, > and there are some sample graphs of the results of some very simple > test cases showing the change when :uan is enabled. > > > If you have any questions or problems with the new version please let > me know. > > Dan > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > https://urldefense.proofpoint.com/v2/url?u=https-3A__mailman.srv.cs.cmu.edu_mailman_listinfo_act-2Dr-2Dusers&d=CwICAg&c=3buyMx9JlH1z22L_G5pM28wz_Ru6WjhVHwo-vpeS0Gk&r=vtA7YXGBFnFocyRUneK5pkvCTPZEpjEwzP4TURDawmo&m=wAdy6WChw5TkZMFPkFE8jcYsmZfwF2YmsHAsT47QsHc&s=RnQ_nPNyv_IVaNhYKHaMOXOzX4Bs7E8yUgh1ikQdv8M&e= From cl at cmu.edu Tue Dec 27 17:00:11 2016 From: cl at cmu.edu (Christian Lebiere) Date: Tue, 27 Dec 2016 17:00:11 -0500 Subject: [ACT-R-users] New ACT-R software release available In-Reply-To: <8268198F-D2FD-4147-A07A-361EC3782B70@wright.edu> References: <451C49CDDB5DF6374FC2A287@actr6b.psy.cmu.edu> <8268198F-D2FD-4147-A07A-361EC3782B70@wright.edu> Message-ID: CMU is closed for the Holidays and it might be a while before Dan replies, so I will take a stab... The new mechanism came out of conversations between Dan and me which go back to my thesis work on a lifetime model of cognitive arithmetic (short version here and long version here ). It turns out that, under most realistic assumptions, practice does *not* increase the difference in activation between relevant and irrelevant chunks. If you assume that the relative frequency of knowledge does not substantially change over time (e.g., the *relative* frequency of correct facts such as 3+4=7 vs 3+5=8 remains the same as fixed by the environment) then the difference in activation between the two chunks remains the same even though both become more active with practice. That means that the probability of commission errors (mistakenly retrieving one for the other) remains constant, which is clearly not in line with the longitudinal data. My thesis mentioned the theoretical argument that if the noise were to decrease as a function of the log of practice then the odds of commission errors would decrease as a power law of practice, which is roughly what the data indicate. That is not the only possible solution to the conundrum. The model actually used associative learning to achieve that effect, and while it sort of worked it was also brittle and exhibited a number of quirks. Besides, the mechanism is now deprecated, so if you have a new candidate mechanism it would be interesting to see if it works better. Another possibility would be if the matching penalty increased with practice, but there are some issues with getting that to work in the right conditions. Note that the idea of getting the noise to decrease with practice has interesting connotations. From a rational analysis point of view, decreasing the noise as a log of practice has been shown in some settings (e.g., simulated annealing in Boltzmann networks) to yield an optimal schedule for exploration-exploitation tradeoff.
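To spell the argument out with the standard equations (a rough sketch that ignores spreading activation and partial matching; s is the :ans noise scale and d the base-level decay): with n_i references spread over a lifetime T, the base level is approximately B_i = \ln \sum_k t_k^{-d} \approx \ln n_i - d \ln T + \mathrm{const}, so if a fact and its competitor keep being encountered in a fixed ratio, the difference B_i - B_j \approx \ln(n_i / n_j) stays constant no matter how much total practice accumulates. With logistic noise of scale s added to each activation, the odds of a commission error behave roughly as \exp(-(B_i - B_j)/t) with t proportional to s, so a constant difference and a constant s give constant error odds. If instead s shrinks with practice so that 1/t grows like \ln N, with N the amount of practice, those odds become roughly \exp(-c (B_i - B_j) \ln N) = N^{-c (B_i - B_j)}, i.e., a power-law decline, which is the pattern the longitudinal data call for.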
From a neural point of view, an obvious interpretation would consist of holding the noise magnitude constant while the size of the weights encoding the memory grows with the amount of practice (e.g., using superposition learning yielding random walk-type dynamics). On the latter point, an alternative ACT-R implementation would consist of moving the noise term from a separate additive term in the activation equation on a par with history (bll) and matching (al and pm) terms to an additive term inside the log of the base-level history term. That way, the log function applied to the sum over past references would quash it as required. As I recall, Dan thought that it was a more disruptive change and decided to include the new noise as a separate term instead at this initial stage. Its parametrization and inclusion as an extra rather than as a default was meant to let the community experiment with it and decide whether it made sense before considering further deployment. Happy holidays everyone. Christian On Tue, Dec 27, 2016 at 11:22 AM, Ion Juvina wrote: > Hi Dan, > > Thanks for the update. > > I have a question: what is the theoretical rationale for adding the > adaptive noise to declarative memories? > > As I understand it now, I think this mechanism is (at best) unnecessary. > > It is unnecessary because the effect of noise DOES already decrease with > practice. Practice increases the difference in activation between relevant > and irrelevant chunks, which decreases the confusion between the two kinds > of chunks at retrieval. If noise is set in the appropriate range, its > effect will diminish and approach zero with practice. > > Adding an explicit parameter to achieve something that emerges from > existing mechanisms may even be detrimental considering the general > recommendation to keep the number of parameters to a minimum. > > I may be wrong, which is why I want to hear the rationale and the data > that would suggest such an addition to the architecture. > > I'd appreciate comments from the community as well. > > Thanks, > ~ ion > > > > > On Dec 21, 2016, at 10:18 AM, db30 at andrew.cmu.edu wrote: > > > > > > A new version of the ACT-R 7 software is now available from the > > ACT-R website: <https://act-r.psy.cmu.edu/software/>. The current > > version is now 7.3-<2102:2016-12-20>. > > > > A few of the notable changes are listed below. More details > > can be found in the commit log found on the ACT-R website at: > > <https://act-r.psy.cmu.edu/log/actr6log.txt>, but only the changes > > in the actr7 branch are relevant to the current software. > > > > Production compilation has been updated to better handle buffers > > which are strict harvested. It is now safer because it will not > > compose productions which have a stuffed chunk between them for > > goal and imaginal type buffers in cases where strict harvesting > > was the reason the buffer cleared in the first production. It > > also produces better composed productions in situations where > > strict harvesting clears the buffer and the second production > > tests that it is empty.
Previously the 'buffer empty' query from > > the second production was added to the composed production, but > > that prevents the composed production from competing with the > > first parent. Now that query is dropped from the new production. > > > > The mod-focus command now makes the modification during the > > goal-modification event instead of directly. This allows one to use > > goal-focus followed by a mod-focus without it complaining that the > > buffer is empty. > > > > Added an extra which implements a modification to the activation > > calculation for declarative memory. The purpose of the modification > > is to decrease the effect of noise on the activation of chunks as > > their activation increases with practice. The extras/adaptive-noise > > directory contains the code which adds a parameter called :uan > > (use adaptive-noise) that can be set to t to enable the mechanism. > > The documentation included with the extra describes how it works, > > and there are some sample graphs of the results of some very simple > > test cases showing the change when :uan is enabled. > > > > > > If you have any questions or problems with the new version please let > > me know. > > > > Dan > > _______________________________________________ > > ACT-R-users mailing list > > ACT-R-users at act-r.psy.cmu.edu > > https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users > > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From i.milovanovic at uu.nl Wed Dec 28 12:04:37 2016 From: i.milovanovic at uu.nl (Milovanovic, I. (Ivica)) Date: Wed, 28 Dec 2016 17:04:37 +0000 Subject: [ACT-R-users] Visual vs. Imaginal Module Message-ID: Hello everyone, I'm currently writing a paper about a high-level language for writing ACT-R models. What confuses me is the role of visual (or other perceptual) and imaginal modules when encoding tasks. The ACT-R User Manual seems clear: "The basic assumption behind the vision module is that the chunks placed into the visual buffer as a result of an attention operation are episodic representations of the objects in the visual scene. Thus, a chunk with the value "3" represents a memory of the character "3" available via the eyes, not the semantic THREE used in arithmetic - a declarative retrieval would be necessary to make that mapping." If I understood well, this means that there should be a chunk in declarative memory, e.g. '(three ISA whatever type number visual-value "3")', representing the symbol of the number 3. Then there may be a general production such as 'if there is a chunk with value =n in the visual buffer, retrieve a chunk with visual-value =n'. Another production may harvest the retrieved chunk and send a request to the imaginal buffer to modify (or create, if there is nothing) its chunk by adding a slot with the retrieved value, e.g. 'three'. Finally, there may be a production that, for example, reads the value 'three' from the imaginal buffer and sends a retrieval request for its square, if there is a goal to square a number.
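In ACT-R syntax, the chain I have in mind would look roughly like the following (the chunk-type, slot, and state names are my own, purely for illustration, not taken from any existing model):

(chunk-type number-symbol visual-value)        ; hypothetical semantic chunk type
(chunk-type square-fact arg result)            ; hypothetical arithmetic fact type
(chunk-type encode-goal state task)            ; hypothetical goal type
(add-dm (three isa number-symbol visual-value "3")
        (square-of-three isa square-fact arg three result nine))

(p map-symbol-to-number                        ; visual chunk -> retrieval request
   =goal>
     state        encode
   =visual>
     value        =sym
   ?retrieval>
     state        free
  ==>
   +retrieval>
     visual-value =sym
   =goal>
     state        mapping)

(p place-number-in-imaginal                    ; harvested chunk -> imaginal
   =goal>
     state        mapping
   =retrieval>
   ?imaginal>
     state        free
  ==>
   +imaginal>
     operand      =retrieval
   =goal>
     state        encoded)

(p request-square                              ; imaginal content -> new retrieval
   =goal>
     task         square
     state        encoded
   =imaginal>
     operand      =num
   ?retrieval>
     state        free
  ==>
   +retrieval>
     isa          square-fact
     arg          =num)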
However, in "Human Symbol Manipulation Within an Integrated Cognitive Architecture", Anderson writes about the visual module "holding the representation of an equation such as 3x - 5 = 7" while the imaginal module "holds a current mental representation of the problem, e.g. 3x = 12". The latter is clear, but the former seems to contradict the above quote from the manual saying that the visual module can only hold "episodic representations of the objects in the visual scene". Later in the paper he continues attributing the entire encoding process to the visual module only: "Visual: On both days four encoding operations take place, which each take 300 msec. Each encoding has the resolution to pick up two terms in the expression. Therefore, the first encodes Exp = 38, where Exp denotes what cannot be analyzed. The second analyzes this into Exp + 3, the third into 5 * Exp, and the final encodes the x." I can't see how anything more than a single word (as defined by 'add-word-characters') can be in the visual buffer at a time. Consequently, I can't see how to recreate the equation-solving model from Anderson's paper without utilising the imaginal module for encoding in addition to the visual one. Could someone please clarify this? Also, how can the activities of the visual and imaginal modules be distinguished in fMRI experiments, as both modules seem to be mapped to the parietal region? All the best and happy holidays, Ivica Milovanovic PhD Candidate Utrecht University, Netherlands From ja0s at andrew.cmu.edu Wed Dec 28 13:01:13 2016 From: ja0s at andrew.cmu.edu (john) Date: Wed, 28 Dec 2016 13:01:13 -0500 Subject: [ACT-R-users] Visual vs. Imaginal Module In-Reply-To: References: Message-ID: Ivica: I would go with the manual which is presumably current. I took a look at the old model from 2005, which did not make predictions for visual activity, and I find that I created my own special purpose module for handling visual equations. If you would like to see a current model that does use the current ACT-R visual module, processes equation-like material, and makes predictions for fusiform activity, I would suggest the following two papers which use essentially the same model -- and those models are available with the supplementary material at the web sites: Anderson, J. R. & Fincham, J. M. (2014). Extending Problem-Solving Procedures. /Cognitive Psychology/, Nov;74:1-31. http://act-r.psy.cmu.edu/?post_type=publications&p=16145 Tenison, C., Fincham, J.M., & Anderson, J.R. (2016). Phases of Learning: How Skill Acquisition Impacts Cognitive Processing. Cognitive Psychology, 87, 1-28. http://act-r.psy.cmu.edu/?post_type=publications&p=19047 On 12/28/16 12:04 PM, Milovanovic, I. (Ivica) wrote: > Hello everyone, > > I'm currently writing a paper about a high-level language for writing ACT-R models. What confuses me is the role of visual (or other perceptual) and imaginal modules when encoding tasks. The ACT-R User Manual seems clear: > > "The basic assumption behind the vision module is that the chunks placed into the visual buffer as a result of an attention operation are episodic representations of the objects in the visual scene. Thus, a chunk with the value "3" represents a memory of the character "3" available via the eyes, not the semantic THREE used in arithmetic - a declarative retrieval would be necessary to make that mapping." > > If I understood well, this means that there should be a chunk in the declarative memory, e.g. '(three ISA whatever type number visual-value "3")', representing symbol of the number 3.
Then there may be a general production such as 'if there is a chunk with value =n in the visual buffer, retrieve a chunk with visual-value =n'. Another production may harvest retrieved chunk and send a request to imaginal buffer to modify (or create if there is nothing) its chunk by adding a slot with retrieved value, e.g. 'three'. Finally, there may be a production that, for example, reads value 'three' from the imaginal buffer and sends a retrieval request for its square, if there is a goal to square a number. > > However, in "Human Symbol Manipulation Within an Integrated Cognitive Architecture", Anderson writes about visual module "holding the representation of an equation such as 3x - 5 = 7" while imaginal module "holds a current mental representation of the problem, e.g. 3x = 12". The latter is clear, but the former seems to contradict the above quote from the manual saying that visual module can only hold "episodic representations of the objects in the visual scene". Later in the paper he continues attributing the entire encoding process to the visual module only: > > "Visual: On both days four encoding operations take place, which each take 300 msec. Each encoding has the resolution to pick up two terms in the expression. Therefore, the first encodes Exp = 38, where Exp denotes what cannot be analyzed. The second analyzes this into Exp + 3, the third into 5 * Exp, and the final encodes the x." > > I can't see how anything more than a single word (as defined by 'add-word-characters') can be in the visual buffer at a time. Consequently, I can't see how to recreate the equation solving model from the Anderson's paper without utilising imaginal module for encoding, besides visual. > > Could someone please clarify this? Also, how can the activities of visual and imaginal modules be distinguished in fMRI experiments, as both modules seem to be mapped to the parietal region? > > All the best and happy holidays, > > > Ivica Milovanovic > PhD Candidate > Utrecht University, Netherlands > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users -- John R. Anderson Richard King Mellon Professor of Psychology and Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Office: Baker Hall 345D Phone: 412-417-7008 Fax: 412-268-2844 email: ja at cmu.edu URL: http://act.psy.cmu.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From i.milovanovic at uu.nl Wed Dec 28 17:06:01 2016 From: i.milovanovic at uu.nl (Milovanovic, I. (Ivica)) Date: Wed, 28 Dec 2016 22:06:01 +0000 Subject: [ACT-R-users] Visual vs. Imaginal Module In-Reply-To: References: Message-ID: John, Thank you for the quick answer! Will the metacognitive module, introduced in one of the papers you linked, be included in a future version of ACT-R? All the best, Ivica Milovanovic PhD Candidate Utrecht University, Netherlands On 28 Dec 2016, at 19:01, john > wrote: Ivica: I would go with the manual which is presumably current. I took a look at the old model from 2005, which did not make predictions for visual activity, and I find that I created my own special purpose module for handling visual equations.
If you would like to see a current model that does use the current ACT-R visual module, processes equation-like material, and makes predictions for fusiform activity, I would suggest the following two papers which use essentially the same model -- and those models are available with the supplementary material at the web sites: Anderson, J. R. & Fincham, J. M. (2014). Extending Problem-Solving Procedures. Cognitive Psychology, Nov;74:1-31. http://act-r.psy.cmu.edu/?post_type=publications&p=16145 Tenison, C., Fincham, J.M., & Anderson, J.R. (2016). Phases of Learning: How Skill Acquisition Impacts Cognitive Processing. Cognitive Psychology, 87, 1-28. http://act-r.psy.cmu.edu/?post_type=publications&p=19047 On 12/28/16 12:04 PM, Milovanovic, I. (Ivica) wrote: Hello everyone, I'm currently writing a paper about a high-level language for writing ACT-R models. What confuses me is the role of visual (or other perceptual) and imaginal modules when encoding tasks. The ACT-R User Manual seems clear: "The basic assumption behind the vision module is that the chunks placed into the visual buffer as a result of an attention operation are episodic representations of the objects in the visual scene. Thus, a chunk with the value "3" represents a memory of the character "3" available via the eyes, not the semantic THREE used in arithmetic - a declarative retrieval would be necessary to make that mapping." If I understood well, this means that there should be a chunk in the declarative memory, e.g. '(three ISA whatever type number visual-value "3")', representing symbol of the number 3. Then there may be a general production such as 'if there is a chunk with value =n in the visual buffer, retrieve a chunk with visual-value =n'. Another production may harvest retrieved chunk and send a request to imaginal buffer to modify (or create if there is nothing) its chunk by adding a slot with retrieved value, e.g. 'three'. Finally, there may be a production that, for example, reads value 'three' from the imaginal buffer and sends a retrieval request for its square, if there is a goal to square a number. However, in "Human Symbol Manipulation Within an Integrated Cognitive Architecture", Anderson writes about visual module "holding the representation of an equation such as 3x - 5 = 7" while imaginal module "holds a current mental representation of the problem, e.g. 3x = 12". The latter is clear, but the former seems to contradict the above quote from the manual saying that visual module can only hold "episodic representations of the objects in the visual scene". Later in the paper he continues attributing the entire encoding process to the visual module only: "Visual: On both days four encoding operations take place, which each take 300 msec. Each encoding has the resolution to pick up two terms in the expression. Therefore, the first encodes Exp = 38, where Exp denotes what cannot be analyzed. The second analyzes this into Exp + 3, the third into 5 * Exp, and the final encodes the x." I can't see how anything more than a single word (as defined by 'add-word-characters') can be in the visual buffer at a time. Consequently, I can't see how to recreate the equation solving model from the Anderson's paper without utilising imaginal module for encoding, besides visual. Could someone please clarify this? Also, how can the activities of visual and imaginal modules be distinguished in fMRI experiments, as both modules seem to be mapped to the parietal region?
All the best and happy holidays, Ivica Milovanovic PhD Candidate Utrecht University, Netherlands _______________________________________________ ACT-R-users mailing list ACT-R-users at act-r.psy.cmu.edu https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users -- John R. Anderson Richard King Mellon Professor of Psychology and Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Office: Baker Hall 345D Phone: 412-417-7008 Fax: 412-268-2844 email: ja at cmu.edu URL: http://act.psy.cmu.edu/ _______________________________________________ ACT-R-users mailing list ACT-R-users at act-r.psy.cmu.edu https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users -------------- next part -------------- An HTML attachment was scrubbed... URL: