[ACT-R-users] Visual vs. Imaginal Module

Milovanovic, I. (Ivica) i.milovanovic at uu.nl
Wed Dec 28 17:06:01 EST 2016


John,

Thank you for the quick answer!

Will the metacognitive module, introduced in one of the papers you linked, be included in a future version of ACT-R?

All the best,

Ivica Milovanovic
PhD Candidate
Utrecht University, Netherlands

On 28 Dec 2016, at 19:01, john <ja0s at andrew.cmu.edu> wrote:


Ivica:

I would go with the manual, which is presumably current.  I took a look at the old model from 2005, which did not make predictions for visual activity, and I found that I had created my own special-purpose module for handling visual equations.

If you would like to see a current model that does use the current ACT-R visual module, processes equation-like material, and makes predictions for fusiform activity, I would suggest the following two papers, which use essentially the same model -- and those models are available with the supplementary material at the web sites:

Anderson, J. R. & Fincham, J. M. (2014). Extending Problem-Solving Procedures. Cognitive Psychology, 74, 1-31.
http://act-r.psy.cmu.edu/?post_type=publications&p=16145

Tenison, C., Fincham, J.M., & Anderson, J.R. (2016). Phases of Learning: How Skill Acquisition Impacts Cognitive Processing. Cognitive Psychology, 87, 1-28.
http://act-r.psy.cmu.edu/?post_type=publications&p=19047


On 12/28/16 12:04 PM, Milovanovic, I. (Ivica) wrote:

Hello everyone,

I’m currently writing a paper about a high-level language for writing ACT-R models. What confuses me is the role of the visual (or other perceptual) and imaginal modules when encoding tasks. The ACT-R User Manual seems clear:

"The basic assumption behind the vision module is that the chunks placed into the visual buffer as a result of an attention operation are episodic representations of the objects in the visual scene. Thus, a chunk with the value "3" represents a memory of the character "3" available via the eyes, not the semantic THREE used in arithmetic—a declarative retrieval would be necessary to make that mapping."

If I understood correctly, this means that there should be a chunk in declarative memory, e.g. ‘(three ISA whatever type number visual-value “3”)', representing the symbol of the number 3. Then there may be a general production such as ‘if there is a chunk with value =n in the visual buffer, retrieve a chunk with visual-value =n’. Another production may harvest the retrieved chunk and send a request to the imaginal buffer to modify its chunk (or create one if there is none) by adding a slot with the retrieved value, e.g. ’three’. Finally, there may be a production that, for example, reads the value ‘three' from the imaginal buffer and sends a retrieval request for its square, if there is a goal to square a number.
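To make that concrete, a minimal sketch of the first two steps in ACT-R model syntax might look like the following. The chunk-type names (number-fact), slot names, and production names here are my own illustrative assumptions, not taken from any published model:

```lisp
;; Declarative chunk mapping the visual character "3" to the semantic
;; number concept (chunk-type and slot names are illustrative assumptions)
(chunk-type number-fact type visual-value)
(add-dm
 (three ISA number-fact type number visual-value "3"))

;; 1. The episodic visual chunk only carries the character; a declarative
;;    retrieval is needed to map it to its meaning
(p map-symbol-to-meaning
   =visual>
     value =n
 ==>
   +retrieval>
     ISA number-fact
     visual-value =n)

;; 2. Harvest the retrieved semantic chunk and place it in the imaginal
;;    buffer as part of the current problem representation
(p encode-in-imaginal
   =retrieval>
     ISA number-fact
 ==>
   +imaginal>
     number =retrieval)
```

This is only a sketch of the reading you describe; whether the published models actually route encoding through a retrieval like this is exactly the question at issue.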

However, in “Human Symbol Manipulation Within an Integrated Cognitive Architecture”, Anderson writes about the visual module "holding the representation of an equation such as 3x – 5 = 7”, while the imaginal module "holds a current mental representation of the problem, e.g. 3x = 12”. The latter is clear, but the former seems to contradict the quote from the manual above, which says that the visual module can only hold “episodic representations of the objects in the visual scene". Later in the paper he continues to attribute the entire encoding process to the visual module alone:

“Visual: On both days four encoding operations take place, which each take 300 msec. Each encoding has the resolution to pick up two terms in the expression. Therefore, the first encodes Exp = 38, where Exp denotes what cannot be analyzed. The second analyzes this into Exp + 3, the third into 5 * Exp, and the final encodes the x."

I can’t see how anything more than a single word (as defined by 'add-word-characters’) can be in the visual buffer at a time. Consequently, I can’t see how to recreate the equation-solving model from Anderson’s paper without utilising the imaginal module for encoding in addition to the visual one.

Could someone please clarify this? Also, how can the activities of visual and imaginal modules be distinguished in fMRI experiments, as both modules seem to be mapped to the parietal region?

All the best and happy holidays,


Ivica Milovanovic
PhD Candidate
Utrecht University, Netherlands

_______________________________________________
ACT-R-users mailing list
ACT-R-users at act-r.psy.cmu.edu
https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users




--
John R. Anderson
Richard King Mellon Professor
of Psychology and Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Office: Baker Hall 345D
Phone: 412-417-7008
Fax:     412-268-2844
email: ja at cmu.edu
URL:  http://act.psy.cmu.edu/




