<html><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div>Hello, </div><div> </div>I also used the audition module of ACT-R to model rehearsal based on phonological loop in N-Back. <div><br></div><div>this paper describes the model: </div><div><br></div><div><span class="Apple-style-span" style="font-family: 'Times New Roman'; font-size: 16px; ">Juvina, I., & Taatgen, N. A. (2007). <i>Modeling control strategies in the N-Back task.</i><span style="font-style: normal; "> Proceedings of the eight International Conference on Cognitive Modeling (pp. 73-78). New York: Psychology Press.</span></span></div><div><br></div><div>the code of the model is available to download from my webpage: <a href="http://www.contrib.andrew.cmu.edu/~ijuvina/Publications.htm">http://www.contrib.andrew.cmu.edu/~ijuvina/Publications.htm</a></div><div><br></div><div><br></div><div>~ ion</div><div> </div><div><div><br><div apple-content-edited="true"> <span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div><div><div><div>Ion Juvina, PhD</div><div>Research Fellow </div><div>Department of Psychology </div>Baker Hall 336A<br><div>Carnegie Mellon University </div><div>5000 Forbes Ave.</div><div>Pittsburgh, PA 15213</div><div><br></div><div>telephone:<span class="Apple-converted-space"> </span><span class="Apple-tab-span" style="white-space: pre; "> </span>412-268-2837</div><div>email:<span class="Apple-converted-space"> </span><span class="Apple-tab-span" style="white-space: pre; "> </span><a href="mailto:ijuvina@cmu.edu">ijuvina@cmu.edu</a></div><div>webpage:<span class="Apple-tab-span" style="white-space: pre; "> </span><a href="http://www.andrew.cmu.edu/user/ijuvina/index.htm">http://www.andrew.cmu.edu/user/ijuvina/index.htm</a></div><div><br></div><div>Download my most recent article at: <a href="http://dx.doi.org/10.1016/j.actpsy.2009.03.002">http://dx.doi.org/10.1016/j.actpsy.2009.03.002</a></div><div>or email me for a free reprint </div></div></div></div></div></span></div></span></div></span> </div><br><div><div>On Aug 11, 2009, at 4:10 PM, Richard M Young wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><div>Hello Bonnie,<br><br>Some years ago, following on from David Huss's work with Mike Byrne, <br>Martin Greaves and I played around with simple models using the <br>audicon to deal with lists of isolated words in a short-term memory <br>experiment. At least in our hands, the audicon was assumed to hold <br>"words" in some unspecified phonemic/phonological/articulatory code, <br>which therefore required an access to DM/LTM in order to retrieve the <br>word as a lexical item.<br><br>As I remember it -- which is not well -- we found it a bit clunky to <br>use the audicon to deal with ephemeral material spread out in time. <br>With no disrespect to those who had worked on it, I think it's fair <br>to say that not much thought had been given to the underlying design <br>of a transducer dealing with speech sounds. For example, I *think* <br>we had to add an END signal to indicate the silence following the <br>last item in a list.<br><br>Martin later build partial models of the running memory span task <br>using a similar approach, and included them in his PhD thesis. He <br>may be able to add to what I'm saying here.<br><br>Word recognition in speech is a well-studied area in cognitive <br>psychology, and there are some good (ad hoc) models around, and I <br>believe a reasonable amount of consensus. It had crossed my mind <br>from time to time that it would be an interesting area to bring into <br>contact with Act-R. There are of course Act-R models of the lexical <br>retrieval stage itself.<br><br>Good luck!<br><br>~ Richard<br><br>At 11:18 -0400 11/8/09, Bonnie John wrote:<br><blockquote type="cite">Folks,<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Anyone have an models using ACT-R's Audition Module to listen to and<br></blockquote><blockquote type="cite">comprehend continuous speech?<br></blockquote><blockquote type="cite">If not continuous speech, how about short phrases or anything other than<br></blockquote><blockquote type="cite">tones?<br></blockquote><blockquote type="cite">Any experience using the Audition Module at all?<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">I'm asking because I'd like to make models of ACT-R using the JAWS<br></blockquote><blockquote type="cite">screen reader to navigate a web site and compare it to ACT-R models of<br></blockquote><blockquote type="cite">visually navigating a web site. I'd like to read papers or speak with<br></blockquote><blockquote type="cite">someone who has had experience with the Audition Module to get a<br></blockquote><blockquote type="cite">heads-up on peaks and pitfalls.<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">I did browse the Publications page, but could only find papers that use<br></blockquote><blockquote type="cite">vision (but I could have missed the listening ones - apologies if I did).<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Thanks,<br></blockquote><blockquote type="cite">Bonnie<br></blockquote>_______________________________________________<br>ACT-R-users mailing list<br><a href="mailto:ACT-R-users@act-r.psy.cmu.edu">ACT-R-users@act-r.psy.cmu.edu</a><br>http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users<br><br></div></blockquote></div><br></div></div></body></html>