<html><head><base href="x-msg://171/"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">Jerry,<div><br></div><div>When developing a new, more detailed retrieval mechanism for ACT-R (called RACE/A, see also <a href="http://www.ai.rug.nl/~leendert/race">www.ai.rug.nl/~leendert/race</a>), we ran into similar issues. That is, we were interested in multilink spreading activation to compute the activation levels of multiple chunks *during* a memory retrieval. In addition, for our model of picture-word interference (Van Maanen & Van Rijn, 2007 CSR) we required the capability of spreading activation from a stimulus (a word or a picture) to multiple chunks, analogous to your airspeed/arispeed example. To achieve this, we used the retrieval-set-hook to compute activations and add-sji to manually set the spreading activation between the chunks. This is similar to what Dan suggested.</div><div><br></div><div>One of the virtues of our approach is that during retrieval of one chunk, other associated chunks (set with add-sji) will also increase in activation, allowing for the interactions between POS/letter information that you are interested in. One of the drawbacks is that computations will slow down tremendously, as you said. 
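To give a flavour of what that recalculation involves, here is a toy sketch (Python, not the actual RACE/A implementation or equations -- every chunk name and parameter value below is invented for illustration): on each time step activation decays and spreads over a hand-set association table playing the role of add-sji, so chunks two links away from the stimulus also rise.

```python
# Toy sketch of multilink spreading activation, in the spirit of RACE/A
# but NOT its actual equations. All chunk names and parameter values
# are invented for illustration.

DECAY = 0.5    # fraction of activation retained per step (assumed)
SPREAD = 0.2   # fraction passed along each association (assumed)

# sji[(j, i)] plays the role of associations set manually with add-sji
sji = {
    ("word-bank", "concept-financial-bank"): 1.0,
    ("word-bank", "concept-river-bank"): 1.0,
    ("concept-financial-bank", "concept-money"): 1.0,
}

def step(activation):
    """One update cycle (e.g. 5 ms): decay, then spread over associations."""
    new = {chunk: a * DECAY for chunk, a in activation.items()}
    for (j, i), weight in sji.items():
        new[i] = new.get(i, 0.0) + SPREAD * weight * activation.get(j, 0.0)
    return new

activation = {"word-bank": 1.0, "concept-financial-bank": 0.0,
              "concept-river-bank": 0.0, "concept-money": 0.0}
for _ in range(3):
    activation = step(activation)
# "concept-money" now has nonzero activation even though it is two
# links away from the stimulus "word-bank".
```

The cost is visible even in this toy: every association is revisited on every step, which is exactly why recomputing all activations continuously becomes expensive.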
We haven't tested RACE/A on large DMs, but because every activation value is recalculated every 5 ms, model runs take a long time even for small DM sizes.</div><div><br></div><div>Leendert</div><div><br><div><div>On 14 Sep 2009, at 19:45, Ball, Jerry T Civ USAF AFMC 711 HPW/RHAC wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><span class="Apple-style-span" style="border-collapse: separate; font-family: Helvetica; font-size: medium; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; "><div lang="EN-US" link="blue" vlink="purple"><div class="Section1"><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; ">We are in the process of mapping the linguistic representations that are generated by our language comprehension model into a situation-model-based semantic representation. We are trying to do this in a representationally reasonable way within the ACT-R architecture. The problem we face is the many-to-many mapping between words and concepts. Individual words may map to multiple concepts, and individual concepts may map to multiple words. Given this many-to-many mapping, we would like to use mapping chunks to map from words to concepts. The mapping chunks would encode a single mapping relationship (e.g. a separate mapping chunk to map from the word "bank" to the financial institution concept; from the word "bank" to the river bank concept; from the concept dog to the word "dog"; from the concept dog to the word "canine"). 
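As a minimal sketch of the intended scheme (illustrative Python; all chunk and slot names are assumed, not actual model code), each mapping chunk carries exactly one word slot and one concept slot, so neither words nor concepts need extra slots to stay many-to-many:

```python
# Minimal sketch of the mapping-chunk idea: one word-concept link per
# mapping chunk. All names here are invented for illustration.

mapping_chunks = [
    {"word": "bank",   "concept": "financial-institution"},
    {"word": "bank",   "concept": "river-bank"},
    {"word": "dog",    "concept": "dog-concept"},
    {"word": "canine", "concept": "dog-concept"},
]

def concepts_for(word):
    """Every concept a given word can map to."""
    return [m["concept"] for m in mapping_chunks if m["word"] == word]

def words_for(concept):
    """Every word a given concept can map to."""
    return [m["word"] for m in mapping_chunks if m["concept"] == concept]

print(concepts_for("bank"))        # both senses of "bank"
print(words_for("dog-concept"))    # "dog" and "canine"
```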
When processing a word, the goal is to retrieve the contextually relevant concept. We would like to accomplish this in a single retrieval; however, we do not know how to do this given the single-level activation spreading mechanism in ACT-R. Since there is no direct link between a word and a concept if mapping chunks are used (i.e. there is no slot in the concept that contains the word), the word will not spread activation to the concept. Instead, given the use of mapping chunks, it appears that two retrievals are needed: 1) given the word, retrieve a mapping chunk, and 2) given a mapping chunk, retrieve a concept. Since our model of language comprehension is already slower than humans at processing language, any extra retrievals are problematic. In fact, we have already eliminated an extra retrieval in determining the part-of-speech of a word. Previously, two retrievals were needed: 1) retrieve the word corresponding to the perceptual input, and 2) given the word (and context), retrieve the part-of-speech of the word. While we were successful in eliminating a retrieval, the resulting word-pos chunks contain a mixture of word form information (e.g. the letters and trigrams in the word) and POS information. Even so, they do not yet contain any representation of phonetic, phonemic, syllabic or morphemic information. With just letter and trigram information, long words contain many slots. Ideally, we would like to represent letter and trigram information independently of each other and of POS information (allowing them to interact in retrieving a word), but given the single-level activation spreading mechanism in ACT-R, doing so would necessitate multiple independent retrievals, which would fail to capture the interaction of letter and trigram information that leads to successful retrievals of words in the face of variability in the perceptual form (e.g. 
"arispeed" should retrieve "airspeed").<o:p></o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; "> <o:p></o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; "> The fallback for mapping words to concepts is to embed all the possible concepts as slot values in a word and vice versa. While we consider this a representationally problematic solution -- word and concept chunks will wind up needing many extra slots -- we do not know how else to work around the single-level activation spread in ACT-R.<o:p></o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; "> <o:p></o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; "> The primary empirical argument against the need for multi-level activation spread in ACT-R is based on studies that show no activation from words like "bull" to words like "milk", even though "bull" activates "cow" and "cow" activates "milk". Even if it is true that there are no instances of "indirect" activation from "bull" to "milk", this does not rule out the need for multi-level activation spread. There is a hidden assumption that "cow" and "bull" are directly associated, and that "cow" and "milk" are also directly associated. 
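The point can be made concrete with a small sketch (illustrative Python; the association lists are assumed): with one level of spread, activation from "bull" reaches "cow" only; with two levels, it also reaches "milk" indirectly via "cow".

```python
# Sketch of the "bull"/"milk" point. The association lists below are
# invented for illustration.

associates = {
    "bull": ["cow"],
    "cow":  ["milk", "bull"],
    "milk": ["cow"],
}

def reachable(source, levels):
    """Chunks receiving any activation within `levels` spreading steps."""
    frontier, seen = {source}, {source}
    for _ in range(levels):
        frontier = {n for f in frontier for n in associates.get(f, [])} - seen
        seen |= frontier
    return seen - {source}

print(reachable("bull", 1))   # single-level spread reaches "cow" only
print(reachable("bull", 2))   # two-level spread now includes "milk"
```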
Such direct associations may seem reasonable in small-scale models addressing specific spreading activation phenomena, but they are questionable in a larger-scale model. Do we really want to include all the direct associates of "cow" as slot values in the "cow" chunk, and do the same for all other chunks?<o:p></o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; "> <o:p></o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; "> We understand that the inclusion of a multi-level activation spreading mechanism in ACT-R would be computationally explosive. However, we would like to have the capability to explore the use of such a mechanism and to look for ways to keep it computationally tractable. We have already dealt with the problem of computational explosion in our word retrieval mechanism. Originally, we attempted to use a "soft constraint" retrieval mechanism for words. All words in DM were candidates for retrieval -- the most highly activated word being retrieved. With just 2500 words in DM, the activation calculations slowed the model down considerably. To manage retrievals in a tractable manner, we implemented a disjunctive retrieval capability combined with a new perceptual span mechanism -- the model first tries a hard-constraint retrieval on the entire perceptual span (which is larger than a word) using the "get-chunk" function (and chop-string under the covers). 
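The overall back-off strategy can be sketched as follows (illustrative Python only; the real model uses ACT-R's get-chunk and chop-string against DM, for which the set lookup and names here are stand-ins): try the whole perceptual span first, then its first space-delimited word, then fall back to a hard constraint on the first letter.

```python
# Toy sketch of the back-off retrieval. The DM contents and the final
# tie-break are invented for illustration; a real model would let
# activation pick among first-letter candidates.

dm = {"air speed indicator", "airspeed", "altitude"}  # toy DM contents

def retrieve(perceptual_span):
    if perceptual_span in dm:              # whole span names a chunk
        return perceptual_span
    first_word = perceptual_span.split()[0]
    if first_word in dm:                   # first word names a chunk
        return first_word
    # last resort: hard constraint on the first letter only
    candidates = [c for c in dm if c.startswith(perceptual_span[0])]
    return min(candidates) if candidates else None

print(retrieve("air speed indicator"))  # whole-span match
print(retrieve("airspeed drops"))       # first-word back-off
print(retrieve("arispeed"))             # first-letter back-off
```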
If get-chunk succeeds (indicating that there is a chunk in DM corresponding to the entire perceptual span), a retrieval is constructed using the entire perceptual span as a hard constraint to retrieve the corresponding multi-word unit in DM. If this fails, the model backs off and uses the first space-delimited word (using chop-string) in the perceptual span to check for a corresponding word in DM -- if a match is found with get-chunk, a retrieval is constructed to retrieve the word. If all else fails, we construct a retrieval that imposes a hard constraint on the first letter (this is less than ideal, but a reasonable compromise). The overall effect is a (nearly) soft-constraint retrieval implemented in a computationally tractable way.<o:p></o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; "> <o:p></o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; "> A similar capability to effect multi-level activation spread in a computationally tractable manner would be highly desirable.<o:p></o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; "><o:p> </o:p></span></font></div><div style="margin-top: 0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Courier New"><span style="font-size: 10pt; font-family: 'Courier New'; ">Jerry<o:p></o:p></span></font></div><div style="margin-top: 
0in; margin-right: 0in; margin-bottom: 0.0001pt; margin-left: 0in; font-size: 12pt; font-family: 'Times New Roman'; "><font size="2" face="Arial"><span style="font-size: 10pt; font-family: Arial; "><o:p> </o:p></span></font></div></div>_______________________________________________<br>ACT-R-users mailing list<br><a href="mailto:ACT-R-users@act-r.psy.cmu.edu" style="color: blue; text-decoration: underline; ">ACT-R-users@act-r.psy.cmu.edu</a><br><a href="http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users" style="color: blue; text-decoration: underline; ">http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users</a><br></div></span></blockquote></div><br><div>
<span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-size: medium; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; "><div><br class="Apple-interchange-newline">###########################</div><div>Leendert van Maanen</div><div>Department of Artificial Intelligence</div><div>University of Groningen</div><div><br></div><div>P.O.Box 407</div><div>9700 AK Groningen</div><div>The Netherlands</div><div><br></div><div>W: <a href="http://www.ai.rug.nl/~leendert">http://www.ai.rug.nl/~leendert</a></div><div>E: <a href="mailto:leendert@ai.rug.nl">leendert@ai.rug.nl</a></div><div>T: +31 50 363 7603</div><div><div>###########################</div><div><br></div></div></span><br class="Apple-interchange-newline">
</div>
<br></div></body></html>