[ACT-R-users] Need for Multi-Level Activation Spread in ACT-R

Leendert van Maanen L.van.Maanen at ai.rug.nl
Fri Sep 18 03:35:11 EDT 2009


Jerry,

When developing a new, more detailed retrieval mechanism for ACT-R  
(called RACE/A, see also www.ai.rug.nl/~leendert/race), we ran into  
similar issues. That is, we were interested in multilink spreading  
activation to compute the activation levels of multiple chunks  
*during* a memory retrieval. In addition, for our model of picture- 
word interference (Van Maanen & Van Rijn, 2007 CSR) we also required  
the capability of spreading activation from a stimulus (a word or a  
picture) to multiple chunks, analogous to your airspeed/arispeed
example. To achieve this, we used the retrieval-set-hook to compute  
activations and add-sji to manually set the spreading activation  
between the chunks. This is similar to what Dan suggested.
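
For what it's worth, the two relevant calls look roughly like this
(the chunk names and the hook function below are invented purely to
illustrate the mechanism; they are not taken from the RACE/A code):

   ;; manually set associative strengths between chunks that do not
   ;; reference each other through their slots
   (add-sji (word-bank concept-money 1.5)
            (word-bank concept-river 0.8))

   ;; install a hook that is called with the set of chunks matching a
   ;; retrieval request, so that their activations can be recomputed
   (sgp :retrieval-set-hook my-race-activation-hook)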

One of the virtues of our approach is that during retrieval of one
chunk, other associated chunks (with associations set via add-sji)
will also increase in activation, allowing for the interactions
between pos/letter information that you are interested in. One of the
drawbacks is that computations slow down tremendously, as you said.
We haven't tested RACE/A on large DMs, but with every activation
value being recalculated every 5 ms, model runs take a long time even
for small DM sizes.

Leendert

On 14 Sep 2009, at 19:45, Ball, Jerry T Civ USAF AFMC 711 HPW/RHAC
wrote:

> We are in the process of mapping the linguistic representations
> that are generated by our language comprehension model into a  
> situation model based semantic representation. We are trying to do  
> this in a representationally reasonable way within the ACT-R  
> architecture. The problem we face is the many-to-many mapping  
> between words and concepts. Individual words may map to multiple  
> concepts, and individual concepts may map to multiple words. Given
> this many-to-many mapping, we would like to use mapping chunks to  
> map from words to concepts. The mapping chunks would encode a single  
> mapping relationship (e.g. a separate mapping chunk to map from the  
> word "bank" to the financial institution concept; from the word  
> "bank" to the river bank concept; from the concept dog to the word  
> "dog"; from the concept dog to the word "canine"). When processing a  
> word, the goal is to retrieve the contextually relevant concept. We  
> would like to accomplish this in a single retrieval, however, we do  
> not know how to do this given the single-level activation spreading  
> mechanism in ACT-R. Since there is no direct link between a word and  
> a concept if mapping chunks are used (i.e. there is no slot in the  
> concept that contains the word), the word will not spread activation  
> to the concept. Instead, given the use of mapping chunks, it appears  
> that two retrievals are needed: 1) given the word, retrieve a  
> mapping chunk, and 2) given a mapping chunk, retrieve a concept.  
> Since our model of language comprehension is already slower than  
> humans at processing language, any extra retrievals are problematic.  
> In fact, we have already eliminated an extra retrieval in  
> determining the part-of-speech of a word. Previously, two retrievals  
> were needed: 1) retrieve the word corresponding to the perceptual  
> input, and 2) given the word (and context) retrieve the part-of- 
> speech of the word. While we were successful in eliminating a  
> retrieval, the resulting word-pos chunks contain a mixture of word  
> form information (e.g. the letters and trigrams in the word) and pos  
> information. Even so, they do not yet contain any representation of  
> phonetic, phonemic, syllabic or morphemic information. With just  
> letter and trigram information, long words contain many slots.  
> Ideally, we would like to represent letter and trigram information  
> independently of each other and POS information (allowing them to  
> interact in retrieving a word), but given the single-level  
> activation spreading mechanism in ACT-R doing so would necessitate  
> multiple independent retrievals, which would fail to capture the  
> interaction of letter and trigram information that leads to  
> successful retrievals of words in the face of variability in the  
> perceptual form (e.g. "arispeed" should retrieve "airspeed").
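>
>       As a purely illustrative sketch of such mapping chunks (the
> chunk-type and chunk names below are invented, not taken from our
> model):
>
>   (chunk-type word-concept-map word concept)
>
>   (add-dm
>    (bank-to-finance isa word-concept-map word bank concept financial-institution)
>    (bank-to-river   isa word-concept-map word bank concept river-bank)
>    (dog-to-canine   isa word-concept-map concept dog-concept word canine))
>
> With this layout, processing "bank" spreads activation to the two
> bank mapping chunks (it appears in their word slot) but not on to
> financial-institution or river-bank themselves, hence the two
> retrievals described above.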
>
>       The fallback for mapping words to concepts is to embed all
> the possible concepts as slot values in a word and vice versa. While
> we consider this a representationally problematic solution -- word
> and concept chunks will wind up needing many extra slots -- we do not
> know how else to work around the single-level activation spread in
> ACT-R.
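>
>       For comparison, the fallback representation would look roughly
> like this (again with invented names), which is what forces the
> extra slots:
>
>   (chunk-type word form concept1 concept2 concept3)
>
>   (add-dm
>    (bank isa word form "bank"
>          concept1 financial-institution concept2 river-bank))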
>
>       The primary empirical argument against the need for multi- 
> level activation spread in ACT-R is based on studies which show no  
> activation from words like "bull" to words like "milk", even though  
> "bull" activates "cow" and "cow" activates "milk". Even if it is  
> true that there are no instances of "indirect" activation from  
> "bull" to "milk", this does not rule out the need for multi-level  
> activation spread. There is a hidden assumption that "cow" and  
> "bull" are directly associated, and that "cow" and "milk" are also  
> directly associated. Such direct associations may seem reasonable in  
> small-scale models addressing specific spreading activation  
> phenomena, but they are questionable in a larger-scale model. Do we  
> really want to include all the direct associates of "cow" as slot  
> values in the "cow" chunk, and do the same for all other chunks?
>
>       We understand that the inclusion of a multi-level activation  
> spreading mechanism in ACT-R would be computationally explosive.  
> However, we would like to have the capability to explore use of such  
> a mechanism and to look for ways to keep it computationally  
> tractable. We have already dealt with the problem of computational  
> explosion in our word retrieval mechanism. Originally, we attempted  
> to use a "soft constraint" retrieval mechanism for words. All words  
> in DM were candidates for retrieval--the most highly activated word  
> being retrieved. With just 2500 words in DM, the activation  
> calculations slowed the model down considerably. To manage  
> retrievals in a tractable manner we implemented a disjunctive  
> retrieval capability combined with a new perceptual span mechanism  
> -- the model first tries a hard-constraint retrieval on the entire  
> perceptual span (which is larger than a word) using the "get-chunk"  
> function (and chop-string under the covers). If get-chunk succeeds  
> (indicating that there is a chunk in DM corresponding to the entire  
> perceptual span), a retrieval is constructed using the entire
> perceptual span as a hard constraint to retrieve the corresponding
> multi-word unit in DM. If this fails, the model backs off and uses
> the first space-delimited word (using chop-string) in the perceptual
> span to check for a corresponding word in DM -- if a match is found  
> with get-chunk, a retrieval is constructed to retrieve the word. If  
> all else fails, we construct a retrieval that imposes a hard  
> constraint on the first letter (this is less than ideal, but a  
> reasonable compromise). The overall effect is a (nearly) soft- 
> constraint retrieval implemented in a computationally tractable way.
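>
>       In outline, the back-off logic is roughly the following. The
> helper functions are hypothetical stand-ins rather than the real
> code: chunk-in-dm-p would wrap get-chunk, and the request-* functions
> stand for constructing the actual retrieval request.
>
>   (defun construct-word-retrieval (span)
>     ;; span is the text covered by the perceptual span
>     (cond ((chunk-in-dm-p span)                 ; whole span is in DM:
>            (request-retrieval-for span))        ; retrieve the multi-word unit
>           ((chunk-in-dm-p (first-word-of span)) ; back off to the first
>            (request-retrieval-for               ; space-delimited word
>             (first-word-of span)))
>           (t                                    ; last resort: hard constraint
>            (request-retrieval-first-letter span)))) ; on the first letter only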
>
>       A similar capability to effect multi-level activation spread  
> in a computationally tractable manner would be highly desirable.
>
> Jerry
>
> _______________________________________________
> ACT-R-users mailing list
> ACT-R-users at act-r.psy.cmu.edu
> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users


###########################
Leendert van Maanen
Department of Artificial Intelligence
University of Groningen

P.O.Box 407
9700 AK Groningen
The Netherlands

W: http://www.ai.rug.nl/~leendert
E: leendert at ai.rug.nl
T: +31 50 363 7603
###########################


