<html>
<body>
Hi Richard,<br><br>
The absolute activation level is relatively unimportant; what matters are
the *differences* between the BLAs of competing chunks. One place that
might help you develop a quantitative feel for the BLA is the following
paper, to appear in Psych Review in April 2005. Figure 9
illustrates the "typical activation dynamics" for a frequently
used chunk and for an infrequently used one. You might also want to check
Equations 15 and 16 and the surrounding discussion. Eq. 15 is the
theoretically sound, non-optimized learning rule; Eq. 16 is the optimized
approximation. The paper can be downloaded from<br>
<a href="http://www.socsci.uci.edu/~apetrov/pub/biganchor/" eudora="autourl">http://www.socsci.uci.edu/~apetrov/pub/biganchor/</a><br><br>
<a href="http://www.socsci.uci.edu/~apetrov/">Petrov,
A.</a> & <a href="http://act-r.psy.cmu.edu/people/ja/">Anderson, J.
R.</a> (2005). The Dynamics of Scaling: A Memory-Based Anchor Model of
Category Rating and Absolute Identification. Psychological Review, 112
(2), xxx-xxx.<br><br>
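If it helps to build that quantitative feel, here is a quick sketch of the two learning rules (my own illustration in Python, not code from the paper), using ACT-R's default decay d = .5:

```python
import math

def bla_exact(use_times, now, d=0.5):
    """Eq. 15 (non-optimized): B = ln( sum_j (now - t_j)^(-d) ),
    summing over the time t_j of every past use of the chunk."""
    return math.log(sum((now - t) ** -d for t in use_times))

def bla_optimized(n, creation_time, now, d=0.5):
    """Eq. 16 (optimized approximation): B = ln( n / (1 - d) ) - d * ln(L),
    where n is the number of uses and L the lifetime of the chunk.
    It assumes the n uses are spread evenly over that lifetime."""
    return math.log(n / (1 - d)) - d * math.log(now - creation_time)

# Richard's example below: n = 10000 uses, creation time -20000 s
print(bla_optimized(10000, -20000, 0))   # ~4.95, i.e. the BLA of ~5.0 he reports
```

With d = .5 the optimized rule reduces to B = ln(2n) - .5 ln(L), so n = 10000 and L = 20000 give exactly .5 ln(20000) = 4.95.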
Your intuition is correct -- giving a chunk a large number of prior uses
does make its BLA higher and less volatile. To reduce volatility
even further, set the creation time to some relatively recent moment,
and then reduce the number of uses accordingly. The (approximate)
algorithm that updates the BLA assumes that the old uses are evenly
distributed throughout the lifetime of the chunk. So an old
chunk with many uses can still experience a sharp transient boost just
after each subsequent use. By contrast, the BLA of a chunk with several
*recent* uses is much less sensitive to recency effects.<br><br>
Good luck!<br>
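To see the transient boost concretely, here is a toy illustration (mine, not taken from the paper) using the exact rule of Eq. 15: the BLA spikes sharply just after a use and then relaxes as the new trace decays, which is the spiky profile Figure 9 illustrates.

```python
import math

def bla(use_times, now, d=0.5):
    # Eq. 15: B = ln( sum_j (now - t_j)^(-d) )
    return math.log(sum((now - t) ** -d for t in use_times))

uses = [10.0, 40.0, 70.0, 100.0]   # the most recent use just happened at t = 100

print(bla(uses, 100.05))   # ~1.59 -- sharp boost 50 ms after the use
print(bla(uses, 110.0))    # ~-0.37 -- back down only 10 s later
```

The pure optimized rule (Eq. 16) cannot show this spike, because it only tracks the number of uses and the lifetime, not the moment of the last use.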
-- Alex<br><br>
-------------------------------------------------------<br>
Alexander A Petrov: apetrov@uci.edu<br><br>
Post-doctoral Researcher<br>
Department of Cognitive Sciences<br>
University of California, Irvine<br>
<a href="http://www.socsci.uci.edu/~apetrov/" eudora="autourl">http://www.socsci.uci.edu/~apetrov/</a><br><br>
It is better to light one candle than to <br>
curse the darkness. --- Confucius ---<br>
-------------------------------------------------------<br><br>
<br><br>
At 10:31 AM 3/1/2005, Richard M Young wrote:<br>
<blockquote type=cite class=cite cite="">Hello Acters,<br><br>
I'd like to ask about advice and "good practice" (or failing
that, "common practice" or even just "practice")
about setting the base-level activation for familiar items, such as
frequently used words.<br><br>
For example, say in my case a chunk which links my written name
"Richard" with the concept (or "meaning") RICHARD, is
very familiar to me, I've encountered it frequently and recently and
therefore its base-level activation (BLA) should be high, much higher
than say an item that I've encountered just once, several seconds
ago. It will therefore be retrieved much faster, so to get
realistic latencies from an ACT-R model it is important to set the BLAs
at least roughly right. Furthermore, the BLA of a long-familiar
item should not decay measurably during the performance of a
task.<br><br>
Q1. I don't really have a quantitative feel for the BLAs.
Should it be around say 5.0 in a case like that, does anybody
know?<br><br>
I'm not sure how to set it appropriately, and the means for doing so
interact with whether base-level learning is on or not, and with whether
optimised learning is on or not.<br><br>
One can use the command (set-base-level ...) to set the BLA directly, but
only if :bll is off. (And incidentally, turning :bll on later seems
to cause problems for ACT-R.)<br><br>
If :bll is on, then the command (set-base-level ...) has to be used with
frequency and recency parameters. For example, I've found that
specifying an N of 10000 with a time of -20000 gives a BLA of around
5.0.<br><br>
Q2. What sort of numbers do other modellers use for a familiar item
of this kind?<br><br>
However ... if one turns optimised learning off, either before the
(set-base-level ...) command or after it, Act-R gets hung up and, at
least for those numbers I've used, runs out of memory (!).<br><br>
Q3. So, what is a sensible BLA for a familiar chunk, and how does
one set it if one wants to run the model with optimised learning off for
the items that are learned during the course of the model run?<br><br>
Any advice gratefully received,<br>
-- Richard<br><br>
_______________________________________________<br>
ACT-R-users mailing list<br>
ACT-R-users@act-r.psy.cmu.edu<br>
<a href="http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users" eudora="autourl">http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users</a><br>
</blockquote></body>
</html>