[ACT-R-users] advice/practice about setting base-level for familiar items

Richard M Young r.m.young at acm.org
Tue Mar 1 13:31:23 EST 2005


Hello Acters,

I'd like to ask about advice and "good practice" (or failing that, 
"common practice" or even just "practice") about setting the 
base-level activation for familiar items, such as frequently used 
words.

For example, the chunk that links my written name "Richard" with 
the concept (or "meaning") RICHARD is very familiar to me: I have 
encountered it frequently and recently, and therefore its 
base-level activation (BLA) should be high, much higher than that 
of an item I've encountered just once, several seconds ago.  The 
familiar chunk will therefore be retrieved much faster, so to get 
realistic latencies from an ACT-R model it is important to set 
the BLAs at least roughly right.  Furthermore, the BLA of a 
long-familiar item should not decay measurably during the 
performance of a task.
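(For anyone wanting a quantitative handle on this: under base-level 
learning, ACT-R computes B = ln(sum over presentations of t^-d), 
where t is the time since each presentation and d is the decay rate, 
by default 0.5.  A minimal numerical sketch, with purely illustrative 
presentation histories, not a claim about any particular word:)

```python
import math

def base_level(times_since_presentation, d=0.5):
    """ACT-R base-level learning equation:
    B = ln( sum_j t_j^-d ), t_j = time since presentation j."""
    return math.log(sum(t ** -d for t in times_since_presentation))

# A chunk encountered once, 5 seconds ago: already negative.
novel = base_level([5.0])            # ln(5^-0.5) ~= -0.80

# A long-familiar chunk: many presentations over a long history
# (illustrative numbers only).
familiar = base_level(list(range(1, 10001)))

print(round(novel, 2), round(familiar, 2))
```

(The familiar chunk comes out well above 5, and because the sum is 
dominated by the many old presentations, it barely changes over the 
seconds or minutes of a task run, which is the intuition behind Q1.)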

Q1.  I don't really have a quantitative feel for the BLAs.  Should it 
be around say 5.0 in a case like that, does anybody know?

I'm not sure how to set it appropriately, and the means for doing so 
interact with whether base-level learning is on or not, and with 
whether optimised learning is on or not.

One can use the command (set-base-level ...) to set the BLA 
directly, but only if :bll is off.  (And incidentally, turning 
:bll on later seems to cause problems for ACT-R.)

If :bll is on, then the command (set-base-level ...) has to be used 
with frequency and recency parameters.  For example, I've found that 
specifying an N of 10000 with a time of -20000 gives a BLA of around 
5.0.
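(Those numbers agree with the optimised-learning approximation to 
the base-level equation, B = ln(n / (1 - d)) - d * ln(L), where n is 
the number of presentations, L the chunk's lifetime, and d the decay 
rate, by default 0.5.  A quick check, using the N and creation time 
quoted above:)

```python
import math

def bla_optimized(n, lifetime, d=0.5):
    """Optimised base-level learning approximation:
    B = ln(n / (1 - d)) - d * ln(L)."""
    return math.log(n / (1 - d)) - d * math.log(lifetime)

# N = 10000 with a creation time of -20000 (lifetime 20000 s)
print(bla_optimized(10000, 20000))   # ~ 4.95, i.e. around 5.0
```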

Q2.  What sort of numbers do other modellers use for a familiar item 
of this kind?

However ... if one turns optimised learning off, either before or 
after the (set-base-level ...) command, ACT-R gets hung up and, at 
least for the numbers I've used, runs out of memory (!).

Q3.  So, what is a sensible BLA for a familiar chunk, and how does 
one set it if one wants to run the model with optimised learning off 
for the items that are learned during the course of the model run?

Any advice gratefully received,
-- Richard
