I wanted to make one last post to the list to let everyone know that I did find the answer to my own question last night and now have enough information to explore further.

I found log likelihood in Bayesian networks on p. 716 of Russell and Norvig, Artificial Intelligence: A Modern Approach, which answers my question almost in full. Surely others will want to know this too, and the fact that it is covered in Russell and Norvig may be helpful to many.
Anderson reformulated his Activation formula as a maximum-likelihood parameter-learning problem expressed in log likelihoods:
log posterior(i | C) = log prior(i) + Σ_{j ∈ C} log likelihood(j | i)
in which the components of his Activation formula line up, term by term, with the pieces of this formula. That makes ACT-R, at its core, a statistical inference method. There is more to read on this in Russell and Norvig, but it seems a good starting point for anyone looking at ACT-R from a computer science perspective.
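For anyone who wants to play with the idea, here is a minimal sketch of that log-posterior sum in Python. The prior and likelihood tables are made-up illustrative numbers, and the names (chunk_a, cue_1, etc.) are my own, not ACT-R's actual parameters:

    import math

    # Hypothetical prior P(i) over items and likelihoods P(j | i) for
    # context elements j; the numbers are illustrative only.
    prior = {"chunk_a": 0.6, "chunk_b": 0.4}
    likelihood = {
        ("cue_1", "chunk_a"): 0.7, ("cue_2", "chunk_a"): 0.2,
        ("cue_1", "chunk_b"): 0.3, ("cue_2", "chunk_b"): 0.5,
    }

    def log_posterior(i, context):
        # log prior(i) + sum over j in C of log likelihood(j | i),
        # i.e. the log posterior up to a normalizing constant
        return math.log(prior[i]) + sum(
            math.log(likelihood[(j, i)]) for j in context
        )

    context = ["cue_1", "cue_2"]
    for i in prior:
        print(i, log_posterior(i, context))

The item with the largest sum is the one the context most strongly activates, which is the sense in which the Activation formula is doing Bayesian inference.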
A hearty thanks to Darryl for the reference he sent.

Karri Peterson