Parameter learning problem

Marsha Lovett lovett+ at CMU.EDU
Wed Oct 8 11:40:48 EDT 1997


I will pipe in too since Chris (Schunn) just alerted me to these emails.

I have definitely encountered this conflict-resolution situation and can
offer one easy, practical solution: changing the value of G (from the
default 20) changes the trade-off between costs and accuracy.  For
example, with G=10, the difference in q's is de-emphasized and the two
productions have PG-C values of 4.8 and 5.  In this situation, the noise
would be enough to get the system to keep trying retrieval (long enough
to see that q eventually exceeds 0.5).
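
To make that arithmetic concrete, below is a small Python sketch of the
conflict-resolution quantity E = PG - C (with P = q*r).  The q, r, and C
values are assumptions chosen only to reproduce the 4.8 vs. 5 figures;
they need not match the parameters in Niels's actual model, and the
uniform noise is just a stand-in for ACT-R's expected-gain noise.

import random

def expected_gain(p, c, G, s=0.5):
    # E = PG - C plus transient noise (uniform here, purely for illustration)
    return p * G - c + random.uniform(-s, s)

retrieve_p, retrieve_c = 0.5 * 1.0, 0.2   # assumed: q = 0.5, r = 1.0, cost 0.2 s
compute_p,  compute_c  = 1.0 * 1.0, 5.0   # assumed: q = 1.0, r = 1.0, cost 5.0 s

for G in (20, 10):
    wins = sum(expected_gain(retrieve_p, retrieve_c, G) >
               expected_gain(compute_p, compute_c, G)
               for _ in range(10_000))
    print(f"G={G}: retrieval wins {wins / 100:.1f}% of conflict resolutions")

# With G=20 the gap (9.8 vs. 15.0) swamps the noise, so retrieval almost
# never fires; with G=10 (4.8 vs. 5.0) the noise lets retrieval get tried
# often enough for the model to discover that its q eventually exceeds 0.5.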

Niels's first solution (changing the parameter-learning scheme for q) is
interesting.  It highlights the possibility that one can always design a
model that emphasizes cost learning over probability learning.  Also,
note that the current interpretation of q for retrieval productions is
relatively new, so there may be some backlash against the P=q*r equation.
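
For anyone who hasn't used parameter learning: q is estimated, roughly,
as an empirical ratio of successes to total experiences, seeded with
prior counts.  The sketch below is an assumed, simplified version of
that bookkeeping (the prior values in particular are made up), just to
make the "change the learning scheme for q" idea concrete.

class QEstimate:
    def __init__(self, prior_successes=1.0, prior_failures=0.0):
        # assumed priors; ACT-R seeds these with initial-experience counts
        self.successes = prior_successes
        self.failures = prior_failures

    def record(self, succeeded):
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def q(self):
        return self.successes / (self.successes + self.failures)

# P for a retrieval production is q * r, so a run of early retrieval
# failures drags P (and hence E = PG - C) down before the rule can recover.
est = QEstimate()
for outcome in [False, False, True, True, True]:
    est.record(outcome)
print(round(est.q, 2))   # dips with the early failures, recovers with practice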

The second solution (and Chris S's first suggestion) are radical in that
they imply retrieval always precedes computation.  A slightly less
radical version might be to include in models the combo rule Niels
describes (if retrieval fails, push a compute goal) along with a
competing compute rule.  This way, one could short-circuit to computing
w/o having first failed at the retrieval.
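
In case it helps, here is a schematic of that production structure (in
Python, not ACT-R syntax): a combined retrieve-then-compute rule
competing with a direct compute rule.  All the names and the 0.5
retrieval-failure rate are hypothetical, for illustration only.

import random

def retrieve(problem):
    # stand-in for declarative retrieval; fails half the time here
    return f"retrieved({problem})" if random.random() < 0.5 else None

def compute(problem):
    return f"computed({problem})"

def retrieve_else_compute(problem):
    answer = retrieve(problem)
    if answer is None:            # retrieval failed: push a compute goal
        answer = compute(problem)
    return answer

def compute_directly(problem):
    return compute(problem)

# Conflict resolution would choose between the two competing rules via
# E = PG - C; here we just pick one at random to show both paths exist.
rule = random.choice([retrieve_else_compute, compute_directly])
print(rule("3 + 4"))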

Finally, Chris's notion of incorporating the "familiarity with the
problem" effect into these conflict-resolution issues is one that has
been around for a while.  His idea of allowing goals that are just new
versions of previously held goals to have more activation is
interesting.  It means changing the semantics of goal activation,
though.  One could imagine that the activation of a goal is a JOINT
function of its familiarity (as in chunk practice) and the attention
directed toward it (as in some amount of W).  This would produce effects
like the following: it is easier to "attend" to a familiar goal (it
requires less W to make the goal useful for other retrievals).
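
A toy sketch of what such a joint function could look like follows.  The
base-level form and the additive combination with W are assumptions made
for illustration, not a claim about how goal activation currently works
in ACT-R; the point is only that a well-practiced goal would need less W
to reach the same source strength as a novel one.

import math

def base_level(n_presentations, decay=0.5, time_since_first=100.0):
    # crude base-level term: more prior versions of the goal -> higher B
    return math.log(max(n_presentations, 1)) - decay * math.log(time_since_first)

def source_strength(n_presentations, attention_W):
    # assumed joint function of familiarity (practice) and attention (W)
    return base_level(n_presentations) + attention_W

# The familiar goal needs much less W (0.5 vs. 3.5) to reach about the
# same strength as the novel one:
print(round(source_strength(n_presentations=20, attention_W=0.5), 2))
print(round(source_strength(n_presentations=1,  attention_W=3.5), 2))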

--Marsha 




