associative learning
Wolfgang Schoppek
Wolfgang.Schoppek at uni-bayreuth.de
Thu Apr 8 04:59:09 EDT 1999
I have some difficulties with the associative learning mechanism.
The task of my model is to observe successive states of a system
consisting of four switches and four lamps. Later on the model does a
recognition task. For every system state a chunk of the following form
is created:
goal-12
isa triplet
switches swi03
lamps la03
context nil
In the recognition section a probe is shown which looks like this (after
some basic processing):
probe1
isa probe
swi1 swi03
la1 la03
swi2 nil
la2 nil
Since some states are shown only once, the base-level activations of the
corresponding triplets are very low. Retrieval therefore depends
strongly on the associative weights between the swi0x and la0x chunks
and the triplets (representing system states).
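(For reference, the low base levels follow directly from the Base-Level
Learning Equation, B_i = ln(sum_j t_j^-d). A minimal sketch, assuming the
default decay d = 0.5 and illustrative presentation times of my own choosing:

```python
import math

def base_level(presentation_ages, d=0.5):
    """ACT-R Base-Level Learning Equation: B_i = ln(sum_j t_j^-d),
    where t_j is the time since the j-th presentation of the chunk."""
    return math.log(sum(t ** -d for t in presentation_ages))

# A triplet seen only once, 100 s ago, vs. one seen five times recently:
once = base_level([100.0])
often = base_level([5, 20, 40, 60, 100])
print(round(once, 2), round(often, 2))
```

A once-shown state ends up with a clearly negative base level, so its
retrieval odds hinge almost entirely on the IAs from the probe's sources.)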
My first problem:
Because of the continued creation of new chunks and the Prior Strength
Equation S*ji = ln(m) - ln(n), states shown later are more strongly
associated with their components than states shown earlier. The effect
arises because there are initially only about 75 chunks in declarative
memory, and it cannot be found in the empirical data.
- Is the assumption of the initial number of chunks <100 realistic?
(If it were 1000 chunks initially, the unwanted effect would almost
disappear.)
- Has anybody encountered a similar problem?
- Are there solutions to circumvent the effect?
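The size of the unwanted effect can be checked directly from the Prior
Strength Equation. A minimal sketch, where m is the total number of chunks
in declarative memory and n the fan of the source (assumed here to be 2,
as in the ln(84) - ln(2) example below):

```python
import math

def prior_strength(m, n=2):
    """ACT-R Prior Strength Equation: S*ji = ln(m) - ln(n)."""
    return math.log(m) - math.log(n)

# Starting from ~75 chunks, every newly created triplet raises m,
# so a state learned 40 chunks later gets a noticeably stronger prior:
print(round(prior_strength(75 + 40) - prior_strength(75 + 1), 2))

# With 1000 initial chunks the same 40 creations barely matter:
print(round(prior_strength(1000 + 40) - prior_strength(1000 + 1), 2))
```

Since the difference only depends on the ratio of the two m values, a large
initial chunk count washes the order effect out, as noted above.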
My second problem:
Consider goal-12 as an example. When the slots of goal-12 are filled
with the chunks swi03 and la03, there are IAs between swi03 / la03 and
goal-12 of about 3.7 ( ln(84) - ln(2) ).
In the next cycle a subgoal is pushed - which does not affect the IAs.
But after the first cycle of the subgoal the IAs between swi03 / la03
and goal-12 suddenly drop to 2.5 although none of the above chunks is
involved in the processing of the subgoal. -> ???
After popping the subgoal there are a few cycles of idleness (goal-12 on
top of the stack), accompanied by a further decline of the IAs (which I
do understand). After popping goal-12 the IAs are about 1.5 (the exact
value depends on the number of idle cycles).
- Can anybody explain to me the sudden drop of the IAs in the first
cycle of the subgoal? (I cannot find the answer in the 1998 book.)
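In case it helps frame the question: as I understand the 1998 book, the
learned IA is a weighted mixture of the prior ratio and an empirical ratio
(Posterior Strength Equation). A rough sketch of how extra context cycles
without a matching retrieval pull S_ji down; the parameter name 'assoc',
the assumed empirical ratio, and the context-counting details are my own
assumptions, and this is not meant to explain the suddenness of the drop:

```python
import math

def posterior_strength(prior_ratio, E_ji, F_Cj, assoc=1.0):
    """Sketch of the ACT-R 4.0 Posterior Strength Equation:
    S_ji = ln( (assoc * R*ji + F(Cj) * E_ji) / (assoc + F(Cj)) ),
    a weighted average of the prior ratio R*ji and the empirical
    ratio E_ji, weighted by the prior weight 'assoc' and the number
    of cycles F(Cj) that source j has been in the context."""
    return math.log((assoc * prior_ratio + F_Cj * E_ji)
                    / (assoc + F_Cj))

prior = 84 / 2   # R*ji = m/n, i.e. ln(84) - ln(2), about 3.7
E = 10.0         # assumed empirical ratio, smaller than the prior
for cycles in (1, 3, 10):
    print(cycles, round(posterior_strength(prior, E, cycles), 2))
```

Under these assumptions each additional cycle with swi03 / la03 in the
context, but no needed retrieval of goal-12, shifts the mixture toward the
lower empirical ratio, which matches the gradual decline during idleness
but not the abrupt step at the first subgoal cycle.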
In one version of my model, turning off associative learning improves
the fit. That would not be so bad if another version of the model
(simulating a different experimental condition) did not make extensive
use of that learning mechanism.
-- Wolfgang
--------------------------------------------------------------------
Dr. Wolfgang Schoppek <<< Tel.: +49 921 555003 <<<
Lehrstuhl fuer Psychologie, Universitaet Bayreuth, 95440 Bayreuth
http://www.uni-bayreuth.de/departments/psychologie/wolfgang.htm
--------------------------------------------------------------------