Error gradients through associative learning in R/PM
ERIK M. ALTMANN
altmann at gmu.edu
Thu Oct 7 11:58:56 EDT 1999
Last week I introduced an associative-learning model that accounted
for positional uncertainty and unpacked partial matching. In that
model I made a dual-code assumption, in which each element was
represented by a positional code as well as an item code. There's
independent empirical evidence for this distinction, but it wasn't
directly constrained by ACT theory itself.
It turns out that the distinction maps directly onto the dual-code
representation used in ACT-R/PM's vision module. In preparation for
moving visual attention, the vision module finds a new location
pre-attentively and represents it in DM as a chunk. Cognition takes
this chunk and cycles it back to vision as the target for attention.
Vision then outputs a chunk representing the object at that location.
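For concreteness, here is a minimal Python sketch of that cycle. The
function and slot names are my own stand-ins for illustration, not
R/PM's actual API:

    # Toy display: letters at screen coordinates.
    display = {(10, 50): "A", (20, 50): "B", (30, 50): "C"}

    def find_location(xy):
        # Pre-attentive step: vision encodes a location chunk into DM.
        return {"isa": "visual-location", "screen-x": xy[0], "screen-y": xy[1]}

    def move_attention(loc):
        # Cognition cycles the location chunk back to vision, which
        # returns a chunk for the object found at that location.
        value = display[(loc["screen-x"], loc["screen-y"])]
        return {"isa": "visual-object", "value": value}

    loc = find_location((10, 50))
    print(move_attention(loc))     # {'isa': 'visual-object', 'value': 'A'}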
In current R/PM, the visual-location and visual-object chunks are
linked to each other symbolically. That is, the name of one chunk is
a slot value in the other chunk, in both directions. There are good
reasons to do this, but it's not clear that perfect symbolic links are
the most accurate assumption. If memory is subject to noise, then
incorrect associations should be possible here as in any other memory
representation. Moreover, at least some of Pylyshyn's studies
indicate that visual indexes are subject to noise and can become
re-bound to incorrect objects.
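In sketch form (slot names mine again), the current scheme amounts to
each chunk holding the other's name, so retrieval across the link can
never err:

    # Perfect bidirectional symbolic links, as in current R/PM: the name
    # of each chunk is a slot value in the other (illustrative slots).
    chunks = {
        "loc1": {"isa": "visual-location", "object": "obj1"},
        "obj1": {"isa": "visual-object", "value": "A", "location": "loc1"},
    }
    # Following the links is deterministic: loc1 -> obj1 -> loc1.
    assert chunks[chunks["loc1"]["object"]]["location"] == "loc1"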
If one relaxes the assumption that visual locations and visual objects
are perfectly linked, and instead requires that links be formed by ACT-R's
associative learning mechanism, then positional uncertainty falls out,
kerplunk. In the resulting representation, a visual location
generally maps to the corresponding visual object and vice versa, but
pointers can go awry. Given the dynamics of base-level activation,
locations or objects retrieved in error will be near neighbors
(temporally) more often than they will be far neighbors, so
association errors (and hence recall errors) will be near misses more
often than far misses.
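To see the near-miss prediction fall out numerically, here's a small
Python simulation using the base-level learning equation B = ln(t^-d)
with the standard decay d = .5; the Gaussian noise (s = .4) and the
one-second encoding schedule are just illustrative stand-ins for
ACT-R's logistic activation noise and a real encoding history:

    import math, random

    random.seed(1)
    d, s = 0.5, 0.4              # standard decay; illustrative noise level
    encode = {i: float(i) for i in range(1, 6)}   # five items, one per second
    now = 5.05                                     # retrieve just after item 5

    def activation(i):
        # Base-level learning with one presentation, B = ln(t^-d) = -d*ln(t),
        # plus Gaussian noise standing in for ACT-R's logistic noise.
        return -d * math.log(now - encode[i]) + random.gauss(0, s)

    errors = {}
    for _ in range(10000):
        winner = max(encode, key=activation)
        if winner != 5:                            # item 5 is the target
            errors[winner] = errors.get(winner, 0) + 1
    # Error counts typically fall off with temporal distance from the
    # target (4 most often, 1 least often): near misses beat far misses.
    print(errors)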
The other wrinkle in this representation is that visual objects don't
point to their own visual locations, but to the visual location of the
*next* element. This provides an efficient episodic representation in
which there is some redundancy between location and object codes, in
that locations point to objects and objects point to neighboring
locations, but there is also a way to trace forward through sequences
of events, something we seem to do quite naturally when tracing
episodes in our mind's eye.
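In sketch form (names mine), the chain looks like this, and tracing an
episode is just walking it:

    # Locations point to their objects; objects point to the *next*
    # location, so the episode can be traced forward (illustrative names).
    chunks = {
        "loc1": {"object": "obj1"}, "obj1": {"next-loc": "loc2"},
        "loc2": {"object": "obj2"}, "obj2": {"next-loc": "loc3"},
        "loc3": {"object": "obj3"}, "obj3": {"next-loc": None},
    }

    def trace(loc):
        # Yield the objects of an episode in order.
        while loc:
            obj = chunks[loc]["object"]
            yield obj
            loc = chunks[obj]["next-loc"]

    print(list(trace("loc1")))     # ['obj1', 'obj2', 'obj3']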
Files:
hfac.gmu.edu/people/altmann/nairne-rpm.txt Model code and R/PM mods
hfac.gmu.edu/people/altmann/nairne-rpm.xl Model fits
-----------------------
Erik M. Altmann
Psychology 2E5
George Mason University
Fairfax, VA 22030
703-993-1326
altmann at gmu.edu
hfac.gmu.edu/~altmann
-----------------------