List Memory and Alphabetic Retrieval

Todd R Johnson Todd.R.Johnson at uth.tmc.edu
Fri Feb 12 19:25:15 EST 1999


As many of you know, I have been working on models of computation and recall
in alphabetic retrieval (in the context of alphabet arithmetic). I'm
wondering if any of you can shed any light on the following problem.

In the original model I used a standard chunk-based representation for the
alphabet:

   (alpha1 ISA item first a second b third c fourth d
           last-pos fourth next alpha2 parent alphabet)

This means that if you want to know what comes after b, you must retrieve
the chunk that b is in, then walk through the items until you reach b, then
retrieve the next letter. This is consistent with Klahr's model of
alphabetic retrieval and with the phenomenon that when we retrieve the chunk
containing a probe, we must then start at the beginning of the chunk in
order to locate the position of the probe.
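The walk-through process implied by this representation can be sketched in Python. The chunk layout follows the example above; the chunk boundaries, the second chunk's contents, and the procedural details are my assumptions, not part of the original model:

```python
# Chunk-based alphabet representation: each chunk stores its items by
# ordinal position plus a pointer to the next chunk.
CHUNKS = {
    "alpha1": {"items": ["a", "b", "c", "d"], "next": "alpha2"},
    "alpha2": {"items": ["e", "f", "g"], "next": None},  # assumed second chunk
}

def letter_after(probe):
    """Find the letter after `probe` by retrieving its chunk and then
    walking from the chunk's first item up to the probe's position."""
    for name, chunk in CHUNKS.items():           # retrieve the chunk containing the probe
        if probe in chunk["items"]:
            pos = 0
            while chunk["items"][pos] != probe:  # walk item by item from the start
                pos += 1
            if pos + 1 < len(chunk["items"]):
                return chunk["items"][pos + 1]
            # probe is last in its chunk: the successor is the first
            # item of the next chunk
            nxt = CHUNKS.get(chunk["next"])
            return nxt["items"][0] if nxt else None
    return None
```

The point is that locating the probe requires the serial walk from the chunk's first slot, even though the slots themselves are positional.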

There are several problems with this representation. One is that it is
completely inconsistent with that used in the ACT-R models for serial and
list memory. One might argue that since the alphabet is a well-learned list,
it might deserve a different representation from lists such as those used in
typical serial recall experiments. However, this difference seems somewhat
troubling to me, and at best, is something that we should experimentally
evaluate.

A second problem is that, under this representation, backward recall is not
much harder than forward recall. Since each chunk points only to the next
chunk, it takes longer to
retrieve the previous chunk, but backward recall within a chunk is just as
easy as forward recall within a chunk.

A third problem is that the data on alphabetic retrieval shows that the time
to retrieve the initial letter of each chunk is a function of that chunk's
position in the list. Several researchers have suggested that this means
that people serially retrieve all of the chunks that are located prior to
the chunk containing the probe letter. The contents of the prior chunks are
not checked explicitly, but somehow people know to stop at the correct
chunk. Nothing about the chunk-based representation enforces that kind of
processing. It is a simple matter to retrieve the chunk containing the probe
letter. There is no need to walk through the other chunks.

The ACT-R models for serial and list memory used positional encoding, such
as:

(alpha1 isa group list alphabet position first size 7)
(atok isa token parent alpha1 position first name a list alphabet)
(btok isa token parent alpha1 position second name b list alphabet)

The serial recall model retrieves the group, then the elements in each group
for forward recall. For backward recall the model retrieves each group from
the first to the to-be-recalled group, then retrieves the elements of that
group. This is consistent with Klahr's model of alphabetic retrieval.
However, I see no real need to do this, given the positional representation.
The model can easily retrieve the last group (or any other group) directly.
You might argue that the model would not know how long the list is, or how
many groups, but the subjects were aware of this information.

It is also unclear how to handle alphabetic retrieval given the positional
representation. If you want to know what letter comes after m, it seems most
obvious to simply recall the alphabet token containing m. This is consistent
with the ACT-R model of recognition memory. However, once this token is
retrieved, it should be a simple matter to retrieve either the next token,
or the previous token. In other words, there is no real need to
step through all the groups to get to a later group. Nor is there any need
to back up to the first element in a group. Suppose that we assume that the
probe plus the alphabet list cue provide too little activation to
successfully retrieve the correct token. Now it makes sense to step through
each group, so that we can use the group name, plus the alphabet list, plus
the probe as cues. So as each group is retrieved, the model would attempt to
retrieve a token that is in that group and contains the probe. However, once
we retrieve the correct token, there is still no need to then go back and
retrieve the first token in the list. In other words, we can simply go
directly to the next or previous letter.
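Under the assumption that probe-plus-list cues alone are too weak, the group-by-group strategy looks like the following sketch. The activation test is a stand-in: iterating the groups in order plays the role of serially retrieving them, and the data layout repeats my assumed two-group split; none of this is ACT-R's actual activation machinery:

```python
GROUPS = ["alpha1", "alpha2"]                     # groups in list order
TOKENS = {                                         # (group, position) -> letter
    ("alpha1", 1): "a", ("alpha1", 2): "b",
    ("alpha1", 3): "c", ("alpha1", 4): "d",
    ("alpha2", 1): "e", ("alpha2", 2): "f", ("alpha2", 3): "g",
}

def letter_after(probe):
    """Step through the groups in order, using each group name as an
    added cue; once the probe's token is retrieved, go directly to the
    next token (no backing up to the group's first element)."""
    for group in GROUPS:                           # serial retrieval of prior groups
        for (g, pos), letter in TOKENS.items():
            if g == group and letter == probe:     # cues: group + list + probe
                nxt = TOKENS.get((g, pos + 1))
                if nxt is None:                    # probe ends its group: successor
                    i = GROUPS.index(g) + 1        # is the next group's first token
                    nxt = TOKENS.get((GROUPS[i], 1)) if i < len(GROUPS) else None
                return nxt
    return None
```

Note that even here, only the stepping through groups is forced by the weak-cue assumption; the final jump from the probe's token to its neighbor is still direct, which is the puzzle.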

So it seems that neither representation directly requires the kind of access
that appears to be in use in human alphabetic retrieval. One alternative is
to use a purely associative representation of the alphabet list, but this
approach also has problems.

--- Todd

Todd R. Johnson                                 http://www.sahs.uth.tmc.edu/trjohnso
Associate Professor                             todd.r.johnson at uth.tmc.edu
UT-Houston, School of Allied Health Sciences    713-500-3921 (voice)
Dept. of Health Informatics                     713-500-3929 (fax)

"In the last 10 years we have come to realize humans are more like worms
than we ever imagined," Dr. Bruce Alberts, president of the National Academy
of Sciences.
