[ACT-R-users] Need capability to change chunk type when new slots are added to an existing chunk
Ball, Jerry T Civ USAF AFMC 711 HPW/RHAC
Jerry.Ball at mesa.afmc.af.mil
Mon Feb 9 11:33:50 EST 2009
ACT-R 6 currently provides a capability to add slots to an existing
chunk using P* productions. However, it does not provide a capability to
change the type of the chunk to which the slots are added. This is
problematic in that new slots added to an existing chunk cannot
be referred to in subsequent productions, since the chunk type is
inconsistent with the existence of the new slots.
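To make the problem concrete, here is a rough sketch (the chunk types,
slot names, and productions below are invented for the illustration and
are not those of our model):

    (chunk-type intrans-verb-cxn verb subj)
    (chunk-type slot-spec slot-name slot-value)   ; made-up helper type

    ;; A p* production can add a slot to the chunk in the goal buffer;
    ;; here the name of the slot to add arrives as a variable binding.
    (p* add-new-slot
       =goal>
          isa         intrans-verb-cxn
          verb        =verb
       =retrieval>
          isa         slot-spec
          slot-name   =new-slot
          slot-value  =value
       ==>
       =goal>
          =new-slot   =value)

    ;; But a subsequent production like the following cannot be defined,
    ;; because OBJECT is not a slot of the intrans-verb-cxn chunk type,
    ;; even though the chunk itself now carries that slot:
    ;;
    ;; (p use-new-slot
    ;;    =goal>
    ;;       isa     intrans-verb-cxn
    ;;       object  =obj
    ;;    ==> ... )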
To see where this causes problems in a model of language processing,
consider the case of verbs like "sneeze" which typically occur in
sentences like "she sneezed" where they project intransitive verb
constructions. Such verbs occasionally occur in larger "caused-motion"
constructions as in "she sneezed the napkin off the table" (this is an
important area of research in Construction Grammar). If the processing
of the verb "sneezed" leads to projection of an intransitive verb
construction, then when "the napkin" and "off the table" are processed,
there will be no slots in the intransitive verb construction into which
these constituents can be integrated. Using P* productions, slots can be
created for integrating "the napkin" and "off the table" into the
intransitive verb construction. However, in subsequent processing, these
new slots cannot be referenced since they aren't part of the chunk type
definition for intransitive verb construction chunks. Currently, the
only way I know of to handle this in ACT-R 6 is, when "the napkin" and
"off the table" are processed, to create a new chunk type
"caused-motion" which contains the extra slots, copy all the slot values
from the intransitive verb construction to the new caused-motion
construction, and integrate "the napkin" and "off the table"
into the new chunk. It would be computationally simpler to just add the
slots to the intransitive verb construction and change the chunk type to
reflect the additions, i.e. to caused-motion in this example. Besides
being computationally simpler, there are other reasons for preferring
this approach. If the intransitive verb construction is integrated into
a larger linguistic unit, then when the new caused-motion construction
is created, all references to the intransitive verb construction must be
tracked down and replaced. For example, in "she sat down and then she
sneezed the napkin off the table", if the intransitive verb construction
projected by "sneezed" is integrated as an argument of the conjunction
"and then" prior to the processing of "the napkin" and "off the table"
(as in the current model), then it will be necessary to replace the
reference to the intransitive verb construction with a reference to
the caused-motion construction. In general, this may require significant
processing effort.
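For concreteness, the copy-over workaround looks roughly like this
(again, the chunk types, slot names, and the noun-phrase type are
simplified inventions for the illustration, not the model's actual
definitions):

    (chunk-type intrans-verb-cxn verb subj)
    (chunk-type caused-motion-cxn verb subj object path)

    ;; When "the napkin" is processed, request a brand new caused-motion
    ;; chunk and copy the slot values over from the intransitive verb
    ;; construction by hand:
    (p recast-as-caused-motion
       =goal>
          isa     intrans-verb-cxn
          verb    =verb
          subj    =subj
       =retrieval>
          isa     noun-phrase
       ==>
       +goal>
          isa     caused-motion-cxn
          verb    =verb
          subj    =subj
          object  =retrieval)

    ;; "off the table" can later fill the PATH slot, but any chunk that
    ;; already holds the old intrans-verb-cxn chunk in one of its slots
    ;; (e.g. an argument slot of "and then") still points at the old
    ;; chunk and has to be updated by further productions.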
Although important in Construction Grammar, the use of an intransitive
verb like "sneezed" in the caused-motion construction is fairly
uncommon. However, the need for a capability to dynamically add slots to
and adjust the chunk type of a chunk is actually quite pervasive in
language processing. Consider, for example, ditransitive verbs like
"give". There are two distinct ways in which ditransitive verbs are
used, with an indirect object as in "he gave me the book" and with a
prepositional phrase argument as in "he gave the book to me". At the
time the verb is processed, it is not yet determined which form will
actually occur. Currently, in our language model, a verb like "gave"
projects a ditransitive construction which contains slots for both an
indirect object and the "to" prepositional phrase. This allows the model
to handle both forms without having to create new chunks on the fly for
each case. With an ability to dynamically modify chunk types after
adding needed slots, the model could project the more likely chunk type
given the context and frequency of use (i.e. either indirect object or
"to" prepositional phrase") and still handle the alternative form when
it occurs, rather than having to provide slots for both possibilities in
a single construction. To support subsequent processing, we would like
the chunk type to accurately reflect the chunk. To the degree that this
is not the case, problems are likely to result.
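In chunk-type terms the contrast is roughly the following (slot names
are illustrative only):

    ;; What the model does now: a single construction type with slots
    ;; for both realizations, so neither form requires a new chunk.
    (chunk-type ditrans-cxn verb subj obj indirect-obj to-pp)

    ;; What the requested capability would allow: project only the more
    ;; likely type for the context, e.g.
    (chunk-type ditrans-io-cxn verb subj obj indirect-obj)  ; "he gave me the book"
    (chunk-type ditrans-to-cxn verb subj obj to-pp)         ; "he gave the book to me"
    ;; ...and, if the other form shows up, add the missing slot and
    ;; re-type the existing chunk rather than build a new one and copy
    ;; its slot values over.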
Verb-particle compounds, which are ubiquitous in English, provide another
example of the need for this capability. If the input is "he looked...",
what construction should the verb "looked" project? According to
Longman's dictionary, the most common use of "look" is in combination
with "at" as in "he looked at the book", but "look" also occurs in
combination with an adjective as in "he looked happy" and with a range
of different particles as in "look up", "look over", and "look for", where
the meaning of the expression depends on the combination of "look" and
the preposition--indicating storage of these verb-particle combinations
as a unit in the mental lexicon. Besides the need to store verb-particle
combinations to determine meaning, the argument expectations vary with
each combination. Thus, we can say "he looked up" or "he looked it up"
or "he looked the name up" or "he looked up the name" (and in spoken
language "look it!" is becoming more and more common) but not "he looked
it at" or "he looked the book at". To handle all these possibilities,
some kind of accommodation mechanism is needed which does not involve
backtracking and is unlikely to be a repair mechanism--given the ease
with which humans process such variability. When the verb "looked" is
processed, the model should project the construction that is the
best candidate given the current context and prior history of use, but
the model must be prepared to accommodate the subsequent input. Humans
appear to be very good at handling this kind of variability, typically
being unaware that there are multiple possibilities at each choice
point. The ability to add slots to existing chunks using P* productions,
combined with a capability to dynamically adjust the chunk type of the
resulting chunk, appears to provide just the needed capability. The
alternative of having to project a new chunk type, copy over slot
values, and adjust preceding references requires more computation than is
likely to be consistent with human language processing of such inputs.
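Purely as an illustration of the capability I am asking for, something
along the following lines (this is NOT valid ACT-R 6 syntax today; the
in-place re-typing action and all names are hypothetical):

    ;; Hypothetical: a p* that both adds the slot needed for the
    ;; particle and re-types the chunk in place, so that later
    ;; productions can test the new slot and existing references to the
    ;; chunk remain valid.
    (p* accommodate-particle
       =goal>
          isa       verb-cxn
          verb      =verb
       =retrieval>
          isa       particle
       ==>
       =goal>
          isa       verb-particle-cxn   ; hypothetical in-place re-typing
          particle  =retrieval)         ; new slot added to the same chunk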
More generally, such a mechanism would give ACT-R a functionally
motivated capability to create new chunk types, providing a learning
mechanism that appears to be needed to support the learning of new
construction types during language acquisition. If children do not come
with a full construction ontology built in, then some mechanism for
extending previously learned constructions to novel constructions is
needed. Allowing for the addition of slots to existing chunks, combined
with a capability to modify the chunk type--when the linguistic input
warrants it--perhaps creating a new chunk type in the process, would
provide just such a mechanism.
Jerry
Jerry T. Ball
Senior Research Psychologist
Human Effectiveness Directorate
711th Human Performance Wing
US Air Force Research Laboratory
6030 South Kent Street, Mesa, AZ 85212
PH: 480-988-6561 ext 678; DSN: 474-6678
Jerry.Ball at mesa.afmc.af.mil
www.DoubleRTheory.com