representation of input text in a conversation
Richard L. Lewis
rick at cis.ohio-state.edu
Thu Mar 25 11:28:34 EST 1999
> The reason I did it this way was that I wanted to be able to look for
> consecutive word combinations. For example, to determine whether it is a
> question, you might want to look for a verb followed by a pronoun, like
> "do you" rather than "you do." With a chain of chunks, you can do this
> with one production. With the ordinal position, I guess you'd have to go
> through the whole list? But perhaps this is best, psychologically. Any
> comments?
In a sentence processing model I'm developing, I have adopted a positional
encoding scheme as in the list STM model. But to handle many basic
word-order issues such as "do you" vs. "you do", all that is needed is the
ability to distinguish the current word from previous words. It is amazing
how much parsing one can do with just this distinction (assuming incremental
parsing). I assume that the current word being read sits in a distinguished
goal focus slot; this alone is enough to distinguish "do you" from "you do".
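Here is a minimal sketch of the idea in plain Python (not ACT-R production
syntax): the current word occupies a distinguished focus slot, and a single
rule comparing the focus word with the most recently stored word separates
"do you" from "you do". The names and the toy lexicon are all invented for
illustration.

from dataclasses import dataclass

POS = {"do": "aux-verb", "you": "pronoun"}   # toy part-of-speech lexicon

@dataclass
class WordChunk:
    word: str
    pos: str
    position: int        # positional tag, as in the list STM model

def question_order(focus, store):
    # The single "production": fires when the current (focus) word is a
    # pronoun and the immediately preceding word was an auxiliary verb,
    # i.e. "do you" but not "you do".
    return bool(store) and store[-1].pos == "aux-verb" and focus.pos == "pronoun"

def read_incrementally(words):
    store = []
    for i, w in enumerate(words):
        focus = WordChunk(w, POS[w], position=i)   # current word in the focus slot
        if question_order(focus, store):
            print("question word order at position", i)
        store.append(focus)                        # focus moves into memory

read_incrementally(["do", "you"])   # fires
read_incrementally(["you", "do"])   # does not fire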
To be sure that the model retrieves the most recent word/constituent in
those cases where it matters, the production just matches the 'position' tag
to the current position. If the similarities are set up correctly (as in
the serial recall model), partial matching will penalize more distant
attachments more heavily, ensuring retrieval of the more recent item.
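As a rough illustration, again in plain Python: graded position
similarities make partial matching favor the chunk whose position tag is
closest to the cue. The exponential similarity function and the penalty
scheme here are assumptions chosen for concreteness, not claims about the
actual model's parameters.

import math
from collections import namedtuple

Chunk = namedtuple("Chunk", ["word", "position"])

def position_similarity(p1, p2):
    # Graded similarity between position tags: nearer positions are more
    # similar. Exponential decay is one plausible choice.
    return math.exp(-abs(p1 - p2))

def retrieve(store, cue_position, mismatch_penalty=1.0):
    # Partial matching: each candidate loses activation in proportion to
    # how dissimilar its position tag is to the cued position, so the
    # chunk nearest the cue wins the retrieval.
    def match_score(chunk):
        return -mismatch_penalty * (1.0 - position_similarity(chunk.position, cue_position))
    return max(store, key=match_score)

store = [Chunk("the", 0), Chunk("horse", 1), Chunk("raced", 2)]
print(retrieve(store, cue_position=3).word)   # "raced": the most recent chunk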
(There are some interesting cases that arise cross-linguistically where the
correct attachment is in fact the more distant one; in those cases, you
match on "first-position" instead.)
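Continuing the sketch above, those cases fall out of the same mechanism:
cueing the retrieval with the first position rather than the current one
retrieves the most distant chunk instead.

print(retrieve(store, cue_position=0).word)   # "the": the first, most distant chunk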
-Rick L.
-----------------------------
Richard L. Lewis
Assistant Professor
Department of Computer and Information Science
Ohio State University
2015 Neil Avenue
Columbus, OH 43210
rick at cis.ohio-state.edu
http://www.cis.ohio-state.edu/~rick
614-292-2958 (voice) 614-292-2911 (fax)