representation of input text in a conversation

Bruno Emond bruno_emond at UQAH.UQuebec.CA
Thu Mar 25 12:21:41 EST 1999


Hi Jim.  Here are some short comments.

>What are the advantages?
>1) Then you can only focus on one goal at a time, so the other chunks
>involved in the sentence are just in memory like any other. To distinguish
>them from words in other sentences, they have an identifier. You can think
>of this as a timestamp of some sort like Christian suggested in his
>ACT-R/PM  email. This is also how Anderson did it with his "parent" slot.
>
>This is an improvement from my original suggestion, in which the chain of
>words didn't know what sentence they belonged to.  The model needs to know
>this, though, so it doesn't confuse the focused sentence with others in
>memory.
>

I think there is some confusion here about the role of the sentence slot.
The sentence slot is an indication of the group to which the word belongs;
I do not think it plays the role of a timestamp.  Your timestamp is
implicit in your next-word slot.
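
To make this concrete, the chain representation under discussion might look
roughly like the following in declarative memory (the chunk-type, chunk,
and value names here are only illustrative, not taken from your model):

  (chunk-type word meaning sentence next-word)

  (add-dm
    ;; each word chunk names its group (sentence slot)
    ;; and the word that follows it (next-word slot)
    (do-1  isa word  meaning do   sentence sentence1  next-word you-1)
    (you-1 isa word  meaning you  sentence sentence1  next-word go-1)
    (go-1  isa word  meaning go   sentence sentence1  next-word nil))

Here the sentence slot only marks group membership, while the order of the
words is carried entirely by the next-word chain.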

>2) The next-word slot points to a chunk name, rather than
>an ordinal position,  as in Anderson's suggestion:
>The reason I did it this way was because I wanted to be able to look for
>consecutive word combinations. For example, to determine if it were a
>question, you
>might want to look for a verb followed by a noun, like "do you" rather
>than "you do." With a chain of chunks, you can do this with one
>production.  With the ordinal position, I guess you'd have to go through
>the whole list? But perhaps this is best, psychologically. Any comments?
>

It is not clear whether your model assumes that the representation of words
in sentences is limited to a linear organisation.  If that is the case,
your framework might be fine.  However, if you want to model parsing, which
one can describe as a process of word grouping among other things, then you
might run into expensive computations to represent explicitly not only
the sequence of words but also the sequence of groups of words.  In my
models of parsing I use an integer value representing the timestamp.  As
Richard L. Lewis mentions, the focus of a production is always the current
word being processed, and a retrieval of the adjacent word can be obtained
either by computing the previous timestamp (!eval! (- Current-time 1)) or by
using similarities (Richard's proposal).
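
For comparison, here is a rough sketch of the timestamp representation
(again, the chunk-type, slot, and value names are only illustrative):

  (chunk-type word meaning sentence time)

  (add-dm
    ;; the integer time slot encodes the position of the word
    (do-1  isa word  meaning do   sentence sentence1  time 1)
    (you-1 isa word  meaning you  sentence sentence1  time 2)
    (go-1  isa word  meaning go   sentence sentence1  time 3))

  ;; with the focus on the word at time =t, the adjacent word is the
  ;; chunk whose time slot matches (!eval! (- =t 1)), or the chunk
  ;; most similar to it under partial matching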

Bruno.


***********************************************
Bruno Emond, Ph.D.
bruno_emond at uqah.uquebec.ca
Tel.:(819) 595-3900  1-4408
Fax: (819) 595-4459
- Département des Sciences de l'Éducation,
  Associate Professor
  Université du Québec à Hull.
  Case Postale 1250 succursale "B",
  Hull (Qu=E9bec), Canada. J8X 3X7
- Institute of Interdisciplinary Studies:
  Cognitive Science.
  Adjunct Research Professor
  Carleton University
***********************************************




