From rijn at swi.psy.uva.nl Mon Nov 2 11:04:36 1998
From: rijn at swi.psy.uva.nl (Hedderik van Rijn)
Date: Mon, 2 Nov 1998 17:04:36 +0100 (MET)
Subject: ActR Emacs mode
Message-ID:

I have written an Emacs mode to facilitate the editing of ACT-R code in Emacs. Some of the features of this mode are:

 - syntax highlighting
 - an identifier menu to ease navigation in the source files
 - "automagic" indentation of production rules
 - outline mode
 - pretty printing of production rules

This mode has been tested with GNU Emacs 19.34 and 20.2/3. More information can be found at:

  http://www.swi.psy.uva.nl/usr/rijn/actr/actr-mode.html

If you find any bugs, or have suggestions for improvement, please let me know.

 - Hedderik.

--
http://www.swi.psy.uva.nl/usr/rijn


From jimmyd at cc.gatech.edu Mon Nov 9 17:38:22 1998
From: jimmyd at cc.gatech.edu (Jim Davies)
Date: Mon, 9 Nov 1998 17:38:22 -0500 (EST)
Subject: representation of input text in a conversation
Message-ID:

ACT-R users:

I am trying to make a model of an orangutan that can use sign language. The eventual goal is to have a real-time orangutan simulation that people can talk to. The model will respond like the orangutan. Hopefully the system will simulate simple conversations.

So what I need is a way to input the user's text. There will be a separate, non-ACT speech recognition module that will be feeding the ACT-R model text. So the input must be in the form of a chunk, maybe a goal:

  (goal-one word1 how word2 are word3 you answer nil)

and then the goal would pop when an answer is found. This is similar to the addition examples:

  (goal-two arg1 one arg2 eight operator plus answer nil)

This is not satisfying for a few reasons. One is that the number of words in the input sentence is variable, and it's inelegant to have a "word1, word2, ..., wordN" for the number of possible words in the sentence. Also, for whatever N might be, the system would not take a sentence with N+1 words. Also, there would usually be lots of nils in word slots.

So I thought about it a bit and came up with the idea of a linked list to represent the sentence, like the number line is represented in some ACT models. Following is a representation of the utterance "Who are you":

  (goal-three sentence s1 response nil)
  (s1 word who last nil next s2)
  (s2 word are last s1 next s3)
  (s3 word you last s2 next nil)

Then the idea is that a goal would be set to determine whether the utterance is a statement, question, or command, and push subgoals from there.

I welcome any thoughts about this representation, both psychological and technical. For more information about this project, see:

  http://www.cc.gatech.edu/gvu/perception/projects/primatech/

Jim Davies
jimmyd at cc.gatech.edu
home: 404-223-1366
http://www.cc.gatech.edu/~jimmyd/


From ja+ at CMU.EDU Mon Nov 9 17:49:28 1998
From: ja+ at CMU.EDU (John Anderson)
Date: Mon, 9 Nov 1998 17:49:28 -0500 (EST)
Subject: representation of input text in a conversation
Message-ID:

These are two of the standard solutions. The third is:

  (goal-three sentence s1 response nil)
  (a1 word who parent s1 position first)
  (a2 word are parent s1 position second)
  (a3 word you parent s1 position third)

I would argue that the data on serial memory are most consistent with this.
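[For illustration, a production that walks this positional representation one word at a time might look roughly like the sketch below. The chunk types and slot names (comprehend-sentence, word-token, position-fact) are invented for this example, with explicit isa types added; it is not from any published model.]

(p read-next-word
   =goal>
      isa      comprehend-sentence
      sentence =s
      position =pos
   =token>
      isa      word-token     ; e.g. (a2 word are parent s1 position second)
      parent   =s
      position =pos
      word     =w
   =order>
      isa      position-fact  ; ordering knowledge, e.g. first -> second
      this     =pos
      next     =npos
==>
   ; advance the goal to the next serial position; the matched word =w
   ; is now available for further processing (e.g. interpretation)
   =goal>
      position =npos
)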
From niels at tcw2.ppsw.rug.nl Tue Nov 10 04:06:24 1998
From: niels at tcw2.ppsw.rug.nl (Niels Taatgen)
Date: Tue, 10 Nov 1998 10:06:24 +0100
Subject: representation of input text in a conversation
Message-ID:

Another question you must ask is whether representing the whole sentence is the right way to model language processing. An alternative way to process a sentence is to read in the words one by one, and gradually build an interpretation. If I remember correctly, this is the way a Soar model (NL-Soar) did this. So you might end up with a goal representation like:

  (goal-five word how interpretation nil ... [probably some more slots])

Once you have some interpretation for "how" (don't ask me how), you can read in the next word:

  (goal-five word are interpretation interpretation-of-how)

And so on. Once the whole sentence has been read, the interpretation slot contains a representation of the whole sentence. If interpretation of the sentence somehow goes wrong, because you chose the wrong interpretation for an ambiguous word, you can always read the sentence again. Since the orangutan probably cannot read, you'll have to repeat the sentence, or he'll not understand you.

--
-------------------------------------------------------------
Niels Taatgen
Technische Cognitiewetenschap/Cognitive science & engineering
Grote Kruisstraat 2/1, 9712 TS Groningen, Netherlands
050-3636435 / +31503636435
niels at tcw2.ppsw.rug.nl
http://tcw2.ppsw.rug.nl/~niels
-------------------------------------------------------------


From Wolfgang.Schoppek at uni-bayreuth.de Tue Nov 10 08:14:59 1998
From: Wolfgang.Schoppek at uni-bayreuth.de (Wolfgang Schoppek)
Date: Tue, 10 Nov 1998 14:14:59 +0100
Subject: representation of input text in a conversation
Message-ID:

The question of how to represent complex stimuli occurs in many domains. The two extreme solutions are:

a) there is a chunk representing the stimulus, with slots representing the dimensions of the stimulus;

b) there is a chunk representing the stimulus and other chunks representing values on the dimensions, each with a slot that points to the stimulus chunk (John Anderson's solution).

I have been convinced by the 1998 List Memory paper that solution b) is the best for that domain. But what about the encoding of stimuli in categorization experiments? Should they also be represented according to solution b)? What criteria should be applied for the decision between several possible representations?

-- Wolfgang

--------------------------------------------------------------------
Dr. Wolfgang Schoppek <<< Tel.: +49 921 555003 <<<
Lehrstuhl fuer Psychologie, Universitaet Bayreuth, 95440 Bayreuth
http://www.uni-bayreuth.de/departments/psychologie/wolfgang.htm
--------------------------------------------------------------------
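[To make the two solutions concrete, here is one possible encoding of a stimulus that is red and square under each scheme; all chunk and slot names are invented for illustration.]

Solution a) -- one chunk, with the dimensions as slots:

  (stimulus7 isa stimulus color red shape square)

Solution b) -- separate chunks for the values on each dimension, each pointing back to the stimulus chunk:

  (stimulus7 isa stimulus)
  (f1 isa feature dimension color value red parent stimulus7)
  (f2 isa feature dimension shape value square parent stimulus7)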
From Bonnie_John at cs.cmu.edu Tue Nov 10 08:56:16 1998
From: Bonnie_John at cs.cmu.edu (Bonnie John)
Date: Tue, 10 Nov 1998 08:56:16 -0500
Subject: representation of input text in a conversation
Message-ID:

At 4:06 AM 11/10/98, Niels Taatgen wrote:
>...An alternative way to process a
>sentence is to read in the words one by one, and gradually build an
>interpretation. If I remember correctly, this is the way a Soar model
>(NL-Soar) did this....

Yes, that's how NL-Soar works. It gives you a good match to human sentence processing of things like garden-path sentences (the ones where they take a sudden unexpected turn and you say "huh?" and then reinterpret, like "The robber ran to the bank and jumped in the river"). It also predicts which sentences will be hard to parse in several languages.

_If_ you need that much of an NL processor, you might consider using NL-Soar as a guide for an ACT-R model. Rick Lewis at Ohio State did NL-Soar, and I believe there are several papers about it in the linguistics literature, as well as his thesis and subsequent work.

Bonnie


From cl+ at andrew.cmu.edu Tue Nov 10 11:04:59 1998
From: cl+ at andrew.cmu.edu (Christian J Lebiere)
Date: Tue, 10 Nov 1998 11:04:59 -0500 (EST)
Subject: representation of input text in a conversation
Message-ID:

If we take the theory seriously, John's solution (the sentence as indexed list) and Niels' solution (the sentence as structured interpretation) are not exclusive but complementary representations. Presumably, since words are heard one at a time, the interface (e.g. ACT-R/PM) will record their occurrence in chunks like:

  (utterance23 isa sound type word time 123456 content "how")

This is analogous to the list representation, and the time stamp can be seen as a rough approximation of the indices (first, second, etc.). However, since those words are heard with the goal of understanding the sentence, a more abstract representation is also constructed as the sentence is heard and becomes a declarative memory chunk when the goal is popped:

  (sentence5 isa sentence type inquiry meaning proposition13)

or however the internal meaning of a sentence is represented. This sentence does not necessarily hold all (or even any) of the words, but they can be retrieved independently (though not necessarily perfectly, of course) from their individual encodings. Those representations are variants of Wolfgang's solutions (b) and (a) respectively, with some differences (e.g. not necessarily all dimensions represented in (a), and no pointer to the stimulus chunk in (b)). Generally, this is a good example of the origin and structure of chunks as discussed on pp. 23-24 of the ACT book (www.erlbaum.com).


From bruno_emond at UQAH.UQuebec.CA Tue Nov 10 12:39:15 1998
From: bruno_emond at UQAH.UQuebec.CA (Bruno Emond)
Date: Tue, 10 Nov 1998 12:39:15 -0500
Subject: representation of input text in a conversation
Message-ID:

I have been working on modelling natural language processing with ACT-R for some time now. I am working on a paper (long overdue, thanks...) that would present some of the ideas included here. I am not familiar with NL-Soar, but the question of parsing and semantic interpretation seems to go along the lines where a chunk holds various types of information, such as the string time position (thanks to Christian, who sent his e-mail while I am writing this one...), the syntactic category, and the semantic interpretation. The following reflections are probably going to be more relevant to human than to orangutan language processing (sorry Jim, although it might be helpful).

I work within the framework of categorical grammar, which classifies syntactic categories as basic or functional. For example, an intransitive verb would be of the type NP\S, which is a function that takes an adjacent NP to its left to produce an S. Basically, in a simple categorical grammar there are only two grammar rules (left and right application). These rules combine adjacent syntactic categories. The interest in categorical grammar is that there is a direct mapping between the syntactic and semantic functions, based on lambda conversion.
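[A concrete, invented mini-derivation of this mapping: "John" is of basic type NP with meaning john; "runs" is of functional type NP\S with meaning the function lambda x. runs(x). Left application combines the adjacent categories NP + NP\S into S, and the sentence meaning falls out by lambda conversion, applying the function to the argument: (lambda x. runs(x))(john) = runs(john).]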
A possible implementation of a left application rule would have the following representation, where begin-string and end-string are integers representing the temporal position of a string; category, argument, and result are syntactic labels; meaning holds the meaning of the parse chunk; and lambda-operator indicates which slot should be filled with the semantic argument.

(p left-application.slot1
   =parse-chunk-under-focus>
      isa             parse-chunk
      begin-string    =middle
      end-string      =end
      operator        left
      argument        =argument
      result          =result
      meaning         =f-expression
      lambda-operator slot1
   =adjacent-parse-chunk>
      isa             parse-chunk
      begin-string    =begin
      end-string      (!eval! (- =middle 1))
      category        =argument
      meaning         =a-expression
==>
   =f-expression>
      isa             semantic-expression
      slot1           =a-expression
   =new-parse-chunk>
      isa             parse-chunk
      begin-string    =begin
      end-string      =end
      category        =result
      meaning         =f-expression
   !focus-on!         =new-parse-chunk
)

There are multiple possible versions of this type of production. This version has a relatively shallow goal structure, because neither the semantic interpretation nor a new goal to create the new parse chunk is pushed. This question of shallow goal structure seems to me to be an important issue in modelling natural language processing, because the whole process has to be quick. In the case of the left application rule, note that there has to be another production called to expand the value of result in case it is a functional category. All this takes time. One could think of a two-chunk retrieval condition, but it would fail in the case of a non-functional category.

Another problem is to limit as much as possible productions that fail, which also increases the overall processing time. For example, there is no empirical evidence, to my knowledge, that it takes more time to parse a sentence if its subject is a proper noun rather than a complete NP. It seems reasonable on linguistic and performance grounds to distinguish these two basic categories, as well as to consider the proper noun a subtype of noun phrase. It seems also reasonable to assume that there is only one category for an intransitive verb (takes an NP as a left-adjacent category to produce a sentence), although one could assume two lexical entries, the retrieval of which would depend on the actual syntactic category just processed.
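[To tie this together, here is how the left-application production above might apply to "John runs"; all chunk names and slot values below are invented for illustration.]

  (john-parse isa parse-chunk begin-string 1 end-string 1
              category np meaning john-sem)
  (runs-parse isa parse-chunk begin-string 2 end-string 2
              operator left argument np result s
              meaning runs-sem lambda-operator slot1)

With the focus on runs-parse, the production matches john-parse as the left-adjacent argument (its end-string is 2 - 1 = 1), fills slot1 of runs-sem with john-sem, and creates a new parse chunk spanning positions 1 to 2 with category s and meaning runs-sem.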
Bruno.

***********************************************
Bruno Emond, Ph.D.
bruno_emond at uqah.uquebec.ca
Tel.: (819) 595-3900 1-4408
Télécopieur: (819) 595-4459
- Département des Sciences de l'Éducation, Professeur Agrégé,
  Université du Québec à Hull.
  Case Postale 1250, succursale "B", Hull (Québec), Canada. J8X 3X7
- Institute of Interdisciplinary Studies: Cognitive Science,
  Adjunct Research Professor, Carleton University
***********************************************


From ritter at psychology.nottingham.ac.uk Wed Nov 18 12:52:31 1998
From: ritter at psychology.nottingham.ac.uk (ritter at psychology.nottingham.ac.uk)
Date: Wed, 18 Nov 1998 17:52:31 GMT
Subject: job going in cogsci/ai at Nottingham
Message-ID:

Further details by phone or email if you want 'em, or from Prof. Underwood.

Cheers,
Frank

>From a memo to me:

Applications are invited for a lecturer with research and teaching interests in any of the core areas of cog sci or AI. Salary in the range 16.6k to 29k pounds per annum, depending on qualifications and experience.

Informal enquiries to Prof. G. Underwood, +44 115 951 5313, email: geoff.underwood at nottingham.ac.uk

Further details and application forms from the Personnel Office, Highfield House, U. of Nottingham, Nottingham NG7 2RD, +44 (0)115 951 5927, email: julie.denham at nottingham.ac.uk. Please quote ref. LEG/430. Closing date: 4 December 1998.