From db30+ at andrew.cmu.edu Tue Mar 2 13:07:00 1999
From: db30+ at andrew.cmu.edu (Daniel J Bothell)
Date: Tue, 2 Mar 1999 13:07:00 -0500 (EST)
Subject: ACT-R in ACL
Message-ID:

Due to some recent concerns about the time required to run models using ACL 5.0 in Windows, I ran some tests to determine what was causing the problem. The details of the tests can be found on our website at: http://act.psy.cmu.edu/ACT/ftp/models/compare/compare.html . Basically, I found that the default value of the ACT-R parameter *compile-eval-calls* should be nil instead of t. A new version of ACT-R with this and a couple of other changes to improve performance under ACL is now available from our web site at: http://act.psy.cmu.edu/ACT/ftp/release/ACT-R_4.0/index.html . A new version of the Environment for ACL will also be made, but there are other changes going into it, and it may be a month or so before it is ready.

In general, I found that in ACL the modeler has to be a little more concerned about how the model is run. The two largest factors are the value of *compile-eval-calls* and whether the model is compiled before running. I can offer my observations as to how to set things for best performance based on the models tested.

As far as *compile-eval-calls* goes, the (new) default value of nil should work best for most models. The major exception depends on how often your model calls reset and reload. If your model runs for a lot of cycles (what actually constitutes a lot depends on the model, but roughly 1000 or more) without being reset or reloaded, then you may benefit from setting *compile-eval-calls* to t. Since clearall resets the default value, to change it you have to add (setf *compile-eval-calls* t) after the clearall call in the model, as sketched below.

As for when to compile, most models should run fine without compiling. However, if a model has a lot of Lisp code called from the model (!eval! calls to functions in the productions) or a lot of setup code for each run (and it is run many times), then the benefit of compiling might be significant.

One other thing to note is that ACL seems to have problems with large amounts of output to the Debug window and with virtual memory thrashing after running for a long time. During the tests, ACL signaled an error when the text in the Debug window grew to over 5 megabytes. Also, after loading and running several models many times, virtual memory would thrash, increasing the run time for some models to well over an hour. Changing the ACL default settings for garbage collection may very well fix those problems, but I did not look into doing so.

If you have any questions or comments about the tests, or suggestions for increasing the performance of running models, please let me know.

Dan
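A minimal sketch of that ordering, for a model that runs many cycles between resets. Everything else in the model file is elided, and the comments are illustrative rather than taken from the release notes:

    (clearall)                      ; clearall restores the default value (nil),
    (setf *compile-eval-calls* t)   ; so the override must come after clearall
    ;; ... chunk-type, add-dm, and production definitions follow as usual ...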
From jimmyd at cc.gatech.edu Mon Mar 8 22:49:11 1999
From: jimmyd at cc.gatech.edu (Jim Davies)
Date: Mon, 8 Mar 1999 22:49:11 -0500 (EST)
Subject: random production firing
Message-ID:

I have 2 productions that I want to fire at random. That is, approximately 50% of the time p1 fires, and 50% of the time p2 fires. I have been unable to make this happen: if one of them has a higher P value, it always fires. If I make the P values the same, then it always chooses the one that comes first in the file. Is there any way to get this to happen? I've tried looking for P value noise, but haven't been able to find it.

JimDavies jimmyd at cc.gatech.edu home: 404-223-1366 http://www.cc.gatech.edu/~jimmyd/

From niels at tcw2.ppsw.rug.nl Tue Mar 9 03:29:44 1999
From: niels at tcw2.ppsw.rug.nl (Niels Taatgen)
Date: Tue, 09 Mar 1999 09:29:44 +0100
Subject: random production firing
Message-ID:

Jim Davies wrote:
> I have 2 productions that I want to fire at random. That is, approximately
> 50% of the time p1 fires, and 50% of the time p2 fires.

1. Make sure rational analysis is on:

(sgp :era t) ; put this somewhere at the start of your model

2. There is no noise on P itself, but there is on the total expected gain. Just give it some value:

(sgp :era t :egs 0.05) ; put this somewhere at the start of your model

This will give you a 50% probability of each rule firing, if both productions have the same expected gain (which they do if you do not change their parameters or turn on parameter learning).

--
-------------------------------------------------------------
Niels Taatgen
Technische Cognitiewetenschap/Cognitive science & engineering
Grote Kruisstraat 2/1, 9712 TS Groningen, Netherlands
050-3636435 / +31503636435
niels at tcw2.ppsw.rug.nl
http://tcw2.ppsw.rug.nl/~niels
-------------------------------------------------------------

From wallachd at ubaclu.unibas.ch Tue Mar 9 04:13:22 1999
From: wallachd at ubaclu.unibas.ch (Dieter Wallach)
Date: Tue, 09 Mar 1999 10:13:22 +0100
Subject: random production firing
Message-ID:

> Jim Davies wrote:
>
> > I have 2 productions that I want to fire at random. That is, approximately
> > 50% of the time p1 fires, and 50% of the time p2 fires.

Another possibility is to set the flag :er to T as specified in the manual, i.e.:

Enable Randomness
keyword: ER   default value: NIL
If enabled, ties among instantiations are broken randomly, rather than according to some unspecified but deterministic order.

_______________________________________________
 Dr. Dieter Wallach
 Institut fuer Psychologie
 Universitaet Basel
 Bernoullistr. 16
 CH-4056 Basel
_______________________________________________
 WWW: www.unibas.ch/psycho/kognitiv/wallach.htm
 Email: wallachd at ubaclu.unibas.ch
 Tel: ++41 - 61 267 3525
 Fax: ++41 - 61 267 3526
_______________________________________________
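Pulling the two suggestions together, here is a minimal runnable sketch. The chunk-type, chunk, and production names are made up for illustration, and the :egs value is just the example from Niels' message:

    (clearall)
    (sgp :era t :egs 0.05)   ; rational analysis on, noise on expected gain;
                             ; (sgp :er t) alone also breaks exact ties randomly

    (chunk-type choice state)
    (add-dm (g1 isa choice state start))
    (goal-focus g1)

    (p p1
       =goal>
          isa choice
          state start
    ==>
       =goal>
          state done-one)

    (p p2
       =goal>
          isa choice
          state start
    ==>
       =goal>
          state done-two)

With identical production parameters, p1 and p2 should each win the conflict resolution on about half of the runs.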
From cl at andrew.cmu.edu Tue Mar 9 09:32:36 1999
From: cl at andrew.cmu.edu (Christian Lebiere)
Date: Tue, 09 Mar 1999 09:32:36 -0500
Subject: random production firing
Message-ID:

> I have 2 productions that I want to fire at random. That is, approximately
> 50% of the time p1 fires, and 50% of the time p2 fires.

An alternative to Niels' solution is to turn on Enable Randomness, e.g.:

(sgp :er t)

Instantiations of equal value are then ordered randomly in the conflict set. The advantage is that this works whether or not Rational Analysis is on. The disadvantage is that it only works if the production evaluations are exactly equal.

Christian

From tkelley at hel4.arl.mil Tue Mar 9 15:17:26 1999
From: tkelley at hel4.arl.mil (Troy Kelley)
Date: Tue, 09 Mar 1999 14:17:26 -0600
Subject: random production firing - another problem
Message-ID:

Group,

While we are discussing random production firing, I have a similar but slightly different problem with a model I am working on. I figured out how to get productions (strategy selection) to fire randomly, but also within a normal distribution, by manipulating the Eventual-Efforts parameter. The higher the Eventual-Efforts parameter, the less often the production will fire.

Now the problem I am having is that when I incorporate this logic into a bigger model, it has the tendency to screw up all my other productions, especially the ones which need to fire first in order to set things in motion. I attempted to set the "Successes" of these early productions, which I would like to fire first, and I have had some limited success, but I am not completely happy with this solution. I also tried turning the global variables which control randomness on and off at different points within productions (within the code), but this didn't work.

Any other ideas?

Troy
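A hedged sketch of biasing the early productions through their parameters rather than through the global randomness switches. The production names here are hypothetical, and the exact spp keywords should be checked against the ACT-R 4.0 manual:

    (spp initialize-task :successes 20)     ; hypothetical setup production; let it win early on
    (spp strategy-a :eventual-efforts 10)   ; higher eventual efforts -> fires less often
    (spp strategy-b :eventual-efforts 2)    ; lower eventual efforts -> fires more often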
From gray at gmu.edu Wed Mar 10 16:30:57 1999
From: gray at gmu.edu (Wayne Gray)
Date: Wed, 10 Mar 1999 16:30:57 -0500
Subject: Post-Doc Position
Message-ID:

ACT-R Folks, Greetings. This is an unofficial announcement of a post-doctoral position that should materialize in the Applied Cognitive Program of George Mason University.

We expect to be looking for someone with skill and promise in computational cognitive modeling. As we work primarily within an ACT-R framework, we are especially looking for people skilled in ACT-R. The ideal candidate would be John Anderson or Christian Lebiere; however, we do not expect them to apply. Less than ideal candidates should not feel intimidated. If you have some combination of research experience, programming skill, knowledge of ACT-R the theory, or knowledge of ACT-R the language, then you should consider applying. Knowledge of other computational cognitive modeling systems plus a willingness to learn ACT-R is fine as well.

If you are interested, please send a short email. If you have questions about the position, the program, or the research, please ask.

Cheers, Wayne

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Wayne D. Gray
HUMAN FACTORS & APPLIED COGNITIVE PROGRAM
George Mason University, ARCH Lab/HFAC Program
MSN 3f5, Fairfax, VA 22030-4444
VOICE: +1 (703) 993-1357   FAX: +1 (703) 993-1330
http://hfac.gmu.edu
* Work is infinite, time is finite, plan accordingly. *
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

From cl at andrew.cmu.edu Thu Mar 11 17:14:42 1999
From: cl at andrew.cmu.edu (Christian Lebiere)
Date: Thu, 11 Mar 1999 17:14:42 -0500
Subject: 1999 ACT-R Summer School
Message-ID:

This is the final announcement for the 1999 ACT-R summer school. Applications are due April 1st. Since this is the first year that the summer school and workshop are not held together at CMU, the pointers to each event are summarized immediately below.

*****************

SUMMER 1999 ACT-R EVENTS

SUMMER SCHOOL at Carnegie Mellon University (Pittsburgh)
http://act.psy.cmu.edu/ACT/ftp/workshop/announcement.html

WORKSHOP at George Mason University (outside Washington DC)
http://hfac.gmu.edu/~actr99

The WORKSHOP will begin one day after the SUMMER SCHOOL ends to facilitate travel from Pittsburgh to Washington, DC.

********************

SIXTH ANNUAL ACT-R SUMMER SCHOOL
================================
Carnegie Mellon University - July/August 1999
=============================================

ACT-R is a cognitive theory and simulation system for developing cognitive models for tasks that vary from simple reaction time to air traffic control. The most recent advances of the ACT-R theory are detailed in the book "The Atomic Components of Thought" by John R. Anderson and Christian Lebiere, published in 1998 by Lawrence Erlbaum Associates. Each year, a summer school and workshop are held to train researchers in the use of the system and to enable current users to exchange results and ideas.

The 1999 ACT-R Summer School will be held at Carnegie Mellon University in Pittsburgh from July 27 to August 6. The 1999 ACT-R Workshop will be held from August 6 to 9 at George Mason University in Fairfax, VA, and will be the subject of a separate announcement.

The summer school will take place from Tuesday July 27 to the morning of Friday August 6. This intensive 10-day course is designed to train researchers in the use of ACT-R for cognitive modeling. It is structured as a set of 7 units, with each unit lasting a day and involving a morning theory lecture, a web-based tutorial, an afternoon discussion session and a homework assignment which students are expected to complete during the day and evening. After a free day, the final two days of the summer school will be devoted to individual research projects. Computing facilities for the tutorials, assignments and research projects will be provided.

Due to space considerations, admission is limited to a dozen participants, who must submit by APRIL 1 an application consisting of a curriculum vitae and statement of purpose. Applicants will be notified of admission by APRIL 15. Admission to the summer school is free. A stipend of up to $750 is available to graduate students for reimbursement of travel, housing and meal expenses. To qualify for the stipend, students must be US citizens and include with their application a letter of reference from a faculty member. Applications will be strengthened if the applicants include a description of a data set that they want to model at the summer school. If students are accepted to the summer school but do not bring a data set, they must be prepared to work with other students on their data sets. Successful student projects will be presented at the workshop, which all summer school students are expected to attend. Transportation from the summer school to the workshop will be provided.
An application form is appended below and is also available on the ACT-R web site (http://act.psy.cmu.edu/).

________________________________________________________

Sixth Annual ACT-R Summer School
July 27 to August 6, 1999 at Carnegie Mellon University in Pittsburgh

APPLICATION
===========

Name:     ..................................................................
Address:  ..................................................................
          ..................................................................
          ..................................................................
Tel/Fax:  ..................................................................
Email:    ..................................................................

Applications are due APRIL 1. Acceptance will be notified by APRIL 15. Applicants MUST include a curriculum vitae and statement of purpose.

A stipend of up to $750 is available for the reimbursement of travel, lodging and meal expenses (receipts needed). To qualify for the stipend, the applicant must be a graduate student with US citizenship and include a letter of reference from a faculty member. Check here to apply for stipend: ........

Housing is available in Resnick House, a CMU dormitory that offers suite-style accommodations. Rooms include air-conditioning, a semi-private bathroom and a common living room for suite-mates. This year's rates were $157.50/week/person or $31.50/night/person for single rooms and $118.00/week/person or $23.50/night/person for double rooms. Housing reservations will be taken after acceptance to the summer school. Do not send money.

Send this form and other application materials by email, fax or mail to:

1999 ACT-R Summer School
Psychology Department          Attn: Helen Borek
Baker Hall 345C                Fax: +1 (412) 268-2844
Carnegie Mellon University     Tel: +1 (412) 268-3438
Pittsburgh, PA 15213-3890      Email: helen+ at cmu.edu

From jimmyd at cc.gatech.edu Wed Mar 24 10:37:59 1999
From: jimmyd at cc.gatech.edu (Jim Davies)
Date: Wed, 24 Mar 1999 10:37:59 -0500 (EST)
Subject: representation of input text in a conversation
Message-ID:

I have come up with how I will represent input text (for now, anyway). Here is how it works: for the sentence "How are you" I have the chunks:

(goal-2-1
 ISA goal-talk
 sentence-id 2
 word how-word
 next-word goal-2-2)

(goal-2-2
 ISA goal-talk
 sentence-id 2
 word are-word
 next-word goal-2-3)

(goal-2-3
 ISA goal-talk
 sentence-id 2
 word you-word
 next-word nil)

What are the advantages?

1) You can only focus on one goal at a time, so the other chunks involved in the sentence are just in memory like any other. To distinguish them from words in other sentences, they have an identifier. You can think of this as a timestamp of some sort, like Christian suggested in his ACT-R/PM email. This is also how Anderson did it with his "parent" slot. This is an improvement over my original suggestion, in which the chain of words didn't know what sentence they belonged to. The model needs to know this, though, so it doesn't confuse the focused sentence with others in memory.

2) The next-word slot points to a chunk name, rather than an ordinal position, as in Anderson's suggestion:

(a3
 word you
 parent s1
 position third)

The reason I did it this way is that I wanted to be able to look for consecutive word combinations. For example, to determine if it were a question, you might want to look for a verb followed by a noun, like "do you" rather than "you do." With a chain of chunks, you can do this with one production, as sketched below.
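A hedged sketch of what such a production could look like, using the chunk representation above; the production name and the !output! message are made up:

    (p detect-do-you
       =goal>
          isa goal-talk
          word do-word
          next-word =next
       =next>
          isa goal-talk
          word you-word
    ==>
       !output! ("verb followed by noun suggests a question"))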
With the ordinal position, I guess you'd have to go through the whole list? But perhaps this is best, psychologically. Any comments?

In defence of my position on this: If you represented the word "monkey" as the 8th element in the sentence "I have a huge collection of vervet monkey clothing," then you should be fast at verifying that "monkey" is the 8th word. This sounds unlikely to me; I don't think we can retrieve that information. I would conjecture you would be faster at verifying that "clothing" followed "monkey" or that "vervet" came just before "monkey."

Following is a perl script that takes in a file of sentences and outputs chunks in this manner. Feel free to use or modify it.

#!/usr/local/bin/perl
# program name: goal-maker.perl
# author: jimmydavies at usa.net (Jim Davies)
# version: 1.0
# Creates a series of chunks representing a sentence for ACT-R
# modeling. It is a part of the primatech project:
# http://www.cc.gatech.edu/~jimmyd/primatech/
# Each line of the input file is a sentence to be represented.
# Each word is put into a separate chunk. Chunks are named goal-u-v
# where u and v are integers such that u is the sentence identifier
# (constant for all the words in the sentence) and v is the word
# identifier. So goal-3-5 means the fifth word in the third sentence.
#
# This program assumes that the chunk name for a word chunk whatever
# is whatever-word, to distinguish it from the whatever-concept,
# whatever-written-word, etc.
#
# Run the program like this from a UNIX command line:
# goal-maker.perl < filename
#
# If it doesn't work, check to make sure that the path at the top
# is correct for where your version of perl is installed.
#
# EXAMPLE:
#
# If the input file has:
# hello kitty
# how are you
#
# then the output of this script will be:
#
# (goal-1-1
#  ISA goal-talk
#  sentence-id 1
#  word hello-word
#  next-word goal-1-2)
#
# (goal-1-2
#  ISA goal-talk
#  sentence-id 1
#  word kitty-word
#  next-word nil)
#
# (goal-2-1
#  ISA goal-talk
#  sentence-id 2
#  word how-word
#  next-word goal-2-2)
#
# (goal-2-2
#  ISA goal-talk
#  sentence-id 2
#  word are-word
#  next-word goal-2-3)
#
# (goal-2-3
#  ISA goal-talk
#  sentence-id 2
#  word you-word
#  next-word nil)

# begin script

# initialize the sentence counter
$sentence_id = 1;

# read in each line as one single string
while ($sentence_string = <STDIN>) {

    # split the string into an array of words, splitting at spaces
    @sentence = split(/ /, $sentence_string);

    # word_id is the word index in the current sentence
    $word_id = 1;
    $next_word_id = 2;

    # the number of words in the sentence
    $sentence_length = scalar(@sentence);

    # loop through all the words in the sentence
    foreach $word (@sentence) {
        # create the start of a chunk
        print "(goal-$sentence_id-$word_id \n ISA goal-talk \n sentence-id $sentence_id ";

        # if this is the last word in the sentence, the next chunk will be nil
        if ($word_id == $sentence_length) {
            chomp $word;   # strip the trailing newline from the last word
            print "\n word $word-word \n next-word nil)";
        }
        # otherwise, link it to the next word
        else {
            print "\n word $word-word \n next-word goal-$sentence_id-$next_word_id)";
        }
        print "\n\n";
        $word_id++;
        $next_word_id++;
    } # loop on words in the sentence

    $sentence_id++;
} # loop on sentences in the file

From rick at cis.ohio-state.edu Thu Mar 25 11:28:34 1999
From: rick at cis.ohio-state.edu (Richard L. Lewis)
Date: Thu, 25 Mar 1999 11:28:34 -0500
Subject: representation of input text in a conversation
Message-ID:

> The reason I did it this way is that I wanted to be able to look for
> consecutive word combinations. For example, to determine if it were a
> question, you might want to look for a verb followed by a noun, like
> "do you" rather than "you do." With a chain of chunks, you can do this
> with one production. With the ordinal position, I guess you'd have to
> go through the whole list? But perhaps this is best, psychologically.
> Any comments?

In a sentence processing model I'm developing, I have adopted a positional encoding scheme as in the list STM model. But to handle many basic word-order issues such as "do you" vs. "you do", all that is needed is an ability to distinguish the current word from previous words. It is amazing how much parsing one can do with just this distinction (assuming incremental parsing). I assume that the current word being read is in a distinguished goal focus slot; this would be enough to tell apart "do you" from "you do".

To be sure that the model retrieves the most recent word/constituent in those cases where it matters, the production just matches the 'position' tag to the current position. If the similarities are set up correctly (as in the serial recall model), the partial matching will penalize the more distant attachments more, ensuring retrieval of the more recent item. (There are some interesting cases that arise cross-linguistically where the correct attachment is in fact the more distant one; in those cases, you match on "first-position".)

-Rick L.

-----------------------------
Richard L. Lewis
Assistant Professor
Department of Computer and Information Science
Ohio State University
2015 Neil Avenue
Columbus, OH 43210
rick at cis.ohio-state.edu
http://www.cis.ohio-state.edu/~rick
614-292-2958 (voice)
614-292-2911 (fax)

From bruno_emond at UQAH.UQuebec.CA Thu Mar 25 12:21:41 1999
From: bruno_emond at UQAH.UQuebec.CA (Bruno Emond)
Date: Thu, 25 Mar 1999 12:21:41 -0500
Subject: representation of input text in a conversation
Message-ID:

Hi Jim. Here are some short comments.

> What are the advantages?
> 1) You can only focus on one goal at a time, so the other chunks
> involved in the sentence are just in memory like any other. To distinguish
> them from words in other sentences, they have an identifier. You can think
> of this as a timestamp of some sort, like Christian suggested in his
> ACT-R/PM email. This is also how Anderson did it with his "parent" slot.
>
> This is an improvement over my original suggestion, in which the chain of
> words didn't know what sentence they belonged to. The model needs to know
> this, though, so it doesn't confuse the focused sentence with others in
> memory.

I think there is some confusion here about the role of the sentence slot. The sentence slot is an indication of the group to which the word belongs. I do not think it plays the role of a timestamp. Your timestamp is implicit in your next-word slot.

> 2) The next-word slot points to a chunk name, rather than
> an ordinal position, as in Anderson's suggestion:
> The reason I did it this way is that I wanted to be able to look for
> consecutive word combinations. For example, to determine if it were a
> question, you might want to look for a verb followed by a noun, like
> "do you" rather than "you do." With a chain of chunks, you can do this
> with one production. With the ordinal position, I guess you'd have to
> go through the whole list?
> But perhaps this is best, psychologically. Any comments?

It is not clear whether your model assumes that the representation of words in sentences is limited to a linear organisation. If that is the case, your framework might be fine. However, if you want to model parsing, which one can say is a process of word grouping among other things, then you might run into expensive computations to represent explicitly not only the sequence of words but also the sequence of groups of words. In my models of parsing I use an integer value representing the timestamp. As Richard L. Lewis mentions, the focus of a production is always the current word being processed, and a retrieval of the adjacent word can be obtained either by computing the previous timestamp (!eval! (- Current-time 1)) or by using similarities (Richard's proposal).

Bruno.

***********************************************
Bruno Emond, Ph.D.
bruno_emond at uqah.uquebec.ca
Tel.: (819) 595-3900 1-4408
Télécopieur: (819) 595-4459
- Département des Sciences de l'Éducation, Professeur Agrégé,
  Université du Québec à Hull.
  Case Postale 1250, succursale "B", Hull (Québec), Canada. J8X 3X7
- Institute of Interdisciplinary Studies: Cognitive Science.
  Adjunct Research Professor, Carleton University
***********************************************

From jimmyd at cc.gatech.edu Thu Mar 25 16:19:53 1999
From: jimmyd at cc.gatech.edu (Jim Davies)
Date: Thu, 25 Mar 1999 16:19:53 -0500 (EST)
Subject: representation of input text in a conversation
Message-ID:

On Thu, 25 Mar 1999, Bruno Emond wrote:

> I think there is some confusion here about the role of the sentence slot.
> The sentence slot is an indication of the group to which the word belongs.
> I do not think it plays the role of a timestamp. Your timestamp is
> implicit in your next-word slot.

I mean a timestamp in a larger sense: the next-word slot only gives the timestamp for the word relative to the other words in the same sentence, but the sentence-id gives the timestamp of the sentence relative to other sentences heard in the past. Otherwise you need some other way to distinguish the later words of a sentence heard in the past from one you are hearing right now. I want this so that, if I want to, I can use what I know of sentences heard before to help understand the one I'm hearing now.

JimDavies jimmyd at cc.gatech.edu home: 404-223-1366 http://www.cc.gatech.edu/~jimmyd/

From bruno_emond at UQAH.UQuebec.CA Tue Mar 30 11:08:16 1999
From: bruno_emond at UQAH.UQuebec.CA (Bruno Emond)
Date: Tue, 30 Mar 1999 11:08:16 -0500
Subject: Language processing and ACT-R
Message-ID:

John Anderson, in his season's greetings last December, mentioned that language processing had been added as a major ACT-R research focus area.
I do not know if this has been done before, but I would be willing to compile and share a list of papers, models, chapters, current projects, etc., that have natural language processing as a theme.

Bruno.