From cl at andrew.cmu.edu  Mon Apr  5 16:52:04 1999
From: cl at andrew.cmu.edu (Christian Lebiere)
Date: Mon, 05 Apr 1999 16:52:04 -0400
Subject: Research Programmer Position Available
Message-ID:

Research Programmer Position Available
Department of Psychology
Carnegie Mellon University

We are looking for a full-time programmer to work on studies of eye
movements during dynamic problem solving.  The two tasks involve
simulations of an anti-air warfare coordinator and simulations of a
controller for an unmanned flight simulator.  Research tasks include
collection and analysis of eye movement data, development of cognitive
models, and development of training material.  Knowledge of C++ and/or
LISP is essential.  The goal of the research is to use non-intrusive eye
movement recording to improve training on high-performance systems.
This is a position that will appeal to someone who wants to work with
state-of-the-art eye movement equipment or on state-of-the-art cognitive
modeling.

Contact ja0s at andrew.cmu.edu.


From Wolfgang.Schoppek at uni-bayreuth.de  Thu Apr  8 04:59:09 1999
From: Wolfgang.Schoppek at uni-bayreuth.de (Wolfgang Schoppek)
Date: Thu, 08 Apr 1999 10:59:09 +0200
Subject: associative learning
Message-ID:

I have some difficulties with the associative learning mechanism. The
task of my model is to observe successive states of a system consisting
of four switches and four lamps. Later on the model does a recognition
task. For every system state a chunk of the following form is created:

  goal-12
    isa triplet
    switches swi03
    lamps la03
    context nil

In the recognition section a probe is shown which looks like this (after
some basic processing):

  probe1
    isa probe
    swi1 swi03
    la1 la03
    swi2 nil
    la2 nil

Since some states are shown only once, the base levels of the
corresponding triplets are very low. Therefore retrieval depends strongly
on the associative weights between the swi0x and la0x chunks and the
triplets (representing system states).

My first problem:
Because of the continued creation of new chunks and the Prior Strength
Equation  S*ji = ln(m) - ln(n),  states shown later are more strongly
associated with their components than states shown earlier. That effect
is due to the fact that initially there are only about 75 chunks in
declarative memory, and it cannot be found in the data.
- Is the assumption of an initial number of chunks <100 realistic?
  (If there were 1000 chunks initially, the unwanted effect would almost
  disappear.)
- Has anybody encountered a similar problem?
- Are there solutions to circumvent the effect?

My second problem:
Consider goal-12 as an example. When the slots of goal-12 are filled with
the chunks swi03 and la03, there are IAs between swi03 / la03 and goal-12
of about 3.7 ( ln(84) - ln(2) ). In the next cycle a subgoal is pushed,
which does not affect the IAs. But after the first cycle of the subgoal
the IAs between swi03 / la03 and goal-12 suddenly drop to 2.5, although
none of the above chunks is involved in the processing of the subgoal.
-> ???  After popping the subgoal there are a few cycles of idleness
(goal-12 on top of the stack) accompanied by a further fall of the IAs
(I understand that). After popping goal-12 the IAs are about 1.5 (that
depends on the number of idle cycles).
- Can anybody explain to me the sudden drop of the IAs in the first cycle
  of the subgoal? (I cannot find the answer in the 1998 book.)

In one version of my model turning off associative learning improves the
fit.
That would not be so bad if one version of the model (simulating another
experimental condition) did not make extensive use of that learning
mechanism.

-- Wolfgang

--------------------------------------------------------------------
 Dr. Wolfgang Schoppek                 <<< Tel.: +49 921 555003 <<<
 Lehrstuhl fuer Psychologie, Universitaet Bayreuth, 95440 Bayreuth
 http://www.uni-bayreuth.de/departments/psychologie/wolfgang.htm
--------------------------------------------------------------------


From Wolfgang.Schoppek at uni-bayreuth.de  Thu Apr  8 06:47:46 1999
From: Wolfgang.Schoppek at uni-bayreuth.de (Wolfgang Schoppek)
Date: Thu, 08 Apr 1999 12:47:46 +0200
Subject: associative learning
Message-ID:

In the meantime I have found out something about activation sources:
Typing "(activation-sources)" after firing a production that pops a
subgoal returns the chunks of the last subgoal, although the goal stack
viewer tells me that the subgoal had already been popped. As far as I can
see in the source-spread items of the declarative memory viewer, this
causes trouble in the spread of activation.
- Is this a bug? (I use the ACT-R 4.0 Environment for Allegro CL 3)
- If so, could it cause the sudden drop in the IAs?

-- Wolfgang

--------------------------------------------------------------------
 Dr. Wolfgang Schoppek                 <<< Tel.: +49 921 555003 <<<
 Lehrstuhl fuer Psychologie, Universitaet Bayreuth, 95440 Bayreuth
 http://www.uni-bayreuth.de/departments/psychologie/wolfgang.htm
--------------------------------------------------------------------


From cl at andrew.cmu.edu  Thu Apr  8 10:01:35 1999
From: cl at andrew.cmu.edu (Christian Lebiere)
Date: Thu, 08 Apr 1999 10:01:35 -0400
Subject: associative learning
Message-ID:

> My first problem:
> Because of the continued creation of new chunks and the Prior Strength
> Equation S*ji = ln(m) - ln(n), states shown later are more strongly
> associated with their components than states shown earlier. That effect
> is due to the fact that initially there are only about 75 chunks in
> declarative memory, and it cannot be found in the data.
> - Is the assumption of an initial number of chunks <100 realistic?
>   (If there were 1000 chunks initially, the unwanted effect would
>   almost disappear.)
> - Has anybody encountered a similar problem?
> - Are there solutions to circumvent the effect?

No.  Yes.  Yes.

There are probably many more than 100 chunks in your brain, so m should
be much larger than that to reflect the entire declarative memory (though
it is usually unclear what the right value of n would be). One fix that
has been used, especially in the list learning models, is to set the
total number of chunks (m) at some number. This can be done with:

  (setf *wme-number* 100)

However, ACT-R will keep adding or removing chunks, thus constantly
changing that number. To prevent that from happening, you can define a
function that resets the number to the constant value and make that
function the value of one of the hook functions that is called at every
iteration, e.g. *cycle-hook-fn*. This is the resulting code from the
Murdock model:

  (defun reset-wme-number-murdock (instantiation)
    (declare (ignore instantiation))
    (setf *wme-number* 100))

  (setf *cycle-hook-fn* #'reset-wme-number-murdock)

> - Can anybody explain to me the sudden drop of the IAs in the first
>   cycle of the subgoal? (I cannot find the answer in the 1998 book.)

No idea.  I would have to try the actual model to be able to trace it.
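To get a feel for the size of the prior-strength artifact behind the
first problem, here is a minimal stand-alone Common Lisp sketch (it does
not touch ACT-R itself; the starting counts 75 and 1000 and the 60 states
added during a run are illustrative assumptions, not values taken from
the model). It simply evaluates S*ji = ln(m) - ln(n) for a state chunk
stored early versus late in the run, with the fan n of a source chunk
fixed at 2:

  (defun prior-strength (m n)
    "Prior Strength Equation: S*ji = ln(m) - ln(n)."
    (- (log m) (log n)))

  (dolist (m-initial '(75 1000))
    (let* ((n 2)                                          ; fan of a swi0x / la0x source
           (s-early (prior-strength m-initial n))         ; first state stored
           (s-late  (prior-strength (+ m-initial 60) n))) ; last state stored
      (format t "~&start m = ~4D   early S* = ~5,2F   late S* = ~5,2F   gap = ~4,2F~%"
              m-initial s-early s-late (- s-late s-early))))

Starting from 75 chunks, the last state's prior strength exceeds the
first one's by about 0.6; starting from 1000 chunks, the gap shrinks to
roughly 0.06. That is why raising m (or pinning *wme-number* as above)
makes the unwanted effect almost disappear.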
> In the meantime I have found out something about activation sources:
> Typing "(activation-sources)" after firing a production that pops a
> subgoal returns the chunks of the last subgoal, although the goal stack
> viewer tells me that the subgoal had already been popped. As far as I
> can see in the source-spread items of the declarative memory viewer,
> this causes trouble in the spread of activation.
> - Is this a bug? (I use the ACT-R 4.0 Environment for Allegro CL 3)
> - If so, could it cause the sudden drop in the IAs?

It would be a bug (which might conceivably cause the IA problem), but I
cannot reproduce it using plain ACT-R 4.0. I will take up the problem
with our PC-master.

Christian


From Wolfgang.Schoppek at uni-bayreuth.de  Thu Apr  8 10:17:58 1999
From: Wolfgang.Schoppek at uni-bayreuth.de (Wolfgang Schoppek)
Date: Thu, 08 Apr 1999 16:17:58 +0200
Subject: associative learning
Message-ID:

> > In the meantime I have found out something about activation sources:
> > Typing "(activation-sources)" after firing a production that pops a
> > subgoal returns the chunks of the last subgoal, although the goal
> > stack viewer tells me that the subgoal had already been popped. As
> > far as I can see in the source-spread items of the declarative memory
> > viewer, this causes trouble in the spread of activation.
> > - Is this a bug? (I use the ACT-R 4.0 Environment for Allegro CL 3)
> > - If so, could it cause the sudden drop in the IAs?
>
> It would be a bug (which might conceivably cause the IA problem), but I
> cannot reproduce it using plain ACT-R 4.0. I will take up the problem
> with our PC-master.

I have found a further constraint on that error. It occurs only if the
subgoal is popped with failure, not if it is popped regularly.

-- Wolfgang


From altmann at osf1.gmu.edu  Thu Apr  8 11:13:43 1999
From: altmann at osf1.gmu.edu (ERIK M. ALTMANN)
Date: Thu, 8 Apr 1999 11:13:43 -0400 (EDT)
Subject: associative learning
Message-ID:

> My first problem:
> Because of the continued creation of new chunks and the Prior Strength
> Equation S*ji = ln(m) - ln(n), states shown later are more strongly
> associated with their components than states shown earlier. That effect
> is due to the fact that initially there are only about 75 chunks in
> declarative memory, and it cannot be found in the data.
> - Is the assumption of an initial number of chunks <100 realistic?
>   (If there were 1000 chunks initially, the unwanted effect would
>   almost disappear.)
> - Has anybody encountered a similar problem?
> - Are there solutions to circumvent the effect?

I set *wme-number* to about 1000 at the start of a simulation.

Correct performance in my model depends on retrieving the newest out of
many chunks. These are created in fairly close succession during the
simulation (every 5-10 seconds), so there's not much spare time for old
chunks to decay. With too few chunks in the system initially, chunks
created early in the simulation have IAs so high that they overwhelm the
effects of base-level decay and the system fails.

I figure the role of sleep is to keep ~1000 < m < ~10000.

Erik.

---------------------------
Erik M. Altmann
ARCH Lab  2E5
George Mason University
Fairfax, VA  22030
703-993-1326
hfac.gmu.edu/~altmann
---------------------------


From gray at gmu.edu  Fri Apr  9 13:06:09 1999
From: gray at gmu.edu (Wayne Gray)
Date: Fri, 9 Apr 1999 13:06:09 -0400
Subject: ACT-R WORKSHOP
Message-ID:

2nd Notice -- 2nd Notice -- 2nd Notice -- 2nd Notice -- 2nd Notice -- 2nd Notice

http:/hfac.gmu.edu/~actr99

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
_/_/                                                 _/_/
_/_/           SIXTH ACT-R WORKSHOP                  _/_/
_/_/         http:/hfac.gmu.edu/~actr99              _/_/
_/_/                                                 _/_/
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

@George Mason University, Fairfax, VA

August 6 to 9, 1999

http:/hfac.gmu.edu/~actr99

2nd Notice -- 2nd Notice -- 2nd Notice -- 2nd Notice -- 2nd Notice -- 2nd Notice

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Wayne D. Gray                    HUMAN FACTORS & APPLIED COGNITIVE PROGRAM
SNAIL-MAIL ADDRESS (FedX et al)            VOICE: +1 (703) 993-1357
George Mason University                    FAX:   +1 (703) 993-1330
ARCH Lab/HFAC Program                      *********************
MSN 3f5                                    * Work is infinite,  *
Fairfax, VA  22030-4444                    * time is finite,    *
http://hfac.gmu.edu                        * plan accordingly.  *
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/


From apetrov+ at andrew.cmu.edu  Sun Apr 11 23:57:39 1999
From: apetrov+ at andrew.cmu.edu (Alex Petrov)
Date: Sun, 11 Apr 1999 23:57:39 -0400
Subject: Competing productions
Message-ID:

Folks,

A month ago there was a discussion on the list about random production
firing. Jim Davies wrote:

> I have 2 productions that I want to fire at random. That
> is, approximately 50% of the time p1 fires, and 50% of
> the time p2 fires.

I came across a more general problem. Working on it, I developed a small
experimental model and ran some simulations. I think the results are of
general interest and therefore post them to the list.

In a nutshell, I needed a mechanism for rehearsing a given chunk a few
times. One solution would be to set explicit counters and push-and-pop
the chunk a fixed number of times. This solution, however, does not have
the right flavor for me. Therefore, I decided to have two productions:
REHEARSE and STOP-REHEARSING. REHEARSE rehearses once and leaves the
stack intact, thus opening the possibility for more rehearsals.
STOP-REHEARSING pops the goal and terminates the process.

If the probability for REHEARSE is P and for STOP-REHEARSING is 1-P, the
number of rehearsals will follow a geometric distribution with mean
M = P/(1-P). Thus, when P=0.5 the mean number of rehearsals is M=1. This
in effect is the 50/50 competition of the previous messages. To have more
rehearsals one must establish a higher value for P.

So, the problem boils down to having two productions that have unequal
chances to fire. One way to do that is the following:
- Enable rational analysis:          (sgp :era T)
- Turn on the expected-gain noise:   (sgp :egs 0.25)
- Leave the Q parameter of the REHEARSE production at its default value
  of 1.
- Set Q for STOP-REHEARSING to some value less than 1, e.g.
  (spp (stop-rehearsing :q 0.995)). Note that this parameter is very
  sensitive.

This setting entails that the two competing productions have unequal
expected gains E = PG - C.
The conflict resolution mechanism favors the production with the higher
expected gain, thus firing the two productions with unequal
probabilities. The question, of course, is how to calculate the value of
Q that yields the desired probabilities. A little algebra and the
Conflict Resolution Equation 3.4 (page 65 in "The Atomic Components of
Thought") give the following relationship:

  E2 - E1 = s * sqrt2 * [log(P2) - log(P1)]
  or
  delta_E = s * sqrt2 * log(M) ,

where P1 is the probability that STOP-REHEARSING fires, P2 is the
probability that REHEARSE fires, P1+P2=1, E1 and E2 are the expected
gains, s is the noise parameter, M=P2/P1 is the mean number of
rehearsals, and sqrt2 is the square root of 2.

Given that E = P*G - C = Q*R*G - C and assuming that R1=R2=1, C1=C2, and
Q2=1, we can calculate Q1 from M:

                             s * sqrt2
  Q-stop  =  1  -  log(M) *  ---------
                                 G

Note that when G=20 (the default) and s=0.25 (a recommended value), the
factor s*sqrt2/G is 0.01768. This makes the Q parameter very sensitive --
changes in the second decimal place have dramatic effects:

    P2       M    delta-E   Q-stop
  ---------------------------------
   0.50    1.00    0.000    1.0000
   0.60    1.50    0.143    0.9928
   0.67    2.00    0.245    0.9877
   0.75    3.00    0.388    0.9806
   0.80    4.00    0.490    0.9755
   0.90    9.00    0.777    0.9612
   0.95   19.00    1.041    0.9479
   0.99   99.00    1.625    0.9188

I wrote a small model to test all this. Some transcripts are listed
below. The runnable LISP file is appended to the end of this message.
Just load it into the LISP environment (after loading ACT-R, of course).
The main function is DO-EXPERIMENT.

Best,
Alex

-------------------------------------------------------------
Alexander Alexandrov Petrov        apetrov+ at andrew.cmu.edu
                                   apetrov at cogs.nbu.acad.bg
Graduate student                   Dept of Psychology, CMU
                                   Baker Hall 455A, (412) 268-8112

  In your practice always keep in your thoughts the interaction of
  heaven and earth, water and fire, yin and yang.
         Morihei Ueshiba O-Sensei, The Founder of Aikido
-------------------------------------------------------------

? (do-experiment 1000 :M 1)      ; i.e. 50/50 competition
 2017 production firings total.
 1017 REHEARSE firings; p = 0.504
 1000 STOP-REHEARSING firings; p = 0.496
 Mean number of rehearsals  1.02, variance  2.02, std.dev 1.422
 Distribution:     0    1    2    3    4    5    6    7    8    9   10+
                 494  250  128   64   34   17    1    6    4    2    0

? (do-experiment 1000 :p 0.6)
 2565 production firings total.
 1565 REHEARSE firings; p = 0.610
 1000 STOP-REHEARSING firings; p = 0.390
 Mean number of rehearsals  1.56, variance  3.87, std.dev 1.968
 Distribution:     0    1    2    3    4    5    6    7    8    9   10+
                 385  231  155   97   53   33   16    9    9    4    8

? (do-experiment 1000 :M 2)
 2905 production firings total.
 1905 REHEARSE firings; p = 0.656
 1000 STOP-REHEARSING firings; p = 0.344
 Mean number of rehearsals  1.91, variance  5.58, std.dev 2.363
 Distribution:     0    1    2    3    4    5    6    7    8    9   10+
                 339  228  143  110   65   39   26   15   17    9    9

? (do-experiment 1000 :Q-stop 0.95 :bins 15)
 18471 production firings total.
 17471 REHEARSE firings; p = 0.946
  1000 STOP-REHEARSING firings; p = 0.054
 Mean number of rehearsals 17.47, variance 354.66, std.dev 18.832
 Distribution:     0    1    2    3    4    5    6    7    8    9   10   11   12   13   14+
                  59   56   44   44   49   39   39   37   33   34   39   31   24   26  446

=====================================================================================================

;;; -*- Mode: Lisp; Syntax: Common-Lisp; Package: CL-User; Base: 10 -*-
;;;
;;; FILE:        prod_competn.act
;;; VERSION:     1.0
;;; PURPOSE:     Explore production competition in ACT-R.
;;; DEPENDS-ON:  ACT-R.lisp
;;; PROGRAMMER:  Alexander Alexandrov Petrov  (apetrov+ at andrew.cmu.edu)
;;; CREATED:     10-Apr-99 [1.0]
;;; UPDATED:     ...
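;;; Sanity check (a small stand-alone sketch, assuming the G = 20 and
;;; s = 0.25 values used in the message above; it is independent of the
;;; model code below). It recomputes the P2 = 0.75 row of the table
;;; directly from the formulas
;;;   M = P2/(1-P2),  delta-E = s*sqrt(2)*ln(M),  Q-stop = 1 - delta-E/G,
;;; and should print  M = 3.00, delta-E ~ 0.388, Q-stop ~ 0.9806.

(let* ((s 0.25)
       (G 20.0)
       (p2 0.75)
       (M (/ p2 (- 1.0 p2)))
       (delta-E (* s (sqrt 2.0) (log M)))
       (q-stop (- 1.0 (/ delta-E G))))
  (format t "~&M = ~4,2F   delta-E = ~5,3F   Q-stop = ~6,4F~%" M delta-E q-stop))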
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;;;                                                        ;;;;
;;;;    P R O D U C T I O N   C O M P E T I T I O N         ;;;;
;;;;                                                        ;;;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;;; Counters --------------------------------------------------

(defvar *stop-count* 0 "Total number of STOP-REHEARSING firings." )
(defvar *reh-count*  0 "Total number of REHEARSE firings." )
(defvar *rehearsals* 0 "Number of rehearsals on the current trial." )

(defun reset-all-counters ()
  (setq *stop-count* 0
        *reh-count*  0
        *rehearsals* 0 ))

;;; ACT-R global parameters ----------------------------------

(clear-all)

(sgp :era  t     ; Enable rational analysis.
     :G    20.0  ; Value of the top-level goal (default=20.0).
     :egs  0.25  ; Expected gain noise.
     :ut   0.0   ; Utility threshold (default=0.0).
)

;;; Chunks ---------------------------------------------------

(chunk-type rehearsal-goal )   ; no slots

(add-DM (rehearsal-goal isa rehearsal-goal))

;;; Production definitions -----------------------------------

(p stop-rehearsing
    =goal> isa rehearsal-goal
  ==>
    !eval! (incf *stop-count*)
    !pop!
)

(p rehearse
    =goal> isa rehearsal-goal
  ==>
    !eval! (incf *reh-count*)
    !eval! (incf *rehearsals*)
)

;;; Production parameters -----------------------------------

(spp (rehearse        :q 1.0 :r 1.0 ))
(spp (stop-rehearsing :q 1.0 :r 1.0 ))   ; Q modified below

;;; Parameter-setting functions -----------------------------

(defconstant sqrt2 (sqrt 2.0) "Square root of 2" )

(defun delta-E-for-M (M)
  "Expected-gain difference that will generate M rehearsals on average."
  (let ((s (no-output (first (sgp :egs)))))
    (* s sqrt2 (log M)) ))

(defun delta-E-for-p (p)
  "Expected-gain difference that will fire REHEARSE with probability P."
  ;; The number of rehearsals has a geometric distribution.
  ;; Expected mean value is M = p/q, where q=1-p. Variance is p/(q^2).
  (assert (and (numberp p) (< 0.0 p 1.0)))
  (let ((M (/ p (- 1.0 p))))
    (delta-E-for-M M) ))

(defun q-stop (delta-E)
  "Q parameter for STOP-REHEARSING that entails DELTA-E."
  ;; E = PG - C = (q*r)*G - (a+b)
  ;; delta-E = delta-Q * G , assuming all Rs are 1 and the costs are equal.
  (let ((G (no-output (first (sgp :G))))
        (q-rehearse (no-output (first (first (spp (rehearse :q)))))) )
    (- q-rehearse (/ delta-E G)) ))

(defun set-Q-stop (q-stop)
  "Set the Q parameter of the production STOP-REHEARSING."
  (assert (and (numberp q-stop) (<= 0.0 q-stop 1.0)))
  (let ((result (spp-fct (list 'stop-rehearsing :q q-stop))))
    (first (first result)) ))

;;; Simulation experiment ------------------------------

(defun do-trial ()
  "Push REHEARSAL-GOAL, run the model, and return the number of rehearsals."
  (setq *rehearsals* 0)
  (goal-focus rehearsal-goal)
  (no-output (run))
  *rehearsals* )

(defun do-experiment (trials &key (M nil) (p nil) (delta-E nil) (q-stop nil)
                                  (s nil) (G nil) (reload nil)
                                  (bins 11) (report T) )
  "Initialize parameters, run ACT-R model, and print summary statistics."
  (initialize-experiment reload s G M p delta-E q-stop)
  (let ((results (do-experiment-aux trials bins)))   ; list of 6 values
    (if report
        (report-experiment results)                  ; returns no values
        results) ))

(defun do-experiment-aux (trials bins)
  (let ((sum 0)
        (sumsq 0)
        (distr (make-array bins :initial-element 0))
        (last-bin (- bins 1))
        (verbose (no-output (first (sgp :v)))) )
    (sgp :v nil)                       ; turn VERBOSE switch off
    (dotimes (trial trials)
      (let ((reh (do-trial)))
        (incf sum reh)
        (incf sumsq (* reh reh))
        (incf (svref distr (min reh last-bin))) ))
    (sgp-fct (list :v verbose))        ; restore VERBOSE to its original value
    (list trials *stop-count* *reh-count* sum sumsq distr )))

(defun initialize-experiment (reload s G M p delta-E q-stop)
  "Prepare for DO-EXPERIMENT."
  (cond (reload (reload) )
        (t (seed) (setq *time* 0.0)) )
  (reset-all-counters)
  (when s                              ; noise level stated explicitly
    (sgp-fct (list :egs s)) )
  (when G                              ; Goal value stated explicitly
    (sgp-fct (list :G G)) )
  (cond (q-stop                        ; Q for STOP-REHEARSING stated explicitly
         (set-Q-stop q-stop))          ; ignore DELTA-E, M, and p
        (delta-E                       ; DELTA-E stated explicitly
         (set-Q-stop (q-stop delta-E)))              ; ignore M and p
        (M                             ; M stated explicitly
         (set-Q-stop (q-stop (delta-E-for-M M))))    ; ignore p
        (p                             ; p stated explicitly
         (set-Q-stop (q-stop (delta-E-for-p p))))
        (t (error "One of M, P, DELTA-E, or Q-STOP must be specified.")) ))

(defun report-experiment (results)
  (let* ((trials     (first  results))
         (stop-count (second results))
         (reh-count  (third  results))
         (sum        (fourth results))
         (sumsq      (fifth  results))
         (distrib    (sixth  results))
         (P-stop (/ stop-count (+ stop-count reh-count)))
         (P-reh  (/ reh-count  (+ stop-count reh-count)))
         (mean-reh (/ sum trials))
         (var-reh  (variance trials sum sumsq)) )
    (unless (eq trials stop-count)
      (warn "TRIALS (=~S) not equal to STOP-COUNT (=~S)." trials stop-count ))
    (format t "~& ~4D production firings total.~%" (+ stop-count reh-count))
    (format t " ~4D REHEARSE firings; p = ~5,3F~%" reh-count P-reh )
    (format t " ~4D STOP-REHEARSING firings; p = ~5,3F~%" stop-count P-stop)
    (format t " Mean number of rehearsals ~5,2F, variance ~5,2F, std.dev ~5,3F~%"
            mean-reh var-reh (sqrt var-reh))
    (report-distribution distrib)
    (values) ))

(defun report-distribution (distrib)
  "DISTRIB is an array of frequencies."
  (let ((bins (length distrib)))
    (format t " Distribution: ")
    (dotimes (bin bins) (format t "~5D" bin))
    (format t "+~% ")
    (dotimes (bin bins) (format t "~5D" (svref distrib bin)))
    (terpri) ))

(defun variance (N sum sumsq)
  (/ (- sumsq (/ (* sum sum) N))
     N ))

;;;;;;; End of file ACT-R/Examples/prod_competn.act


From kcoulter at eecs.umich.edu  Tue Apr 13 10:23:33 1999
From: kcoulter at eecs.umich.edu (Karen J. Coulter)
Date: Tue, 13 Apr 1999 10:23:33 -0400 (EDT)
Subject: Soar Workshop Announcement
Message-ID:

The Artificial Intelligence Laboratory at the University of Michigan is
pleased to announce the 19th North American Soar Workshop to be held
May 21 - 23, 1999, in Ann Arbor, MI.

The Soar Workshop is a weekend of intensive interaction and exchange on
the Soar architecture. All researchers -- faculty, staff, students,
developers -- are given the opportunity to describe their research and to
discuss whatever Soar issues are of interest to them and the community.

In conjunction with the 19th Workshop, we plan to offer 2 - 3 hands-on
tutorial sessions on the Thursday and Friday (May 20 & 21) prior to the
workshop. On Thursday and Friday, there will be a 2-day Introduction to
Soar for Soar Novices.
On Friday we will also offer a 1-day tutorial for those already familiar
with Soar, which will cover Soar 8 in the morning and how to create Soar
simulation environments in the afternoon.

1. Introduction to Soar                    - Thurs/Fri May 20 & 21
2. Soar8 for Soarers                       - Fri May 21 morning
3. Creating a Soar Simulation Environment  - Fri May 21 afternoon

These tutorials will be taught in the state-of-the-art electronic
classrooms at the University of Michigan Media Union. There will not be a
registration fee charged for the tutorials, but advance registration will
be required. The deadline for tutorial registration is April 30, 1999. A
small materials fee ($10) will be charged to cover the cost of the
tutorial materials.

The official web page for the 19th Soar Workshop is:

  http://ai.eecs.umich.edu/soar/workshop19.html

All workshop, hotel, tutorial and additional information is available on
the web page. A registration form is available there as well. Please
register by April 30.

----- End Included Message -----


From gray at gmu.edu  Wed Apr 14 13:30:23 1999
From: gray at gmu.edu (Wayne Gray)
Date: Wed, 14 Apr 1999 13:30:23 -0400
Subject: Research Position at GMU
Message-ID:

******************

The Applied Cognitive Program of George Mason University is searching for
a post-doctoral researcher skilled in cognitive modeling and cognitive
science. Current projects include building integrated process models of
perception, cognition, and action.

The ideal candidate will have created one or more computational cognitive
models within a unified architecture of cognition (e.g., ACT-R, CAPS,
C-I, EPIC, or Soar) and will have compared the performance of his or her
model(s) to empirical data. (As we work primarily within an ACT-R
framework, we are especially looking for people skilled in ACT-R.
However, knowledge of other computational cognitive modeling systems plus
a willingness to learn ACT-R will be considered.) Additional factors that
will be weighed but not required include proficiency in LISP and
proficiency in the design and analysis of behavioral research
experiments. Experience with eye tracking studies and their data analysis
is a plus. We expect that the successful candidate will have a Ph.D. in
Psychology or Computer Science, but Ph.D.'s in other areas can be
considered.

SALARY RANGE: $24,000 to $30,000 per 12 months

George Mason University is located in Fairfax, VA, a short (about 15
miles) drive from Washington, DC.

All queries should be sent to Wayne D. Gray at gray at gmu.edu. CV's,
publications, and statements of interest can be sent electronically or to
the address listed below. Letters of reference are not necessary at this
time, but will be solicited later from candidates whose qualifications
and interests seem most appropriate. Applications will be reviewed
beginning May 3rd; applications will be considered until the position is
filled.

*********************************

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Wayne D. Gray                    HUMAN FACTORS & APPLIED COGNITIVE PROGRAM
SNAIL-MAIL ADDRESS (FedX et al)            VOICE: +1 (703) 993-1357
George Mason University                    FAX:   +1 (703) 993-1330
ARCH Lab/HFAC Program                      *********************
MSN 3f5                                    * Work is infinite,  *
Fairfax, VA  22030-4444                    * time is finite,    *
http://hfac.gmu.edu                        * plan accordingly.  *
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/


From frg at psyc.nott.ac.uk  Tue Apr 20 13:37:28 1999
From: frg at psyc.nott.ac.uk (Fernand Gobet)
Date: Tue, 20 Apr 1999 17:37:28 +0000
Subject: Research position on modelling of expertise
Message-ID:

UNIVERSITY OF NOTTINGHAM, SCHOOL OF PSYCHOLOGY

POSTDOCTORAL RESEARCH ASSISTANT
EXPERT MEMORY

Applications are invited for the above post to work on an ESRC-funded
project on expert memory. The project will involve data collection and
computer modelling of chess players' memory. The person appointed will be
expected to carry out empirical studies and contribute to the refinement
of a computer model of chess expertise.

Candidates should have a PhD in computer science, cognitive science, or
psychology, and have good computing skills. Knowledge of Lisp or of a
similar computer language is required.

Salary will be within the range £15,735 - £20,867 per annum, depending on
qualifications and experience. This post is available from June 1999 and
will be offered on a fixed-term contract for a period of one year.

Information about related projects is available on the WWW at:
http://www.psychology.nottingham.ac.uk/research/credit/projects/chess_expertise/ .

Candidates should send a detailed CV, a statement of research interests
and two letters of recommendation, to Dr F Gobet, School of Psychology,
The University of Nottingham, University Park, Nottingham, NG7 2RD.
Tel: 0115 951 5402. Fax: 0115 951 5324.
Email: Fernand.Gobet at Nottingham.ac.uk.