From sara.karimii at gmail.com Sun Aug 1 04:49:54 2010
From: sara.karimii at gmail.com (sara karimi)
Date: Sun, 1 Aug 2010 13:19:54 +0430
Subject: [ACT-R-users] ACT-R psychological plausibility
Message-ID:

Dear All,
I have a question about the psychological plausibility of ACT-R and its components. Are there any references which discuss the various components of the architecture and their capabilities based on psychological findings?
Thanks in advance.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From paria_khorshid at yahoo.com Sun Aug 1 05:11:23 2010
From: paria_khorshid at yahoo.com (paria shams)
Date: Sun, 1 Aug 2010 02:11:23 -0700 (PDT)
Subject: [ACT-R-users] act-r interface
Message-ID: <607395.32875.qm@web36501.mail.mud.yahoo.com>

Hello,
I want to have an interface for my ACT-R project, but I don't know how to connect it with C# or Java. I mean: which programming language should I work with to design an interface for ACT-R? How do I design an interface for it? Is it possible to design the interface in C# and connect it to Lisp ACT-R?
Thanks, with best regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From db30 at andrew.cmu.edu Sun Aug 1 17:51:26 2010
From: db30 at andrew.cmu.edu (Dan Bothell)
Date: Sun, 01 Aug 2010 17:51:26 -0400
Subject: [ACT-R-users] ACT-R psychological plausibility
In-Reply-To:
References:
Message-ID:

--On Sunday, August 01, 2010 1:19 PM +0430 sara karimi wrote:
> Dear All,
> I have a question about the psychological plausibility of ACT-R and its components.
> Are there any references which discuss the various components of the architecture and their
> capabilities based on psychological findings?
> Thanks in advance.

There are several papers and books which discuss the psychological basis for ACT-R listed under "Theory" on the Publications page of the ACT-R web site:

One in particular which is available directly on the ACT-R web site is "An integrated theory of the mind", which can be downloaded from:

Hope that helps,
Dan

From db30 at andrew.cmu.edu Sun Aug 1 19:32:39 2010
From: db30 at andrew.cmu.edu (Dan Bothell)
Date: Sun, 01 Aug 2010 19:32:39 -0400
Subject: [ACT-R-users] act-r interface
In-Reply-To: <607395.32875.qm@web36501.mail.mud.yahoo.com>
References: <607395.32875.qm@web36501.mail.mud.yahoo.com>
Message-ID: <6FF87AE1B764BC6979661E47@[192.168.1.10]>

--On Sunday, August 01, 2010 2:11 AM -0700 paria shams wrote:
> Hello,
> I want to have an interface for my ACT-R project, but I don't know how to connect it with C# or Java.
> I mean: which programming language should I work with to design an interface for ACT-R?
> How do I design an interface for it?
> Is it possible to design the interface in C# and connect it to Lisp ACT-R?
> Thanks, with best regards

There are essentially two pieces to that. One is how to create an interface for ACT-R to some arbitrary task, and the other is how to connect a Lisp running ACT-R to some external system.

For the first piece what is required is to create a "device" for ACT-R. A device is used to provide the perceptual input to ACT-R and to accept the motor actions which ACT-R creates. By default ACT-R comes with devices for basic interactions with the GUIs in some Lisp applications (ACL, LispWorks, and MCL) as well as a simple virtual GUI which will work in any Lisp. To go along with those devices there is also a small set of GUI tools which can be used to create an interface in any of the Lisps which will run ACT-R.
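As a rough illustration (a sketch patterned on the tutorial code, not part of the original message; the window title, the displayed letter, and the coordinates are made up), presenting something for a model to see with those tools looks roughly like this:

  ;; Sketch only: open a virtual experiment window, put a letter in it,
  ;; make it the model's device, let vision process it, and run the model.
  (defun run-demo-trial ()
    (reset)
    (let ((window (open-exp-window "Demo task" :visible nil)))
      (add-text-to-exp-window :text "A" :x 125 :y 150)
      (install-device window)
      (proc-display)
      (run 10)))

The add-button-to-exp-window and add-line-to-exp-window commands work the same way for buttons and lines.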
Those GUI tools are described in the ACT-R tutorial units' experiment description documents. If you want to create your own device, then you can find the documentation in the ACT-R reference manual and the Power Point presentation titled "extending-actr.ppt" found in the docs directory of the ACT-R 6 distribution. There are also example devices which go along with that Power Point presentation which can be found in the examples directory of the ACT-R 6 distribution. The second issue, how to connect Lisp to some external language/system, however does not have an easy answer. There are basically no tools built into ACT-R for doing so because every Lisp application is going to have its own interfaces and libraries for supporting linking to other systems or performing inter-process communications. Thus, you will have to consult your Lisp's documentation to determine what it supports and how to use that to connect with your external task. The only thing which ACT-R does make available is an abstraction of the commands for opening, reading from, and writing to passive text based TCP/IP sockets in several Lisps (ACL, LispWorks, MCL, OpenMCL/CCL, CMUCL, and SBCL). Those functions can be found in the file called "uni-files.lisp" in the support directory of the ACT-R 6 distribution. Hope that helps, and let me know if you have other questions, Dan From pavel at dit.unitn.it Mon Aug 2 12:00:25 2010 From: pavel at dit.unitn.it (Pavel Shvaiko) Date: Mon, 2 Aug 2010 18:00:25 +0200 Subject: [ACT-R-users] 3rd CFP: ISWC'10 workshop on Ontology Matching (OM-2010) Message-ID: Apologies for cross-postings -------------------------------------------------------------------------- CALL FOR PAPERS -------------------------------------------------------------------------- The Fifth International Workshop on ONTOLOGY MATCHING (OM-2010) http://om2010.ontologymatching.org/ November 7, 2010, ISWC'10 Workshop Program, Shanghai, China BRIEF DESCRIPTION AND OBJECTIVES Ontology matching is a key interoperability enabler for the Semantic Web, as well as a useful tactic in some classical data integration tasks. It takes the ontologies as input and determines as output an alignment, that is, a set of correspondences between the semantically related entities of those ontologies. These correspondences can be used for various tasks, such as ontology merging and data translation. Thus, matching ontologies enables the knowledge and data expressed in the matched ontologies to interoperate. The workshop has two goals: 1. To bring together leaders from academia, industry and user institutions to assess how academic advances are addressing real-world requirements. The workshop will strive to improve academic awareness of industrial and final user needs, and therefore direct research towards those needs. Simultaneously, the workshop will serve to inform industry and user representatives about existing research efforts that may meet their requirements. The workshop will also investigate how the ontology matching technology is going to evolve. 2. To conduct an extensive and rigorous evaluation of ontology matching approaches through the OAEI (Ontology Alignment Evaluation Initiative) 2010 campaign: http://oaei.ontologymatching.org/2010/. The particular focus of this year's OAEI campaign is on real-world specific matching tasks involving, e.g., biomedical ontologies and linked data. 
Therefore, the ontology matching evaluation initiative itself will provide a solid ground for discussion of how well the current approaches are meeting business needs. TOPICS of interest include but are not limited to: Business and use cases for matching (e.g., open linked data); Requirements to matching from specific domains; Application of matching techniques in real-world scenarios; Formal foundations and frameworks for ontology matching; Ontology matching patterns; Instance matching; Large-scale ontology matching evaluation; Performance of matching techniques; Matcher selection and self-configuration; Uncertainty in ontology matching; User involvement (including both technical and organizational aspects); Explanations in matching; Social and collaborative matching; Alignment management; Reasoning with alignments; Matching for traditional applications (e.g., information integration); Matching for dynamic applications (e.g., search, web-services). SUBMISSIONS Contributions to the workshop can be made in terms of technical papers and posters/statements of interest addressing different issues of ontology matching as well as participating in the OAEI 2010 campaign. Technical papers should be not longer than 12 pages using the LNCS Style: http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0 Posters/statements of interest should not exceed 2 pages and should be handled according to the guidelines for technical papers. All contributions should be prepared in PDF format and should be submitted through the workshop submission site at: http://www.easychair.org/conferences/?conf=om20100 Contributors to the OAEI 2010 campaign have to follow the campaign conditions and schedule at http://oaei.ontologymatching.org/2010/. *TENTATIVE* IMPORTANT DATES FOR TECHNICAL PAPERS AND POSTERS: September 1, 2010: Deadline for the submission of papers. September 27, 2010: Deadline for the notification of acceptance/rejection. October 12, 2010: Workshop camera ready copy submission. November 7, 2010: OM-2010, Shanghai International Convention Center, Shanghai, China ORGANIZING COMMITTEE 1. Pavel Shvaiko (Main contact) TasLab, Informatica Trentina SpA, Italy 2. J?r?me Euzenat INRIA & LIG, France 3. Fausto Giunchiglia University of Trento, Italy 4. Heiner Stuckenschmidt University of Mannheim, Germany 5. Ming Mao SAP Labs, USA 6. 
Isabel Cruz The University of Illinois at Chicago, USA PROGRAM COMMITTEE Paolo Besana, Universite de Rennes 1, France Olivier Bodenreider, National Library of Medicine, USA Marco Combetto, Informatica Trentina, Italy J?r?me David, INRIA & LIG, France AnHai Doan, University of Wisconsin and Kosmix Corp., USA Alfio Ferrara, Universita degli Studi di Milano, Italy Tom Heath, Talis, UK Wei Hu, Nanjing University, China Ryutaro Ichise, National Institute of Informatics, Japan Antoine Isaac, Vrije Universiteit Amsterdam, Netherlands Krzysztof Janowicz, Pennsylvania State University, USA Bin He, IBM, USA Yannis Kalfoglou, Ricoh Europe plc, UK Monika Lanzenberger, Vienna University of Technology, Austria Patrick Lambrix, Link?pings Universitet, Sweden Maurizio Lenzerini, University of Rome - Sapienza, Italy Juanzi Li, Tsinghua University, China Augusto Mabboni, Business Process Engineering, Italy Vincenzo Maltese, University of Trento, Italy Fiona McNeill, University of Edinburgh, UK Christian Meilicke, University of Mannheim, Germany Luca Mion, Informatica Trentina, Italy Peter Mork, The MITRE Corporation, USA Filippo Nardelli, Cogito, Italy Natasha Noy, Stanford University, USA Leo Obrst, The MITRE Corporation, USA Yefei Peng, Yahoo Labs, USA Erhard Rahm, University of Leipzig, Germany Fran?ois Scharffe, INRIA, France Luciano Serafini, Fondazione Bruno Kessler (IRST), Italy Kavitha Srinivas, IBM, USA Umberto Straccia, ISTI-C.N.R., Italy Andrei Tamilin, Fondazione Bruno Kessler (IRST), Italy Cassia Trojahn dos Santos, INRIA, France Lorenzino Vaccari, EC DG Environment, Italy Ludger van Elst, DFKI, Germany Yannis Velegrakis, University of Trento, Italy Shenghui Wang, Vrije Universiteit Amsterdam, Italy Baoshi Yan, Bosch Research, USA Rui Zhang, Jilin University, China Songmao Zhang, Chinese Academy of Sciences, China ------------------------------------------------------- More about ontology matching: http://www.ontologymatching.org/ http://book.ontologymatching.org/ ------------------------------------------------------- Best Regards, Pavel ------------------------------------------------------- Pavel Shvaiko, PhD Innovation and Research Project Manager TasLab, Informatica Trentina SpA, Italy http://www.ontologymatching.org/ http://www.infotn.it/ http://www.dit.unitn.it/~pavel/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From amharrison at gmail.com Mon Aug 2 17:50:49 2010 From: amharrison at gmail.com (Anthony M. Harrison) Date: Mon, 2 Aug 2010 17:50:49 -0400 Subject: [ACT-R-users] act-r interface In-Reply-To: References: Message-ID: <021F09BB-32F2-45D9-B99F-E6447E5C3F5A@gmail.com> > hello > i want to have interface for my act-r project but i dont know how to conect it with c# or java !!i mean?, which programing language should i have work to design interface for act-r? > how to design interface for it? > is it possible to design interface in c# and connect it to lisp?act-r ? > thanks > with best regards > Yet another option would be jACT-R. It is mostly feature complete* and has numerous avenues for integration. 
A few links worth looking at: - General Info http://jactr.org/node/51 http://jactr.org/node/2 - Interfacing - to "the world" (simulation or physical platform) http://jactr.org/node/62 http://jactr.org/node/96 - to software (an API based interface) http://jactr.org/node/23 - Divergences http://jactr.org/node/34 There are enough people using it to interface with systems (typically in more applied contexts) that I really should get the mailing list up and running. * I'm reluctant to publicize until jACT-R has full theoretic parity with the lisp. It's still lacking production compilation and some motor commands. From phil at cs.tu-berlin.de Fri Aug 6 14:41:47 2010 From: phil at cs.tu-berlin.de (phil) Date: Fri, 6 Aug 2010 20:41:47 +0200 Subject: [ACT-R-users] act-r interface In-Reply-To: <607395.32875.qm@web36501.mail.mud.yahoo.com> References: <607395.32875.qm@web36501.mail.mud.yahoo.com> Message-ID: Hi Paria, for interfacing ACT-R with Java you can use the following tool: http://www.zmms.tu-berlin.de/kogmod/tools/hello-java.html It works the same way Dan described it and it is open source. So if you want to interface c# or any other programming language, you can still use the lisp device file from this tool and write an additional package in the specific programming language you use. If you do so let me know, because we want to enhance Hello Java for making it possible to interface ACT-R with different programming languages. If you have any questions, don't hesitate to write me. Phil 2010/8/1 paria shams > hello > i want to have interface for my act-r project but i dont know how to conect > it with c# or java !!i mean , which programing language should i have work > to design interface for act-r? > how to design interface for it? > is it possible to design interface in c# and connect it to lisp act-r ? > thanks > with best regards > > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pavel at dit.unitn.it Mon Aug 16 14:48:47 2010 From: pavel at dit.unitn.it (Pavel Shvaiko) Date: Mon, 16 Aug 2010 20:48:47 +0200 Subject: [ACT-R-users] Final CFP: ISWC'10 workshop on Ontology Matching (OM-2010): submission deadline is approaching - 15 days left Message-ID: Apologies for cross-postings ---------------------------------------------------------------------------- FINAL CALL FOR PAPERS: submission deadline is approaching: 15 days left ---------------------------------------------------------------------------- The Fifth International Workshop on ONTOLOGY MATCHING (OM-2010) http://om2010.ontologymatching.org/ November 7, 2010, ISWC'10 Workshop Program, Shanghai, China BRIEF DESCRIPTION AND OBJECTIVES Ontology matching is a key interoperability enabler for the Semantic Web, as well as a useful tactic in some classical data integration tasks. It takes the ontologies as input and determines as output an alignment, that is, a set of correspondences between the semantically related entities of those ontologies. These correspondences can be used for various tasks, such as ontology merging and data translation. Thus, matching ontologies enables the knowledge and data expressed in the matched ontologies to interoperate. The workshop has two goals: 1. To bring together leaders from academia, industry and user institutions to assess how academic advances are addressing real-world requirements. 
The body of this final call repeats the 2 August OM-2010 call for papers above essentially verbatim (workshop description and goals, topics of interest, submission instructions, and the important dates, now no longer marked tentative) and is omitted here; the organizing committee is likewise unchanged, its sixth member being
Isabel Cruz The University of Illinois at Chicago, USA PROGRAM COMMITTEE Paolo Besana, Universite de Rennes 1, France Olivier Bodenreider, National Library of Medicine, USA Marco Combetto, Informatica Trentina, Italy J?r?me David, INRIA & LIG, France AnHai Doan, University of Wisconsin and Kosmix Corp., USA Alfio Ferrara, Universita degli Studi di Milano, Italy Tom Heath, Talis, UK Wei Hu, Nanjing University, China Ryutaro Ichise, National Institute of Informatics, Japan Antoine Isaac, Vrije Universiteit Amsterdam, Netherlands Krzysztof Janowicz, Pennsylvania State University, USA Bin He, IBM, USA Yannis Kalfoglou, Ricoh Europe plc, UK Monika Lanzenberger, Vienna University of Technology, Austria Patrick Lambrix, Link?pings Universitet, Sweden Maurizio Lenzerini, University of Rome - Sapienza, Italy Juanzi Li, Tsinghua University, China Augusto Mabboni, Business Process Engineering, Italy Vincenzo Maltese, University of Trento, Italy Fiona McNeill, University of Edinburgh, UK Christian Meilicke, University of Mannheim, Germany Luca Mion, Informatica Trentina, Italy Peter Mork, The MITRE Corporation, USA Filippo Nardelli, Cogito, Italy Natasha Noy, Stanford University, USA Leo Obrst, The MITRE Corporation, USA Yefei Peng, Yahoo Labs, USA Erhard Rahm, University of Leipzig, Germany Fran?ois Scharffe, INRIA, France Luciano Serafini, Fondazione Bruno Kessler (IRST), Italy Kavitha Srinivas, IBM, USA Umberto Straccia, ISTI-C.N.R., Italy Andrei Tamilin, Fondazione Bruno Kessler (IRST), Italy Cassia Trojahn dos Santos, INRIA, France Lorenzino Vaccari, EC DG Environment, Italy Ludger van Elst, DFKI, Germany Yannis Velegrakis, University of Trento, Italy Shenghui Wang, Vrije Universiteit Amsterdam, Italy Baoshi Yan, Bosch Research, USA Rui Zhang, Jilin University, China Songmao Zhang, Chinese Academy of Sciences, China ------------------------------------------------------- More about ontology matching: http://www.ontologymatching.org/ http://book.ontologymatching.org/ ------------------------------------------------------- Best Regards, Pavel ------------------------------------------------------- Pavel Shvaiko, PhD Innovation and Research Project Manager TasLab, Informatica Trentina SpA, Italy http://www.ontologymatching.org/ http://www.infotn.it/ http://www.dit.unitn.it/~pavel/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From y3rr0r at etri.re.kr Wed Aug 18 02:42:09 2010 From: y3rr0r at etri.re.kr (=?ks_c_5601-1987?B?udrDosf2?=) Date: Wed, 18 Aug 2010 15:42:09 +0900 Subject: [ACT-R-users] ACT-R and Visual attention problem Message-ID: <9FCA25A68967437AB075B5D46F74B68B@etri.info> An HTML attachment was scrubbed... URL: From db30 at andrew.cmu.edu Wed Aug 18 09:53:13 2010 From: db30 at andrew.cmu.edu (db30 at andrew.cmu.edu) Date: Wed, 18 Aug 2010 09:53:13 -0400 Subject: [ACT-R-users] ACT-R and Visual attention problem In-Reply-To: <9FCA25A68967437AB075B5D46F74B68B@etri.info> References: <9FCA25A68967437AB075B5D46F74B68B@etri.info> Message-ID: <952C72A4BBB477A8443D0369@act-r6.cmu.edu> --On Wednesday, August 18, 2010 3:42 PM +0900 "=?ks_c_5601-1987?B?udrDosf2?=" wrote: > > > Dear all, > > I have met the ACT-R recently because of my project. > A partial goal of the project is to find an attention region when an > arbitrary picture(such as street, subway, school, etc) is presented to a > monitor. > > My question is... > 1) Can the act-r be inputted a picture? No, there are no mechanisms built into ACT-R for processing arbitrary images. 
ACT-R's interaction with the world occurs through what is called a device, and it is the device which generates the features and objects which the model can see. The provided devices allow the model to see some simple GUI elements (text, buttons, and lines) which are drawn using either the GUI systems built into some Lisps or through a virtual GUI system built into ACT-R. The commands for using the virtual GUI are described in the experiment description documents of the tutorial units. If one wants other input or different features then it is possible to write a new device for the system to provide the visual components to a model. That new device is then responsible for parsing whatever external representation of the world is desired and creating the chunks which the vision module will use. Documentation on creating a new device can be found in the docs directory in the presentation titled "extending-actr.ppt". I know that some researchers have attempted to build more general image processing devices for ACT-R, but as far as I know none of those efforts are currently available as working systems. > 2) According to tutorial 2, act-r can find an attended location. What is > the criterion for finding the attended location? > 3) Can the act-r find an attended area in the inputted picture? > Sorry, I don't really understand what you're looking for with these two questions. I suspect that given the answer to the first one they are not really relevant, but if that is not the case then please feel free to elaborate on what you would like to know and ask again. Hope that helps, Dan From ritter at ist.psu.edu Wed Aug 18 10:24:55 2010 From: ritter at ist.psu.edu (Frank Ritter) Date: Wed, 18 Aug 2010 10:24:55 -0400 Subject: [ACT-R-users] ACT-R and Visual attention problem In-Reply-To: <952C72A4BBB477A8443D0369@act-r6.cmu.edu> References: <9FCA25A68967437AB075B5D46F74B68B@etri.info> <952C72A4BBB477A8443D0369@act-r6.cmu.edu> Message-ID: At 09:53 -0400 18/8/10, wrote: >--On Wednesday, August 18, 2010 3:42 PM +0900 >"-chang hyun" wrote: > > Dear all, >> >> I have met the ACT-R recently because of my project. >> A partial goal of the project is to find an attention region when an >> arbitrary picture(such as street, subway, school, etc) is presented to a >> monitor. >> >> My question is... >> 1) Can the act-r be inputted a picture? > >No, there are no mechanisms built into ACT-R for processing arbitrary >images. > >ACT-R's interaction with the world occurs through what is called a device, >and it is the device which generates the features and objects which the >model can see. The provided devices allow the model to see some simple >GUI elements (text, buttons, and lines) which are drawn using either the >GUI systems built into some Lisps or through a virtual GUI system built >into ACT-R. The commands for using the virtual GUI are described in the >experiment description documents of the tutorial units. > >If one wants other input or different features then it is possible to write >a new device for the system to provide the visual components to a model. >That new device is then responsible for parsing whatever external >representation of the world is desired and creating the chunks which the >vision module will use. Documentation on creating a new device can be found >in the docs directory in the presentation titled "extending-actr.ppt". 
> >I know that some researchers have attempted to build more general image >processing devices for ACT-R, but as far as I know none of those efforts >are currently available as working systems. > >> 2) According to tutorial 2, act-r can find an attended location. What is >> the criterion for finding the attended location? >> 3) Can the act-r find an attended area in the inputted picture? >> > >Sorry, I don't really understand what you're looking for with these >two questions. I suspect that given the answer to the first one they >are not really relevant, but if that is not the case then please >feel free to elaborate on what you would like to know and ask again. > >Hope that helps, >Dan Further to Dan's useful comments, there are three ways this can be done. (this is reviewed in a two papers, http://acs.ist.psu.edu/papers/ritterBJY00.pdf and http://acs.ist.psu.edu/papers/bassbr95.pdf 1) you model in ACT-R that it will or has seen something. This is somewhat like doing something in your head. 2) you use a display built with tools included with ACT-R (the former /PM tools, now included with ACT-R). This subjects can see and the model can see, but you have to duplicate the display. Not trivial, but possible. The tutorials have examples, I believe. If you can find an example in the tutorials related to what you want to do, creating a model becomes much easier. 3) you get SegMan from Rob St. Amant, or recreate it. Segman connects ACT-R to a graphic display by parsing a bitmap. We've used it, and papers at http://acs.ist.psu.edu/papers/ that have St. Amant as a co-author use it. This would be more work, currently, but ultimately, I think when combined with ACT-R/PM, more satisfying. cheers, Frank From marc.halbruegge at gmx.de Wed Aug 18 15:32:15 2010 From: marc.halbruegge at gmx.de (Marc =?ISO-8859-1?Q?Halbr=FCgge?=) Date: Wed, 18 Aug 2010 21:32:15 +0200 Subject: [ACT-R-users] ACT-R and Visual attention problem In-Reply-To: References: <9FCA25A68967437AB075B5D46F74B68B@etri.info> <952C72A4BBB477A8443D0369@act-r6.cmu.edu> Message-ID: <1282159935.1814.3.camel@mahal-i5> Hi, just for completeness, here's another way to input arbitrary pictures (from a webcam for example) into ACT-R: http://act-cv.sourceforge.net/ You'd need some background in computer vision and c++, though Best, Marc Am Mittwoch, den 18.08.2010, 10:24 -0400 schrieb Frank Ritter: > At 09:53 -0400 18/8/10, wrote: > >--On Wednesday, August 18, 2010 3:42 PM +0900 > >"-chang hyun" wrote: > > > Dear all, > >> > >> I have met the ACT-R recently because of my project. > >> A partial goal of the project is to find an attention region when an > >> arbitrary picture(such as street, subway, school, etc) is presented to a > >> monitor. > >> > >> My question is... > >> 1) Can the act-r be inputted a picture? > > > >No, there are no mechanisms built into ACT-R for processing arbitrary > >images. > > > >ACT-R's interaction with the world occurs through what is called a device, > >and it is the device which generates the features and objects which the > >model can see. The provided devices allow the model to see some simple > >GUI elements (text, buttons, and lines) which are drawn using either the > >GUI systems built into some Lisps or through a virtual GUI system built > >into ACT-R. The commands for using the virtual GUI are described in the > >experiment description documents of the tutorial units. 
> > > >If one wants other input or different features then it is possible to write > >a new device for the system to provide the visual components to a model. > >That new device is then responsible for parsing whatever external > >representation of the world is desired and creating the chunks which the > >vision module will use. Documentation on creating a new device can be found > >in the docs directory in the presentation titled "extending-actr.ppt". > > > >I know that some researchers have attempted to build more general image > >processing devices for ACT-R, but as far as I know none of those efforts > >are currently available as working systems. > > > >> 2) According to tutorial 2, act-r can find an attended location. What is > >> the criterion for finding the attended location? > >> 3) Can the act-r find an attended area in the inputted picture? > >> > > > >Sorry, I don't really understand what you're looking for with these > >two questions. I suspect that given the answer to the first one they > >are not really relevant, but if that is not the case then please > >feel free to elaborate on what you would like to know and ask again. > > > >Hope that helps, > >Dan > > Further to Dan's useful comments, there are three ways this can be > done. (this is reviewed in a two papers, > http://acs.ist.psu.edu/papers/ritterBJY00.pdf and > http://acs.ist.psu.edu/papers/bassbr95.pdf > 1) you model in ACT-R that it will or has seen something. This is > somewhat like doing something in your head. > > 2) you use a display built with tools included with ACT-R (the > former /PM tools, now included with ACT-R). This subjects can see > and the model can see, but you have to duplicate the display. Not > trivial, but possible. The tutorials have examples, I believe. If > you can find an example in the tutorials related to what you want to > do, creating a model becomes much easier. > > 3) you get SegMan from Rob St. Amant, or recreate it. Segman > connects ACT-R to a graphic display by parsing a bitmap. We've used > it, and papers at http://acs.ist.psu.edu/papers/ that have St. Amant > as a co-author use it. This would be more work, currently, but > ultimately, I think when combined with ACT-R/PM, more satisfying. > > > cheers, > > Frank > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users From y3rr0r at etri.re.kr Wed Aug 18 20:59:22 2010 From: y3rr0r at etri.re.kr (=?ks_c_5601-1987?B?udrDosf2?=) Date: Thu, 19 Aug 2010 09:59:22 +0900 Subject: [ACT-R-users] [Re]ACT-R and Visual attention problem Message-ID: <47C86A2C38784080884C79DEE1F80816@etri.info> An HTML attachment was scrubbed... URL: From db30 at andrew.cmu.edu Thu Aug 19 00:56:55 2010 From: db30 at andrew.cmu.edu (Dan Bothell) Date: Thu, 19 Aug 2010 00:56:55 -0400 Subject: [ACT-R-users] [Re]ACT-R and Visual attention problem In-Reply-To: <47C86A2C38784080884C79DEE1F80816@etri.info> References: <47C86A2C38784080884C79DEE1F80816@etri.info> Message-ID: <3F0195C0A450A362CB380C81@[192.168.1.10]> --On Thursday, August 19, 2010 9:59 AM +0900 "=?ks_c_5601-1987?B?udrDosf2?=" wrote: > *For 2,3 questions > What i want to know is ....How can the vision module(of the act-r) find an attention region ? > -> colors different to near area ? contrast ? disparity ? and so on? > How a model finds an item to attend depends on the features which the device makes available for the objects. 
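For example (a sketch, not part of the original message, written in the production syntax used in the tutorials; the goal chunk-type and the particular slot values are assumed), a model typically requests a location matching some of those features and then moves attention to whatever the device provided there:

  (p find-red-item
     =goal>
       isa       search          ; "search" chunk-type is assumed for this sketch
       state     find
   ==>
     +visual-location>
       isa       visual-location
       color     red             ; constrain the request by a feature
       :attended nil             ; prefer a location not yet attended
     =goal>
       state     attend)

  (p attend-to-item
     =goal>
       isa       search
       state     attend
     =visual-location>
       isa       visual-location
     ?visual>
       state     free
   ==>
     +visual>
       isa        move-attention
       screen-pos =visual-location
     =goal>
       state     process)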
For the built in devices the features are screen position (x and y coordinates), color, height, width, size, and kind. If one creates their own device then they can make any features they want available to the model in the feature chunks. If you have not done so already, I would suggest reading through and working on the exercises in units 2 and 3 of the ACT-R tutorial because they cover the standard operation of the vision module. Dan From marc.halbruegge at gmx.de Thu Aug 19 03:23:51 2010 From: marc.halbruegge at gmx.de (Marc =?ISO-8859-1?Q?Halbr=FCgge?=) Date: Thu, 19 Aug 2010 09:23:51 +0200 Subject: [ACT-R-users] [Re]ACT-R and Visual attention problem In-Reply-To: <47C86A2C38784080884C79DEE1F80816@etri.info> References: <47C86A2C38784080884C79DEE1F80816@etri.info> Message-ID: <1282202631.1832.8.camel@mahal-i5> Hi again, > My goal is to find an attended area of any picture with endogenous and > exogenous attention. So, actually I was looking for ways to be > inputted an image and parse the bitmap. I'm going to try your > suggestions(All gave me an appropriate information. thanks!!!). If I > get further information, I'll report it. > > *For 2,3 questions > What i want to know is ....How can the vision module(of the > act-r) find an attention region ? > -> colors different to near area ? contrast ? disparity ? and so > on? ACT-R does not build a saliency map of the image for you. If this is what you're looking for, this might be of interest: Tingting Xu , Thomas Pototschnig , Kolja K?hnlenz , Martin Buss, A high-speed multi-GPU implementation of bottom-up attention using CUDA, Proceedings of the 2009 IEEE international conference on Robotics and Automation, p.1120-1126, May 12-17, 2009, Kobe, Japan http://www.lsr.ei.tum.de/fileadmin/publications/Xu_2009_ICRA_GPU.pdf Best, Marc From y3rr0r at etri.re.kr Fri Aug 20 03:27:24 2010 From: y3rr0r at etri.re.kr (=?ks_c_5601-1987?B?udrDosf2?=) Date: Fri, 20 Aug 2010 16:27:24 +0900 Subject: [ACT-R-users] [Re] ACT-R and Visual attention problem Message-ID: <8E523C1B4806471DA28449BA7648B544@etri.info> An HTML attachment was scrubbed... URL: From db30 at andrew.cmu.edu Fri Aug 20 10:52:59 2010 From: db30 at andrew.cmu.edu (db30 at andrew.cmu.edu) Date: Fri, 20 Aug 2010 10:52:59 -0400 Subject: [ACT-R-users] [Re] ACT-R and Visual attention problem In-Reply-To: <8E523C1B4806471DA28449BA7648B544@etri.info> References: <8E523C1B4806471DA28449BA7648B544@etri.info> Message-ID: --On Friday, August 20, 2010 4:27 PM +0900 "=?ks_c_5601-1987?B?udrDosf2?=" wrote: > > > Hi, > > After reading Dan's first reply about my question, I've studied about how > to make a new device. > Then, some questions were occurred from on the start line. > > 1. What is a device? > According to the reference manual, device is the world with which a model > interacts. > and it says that hardware like computer can be a device. until here, I > understood!. > then, what is a new device Dan said? > a picture itself can be a new device? > Or, what Dan said is that I should make a new device interface for a > picture on a monitor? > Conceptually a device is exactly what the manual says: "the world with which a model interacts". It provides the visual information to the model from the world and takes the model's motor actions and passes them on to the world. The built in devices for ACT-R provide the model with a very simple computer interface as its world. >From the software perspective the device is any Lisp object which has a specific set of methods defined for it. 
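To make that concrete, here is a bare-bones sketch (not part of the original message): the generic-function names follow the extending-actr documentation, while the class, its contents, and the method bodies are invented purely for illustration.

  ;; Sketch of a custom device.  The "items" slot is assumed to hold
  ;; pre-built visual-location chunks.
  (defclass picture-device ()
    ((items :initarg :items :initform nil :accessor device-items)))

  ;; Return the visual-location chunks the model can currently "see".
  (defmethod build-vis-locs-for ((device picture-device) vis-mod)
    (declare (ignore vis-mod))
    (device-items device))

  ;; Return the visual-object chunk delivered when the model attends
  ;; to one of those locations.
  (defmethod vis-loc-to-obj ((device picture-device) vis-loc)
    (car (define-chunks-fct
             (list (list 'isa 'visual-object 'screen-pos vis-loc)))))

  ;; The motor-side methods of this sketch just report the model's actions.
  (defmethod device-handle-keypress ((device picture-device) key)
    (format t "Model pressed key ~s~%" key))
  (defmethod device-handle-click ((device picture-device))
    (format t "Model clicked the mouse~%"))
  (defmethod device-move-cursor-to ((device picture-device) loc)
    (format t "Model moved the cursor to ~s~%" loc))
  (defmethod device-speak-string ((device picture-device) string)
    (format t "Model said ~s~%" string))
  (defmethod get-mouse-coordinates ((device picture-device))
    (vector 0 0))
  (defmethod cursor-to-vis-loc ((device picture-device))
    nil)

Installing an instance of such a class with install-device and then calling proc-display would make whatever chunks sit in its items slot available to the model.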
Those methods provide the model with the visual location and visual object chunks that it can see and accept the motor actions which the model performs. Thus, if you want a model to be able to see a picture you will have to create a device which can convert a picture into chunks which the model can use. That device will also have to handle any motor actions the model makes to do what is appropriate for the task. > 2. In the ppt 'extending-actr' > To tell the truth, I'm a stranger to the LISP language and > ACT-R.(ofcourse, I've studied a little about those for several days) > anyway, so I hope you understand my status...:-) > I've seen the example(page 24, titled "actually creating and using one). > But I don't understand that. > I know it makes several chunk types, and define chunks for several > figures. > and install-device. > Is it all about making a new device? > I don't understand. > Please explain how to make a new device. > What is meaning of making a new device? > Page 24 is not an example of how to create any device, just how to create and use the example abstract device which is described in the slides that lead up to it (pp. 16-23). That device just presents the model with a set of visual chunks which are created in advance by the modeler (as is done in the example on p. 24) and prints out the model's motor actions as they occur. Only the last three steps of the example on p. 24 are relevant to all devices: it must be installed, proc-display tells the model to process the visual scene, and the model needs to be run. So, to make a new device you will have to decide on some "object" which you want to use to represent your device and that will be installed for the model. That can be essentially anything -- a list, a pathname, a true Lisp object, or whatever you feel is appropriate to do the work needed. Then you will have to write the eight methods required for a device, as described in those slides, to provide the necessary interface for the model. Alternatively, you could attempt to extend one of the built in ACT-R devices that interface to the specific Lisp GUI systems or the virtual windows. However, that can be more difficult than starting from scratch since it also requires understanding how those devices work. Either of those is going to require writing Lisp code and an understanding of how the visual and motor modules of ACT-R operate. So, before trying to write a device you will probably want to work through the ACT-R tutorial to get a solid understanding of the ACT-R basics. You will probably also need to work through some Lisp book or tutorial so that you will be able to write the Lisp code to do what you want. Dan From troy.kelley at us.army.mil Fri Aug 20 14:34:53 2010 From: troy.kelley at us.army.mil (Kelley, Troy (Civ,ARL/HRED)) Date: Fri, 20 Aug 2010 14:34:53 -0400 Subject: [ACT-R-users] [Re]ACT-R and Visual attention problem (UNCLASSIFIED) In-Reply-To: <1282202631.1832.8.camel@mahal-i5> References: <47C86A2C38784080884C79DEE1F80816@etri.info> <1282202631.1832.8.camel@mahal-i5> Message-ID: <2D30123DFDFF1046B3A9CF64B6D9AC9061652E@ARLABML03.DS.ARL.ARMY.MIL> Classification: UNCLASSIFIED Caveats: NONE There is also some good saliency map work done by Laurent Itti at USC, and most of his code is available on the net. See here http://ilab.usc.edu/bu/ for his bottom up visual attention code in C++. I would be nice to see this integrated with ACT-R. Troy D. Kelley RDRL-HRS-E Human Research and Engineering Directorate (HRED) U.S. 
Army Research Laboratory Aberdeen, MD 21005 Phone: 410-278-5869 -----Original Message----- From: act-r-users-bounces at act-r.psy.cmu.edu [mailto:act-r-users-bounces at act-r.psy.cmu.edu] On Behalf Of Marc Halbr?gge Sent: Thursday, August 19, 2010 3:24 AM To: act-r mailinglist Subject: Re: [ACT-R-users] [Re]ACT-R and Visual attention problem Hi again, > My goal is to find an attended area of any picture with endogenous and > exogenous attention. So, actually I was looking for ways to be > inputted an image and parse the bitmap. I'm going to try your > suggestions(All gave me an appropriate information. thanks!!!). If I > get further information, I'll report it. > > *For 2,3 questions > What i want to know is ....How can the vision module(of the > act-r) find an attention region ? > -> colors different to near area ? contrast ? disparity ? and so > on? ACT-R does not build a saliency map of the image for you. If this is what you're looking for, this might be of interest: Tingting Xu , Thomas Pototschnig , Kolja K?hnlenz , Martin Buss, A high-speed multi-GPU implementation of bottom-up attention using CUDA, Proceedings of the 2009 IEEE international conference on Robotics and Automation, p.1120-1126, May 12-17, 2009, Kobe, Japan http://www.lsr.ei.tum.de/fileadmin/publications/Xu_2009_ICRA_GPU.pdf Best, Marc _______________________________________________ ACT-R-users mailing list ACT-R-users at act-r.psy.cmu.edu http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users Classification: UNCLASSIFIED Caveats: NONE -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 5208 bytes Desc: not available URL: From y3rr0r at etri.re.kr Mon Aug 23 21:40:40 2010 From: y3rr0r at etri.re.kr (=?ks_c_5601-1987?B?udrDosf2?=) Date: Tue, 24 Aug 2010 10:40:40 +0900 Subject: [ACT-R-users] [Re]ACT-R and Visual attention problem Message-ID: An HTML attachment was scrubbed... URL: From db30 at andrew.cmu.edu Mon Aug 23 22:50:54 2010 From: db30 at andrew.cmu.edu (Dan Bothell) Date: Mon, 23 Aug 2010 22:50:54 -0400 Subject: [ACT-R-users] [Re]ACT-R and Visual attention problem In-Reply-To: References: Message-ID: <5625750AEAE727CF9140C612@[192.168.1.10]> --On Tuesday, August 24, 2010 10:40 AM +0900 "=?ks_c_5601-1987?B?udrDosf2?=" wrote: > > Hi, > > Great thanks for answers. > I'm examining those suggestions (Dan, Frank, Marc, Troy's ). > Firstly, I'm trying to understand Dan's method. But don't understand yet. > As Dan said, I need more understanding for act-r system and lisp language. > > Anyway, 3 questions (for Dan's answer) are also occurred now. > 1. Role of a device? > -->You said "If you want a model to be able to see a picture, you will have to create a device > which can convert a picture into chunks which the model can use." > --> This sentence let me understand the role of a device and show me how to be inputted a > picture. > --> (Except for ACT-CV, SegMan, Vision C++ toolkit and so on) > Only way for a model to get a picture is a converted chunk of the picture? > --> For that, Should our new device convert any picture to several chunks automatically? > --> For example) there are people and cars in a picture. then should the device convert the > picture into oval, rectangular, line, etc chunks? > --> Of course, I know there is another role of a device such as Interfacing role. > ACT-R's vision module is a model of visual attention and assumes the world is composed of "objects". 
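For instance (a sketch, not part of the original message; the kinds, colors, and coordinates are all assumed, and a current model is required), a picture-parsing device might describe a person and a car in a street scene as visual-location chunks along these lines, returned from its build-vis-locs-for method:

  ;; Sketch only: "person" and "car" as kind values, and every number and
  ;; color here, are made up for illustration.
  (define-chunks-fct
   '((person-loc isa visual-location kind person color blue
                 screen-x 120 screen-y 80 height 60 width 20)
     (car-loc    isa visual-location kind car    color red
                 screen-x 240 screen-y 150 height 40 width 90)))

The matching visual-object chunks, with whatever extra slots the modeler chooses, would then come from the device's vis-loc-to-obj method when the model attends to one of those locations.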
All the model sees are the feature and object chunks which the device makes available. It has no real commitment as to what features those objects should have. It is up to the modeler to decide what those features and objects are for any new type of input, and creating those items is the role of the device. > 2. I can't see any device. > --> For understanding about a device, I wish to see an example source code for a device. > --> In order to make a new device..... > --> But I can't find it. > --> Please Can't you show me an example? The source code for the default devices are found in the /devices directory in separate directories for each of the particular systems available (ACL, LispWorks, MCL, and the virtual windows). The Lisp code which implements the example device is on the slides of the "extending-actr" presentation, and can also be found in the /examples/simple-new-device.lisp file. There are three other example devices which show some more advanced techniques one can use with a device found in the "new-vision*.lisp" files in the examples directory. > > 3. like 'Add-text-to-exp-window' > --> I saw the GUI function 'Add-text-to-exp-window' > --> This function inspires me. > --> like that function, If we make a 'add-picture-to-exp-window' function, the first step of our > goal will be completed. > --> and then, a device needed to convert the picture into chunks. > --> Is it right? > --> Then, Do you know how to create a GUI function? > The ACT-R GUI functions (everything related to "experiment windows") are built for use with the default devices. So, one could write an "add-picture" command if one knows how to use the particular windowing implementation being used in that Lisp and how the device that goes with it works. The code for the ACT-R GUI commands is found with the devices in the /devices directories. I will note however that the internal details of the ACT-R code are generally below the level at which I provide support. So, you are basically on your own if you want to modify or extend things at that level instead of writing your own device. Dan From CSIE2011CFP at cust.edu.cn Mon Aug 23 23:27:53 2010 From: CSIE2011CFP at cust.edu.cn (Mingfen Li) Date: Tue, 24 Aug 2010 11:27:53 +0800 Subject: [ACT-R-users] Congress on Computer Science/Engineering, Changchun, China (EI Compendex/ISTP/IEEE Xplore) Message-ID: <482619308.03750@cust.edu.cn> Dear Author, 2011 2nd World Congress on Computer Science and Information Engineering (CSIE 2011) 17-19 June 2011? Changchun, China http://world-research-institute.org/conferences/CSIE/2011 Call for Papers & Exhibits CSIE 2011 intends to be a global forum for researchers and engineers to present and discuss recent advances and new techniques in computer science and information engineering. Topics of interests include, but are not limited to, data mining & data engineering, intelligent systems, software engineering, computer applications, communications & networking, computer hardware, VLSI, & embedded systems, multimedia & signal processing, computer control, robotics, and automation. All papers in the CSIE 2011 conference proceedings will be indexed in Ei Compendex and ISTP, as well as included in the IEEE Xplore (The previous conference CSIE 2009 has already been indexed in Ei Compendex and included in the IEEE Xplore). IEEE Catalog Number: CFP1160F-PRT. ISBN: 978-1-4244-8361-7. Changchun is the capital city of Jilin province, situated in the central section of China's northeast region. 
There are many natural attractions to entertain residents and visitors around Changchun. The grand Changbai Mountain renowned for its spectacular landscape, charming scenery, glamorous legends, as well as rich resources and products, has been praised as the first mountain in the northeast, outstanding as one of the China?s top-ten famous mountains. Other attractions in or around Changchun include Songhua lake (Songhuahu), Jingyue Lake (Jingyuetan), Changchun Movie Wonderland, Changchun Puppet Palace (Weihuanggong), Changchun World Sculpture Park, and Changchun World Landscape Park, etc. Important Dates: Paper Submission Deadline: 20 September 2010 Review Notification: 15 November 2010 Final Paper and Author Registration Deadline: 6 January 2011 Contact Information If you have any inquiries, please email us at CSIE2011 at cust.edu.cn Please feel free to forward to others. To unsubscribe, please reply with ?unsubscribe act-r-users at andrew.cmu.edu ? as your email subject. With kind regards, Mingfen Li CSIE 2011 Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From mc.isv at cbs.dk Tue Aug 24 12:59:52 2010 From: mc.isv at cbs.dk (Michael Carl) Date: Tue, 24 Aug 2010 18:59:52 +0200 Subject: [ACT-R-users] Model of writing Message-ID: Hello, I have writing data (from keystroke logging) and there seems to be some regularities across different writers, which I'd like to simulate in a (low-level) model of writing. The ACT-R tutorial in unit2 describes a typing event to take more than 500ms but our data has also instances of inter-keystroke delay of 50ms (or even less). The tutorial says that this "press-key request obviously does not model the typing skills of an expert typist", true, but do you know whether there is any such attempt to model low-level typing behaviour (in ACT-R), similar maybe to the reading model? Michael From bej at cs.cmu.edu Tue Aug 24 13:18:03 2010 From: bej at cs.cmu.edu (Bonnie John) Date: Tue, 24 Aug 2010 13:18:03 -0400 Subject: [ACT-R-users] Model of writing In-Reply-To: References: Message-ID: <4C73FECB.9080100@cs.cmu.edu> We have a lower-level model of typing implemented in ACT-R tht is "under-the-hood" of CogTool. It is a mixture of my ages-old PhD thesis and what we cold do in ACT-R without changing the entire structure of its hand and fingers. So it still has remnants of ACT-R's typing assumptions, like the the hand always goes back to the home-row between each keystroke, but we have relaxed some of the other assumptions in the standard ACT-R typing model and so have sped it up to being about a 40 wpm typist instead of the 20 wpm typist it is in the general release. It was always on our ToDo list to make this code available to the ACT-R community similar to how EMMA is available, but I guess that has fallen through a crack. But we would be happy to share what we have. Don (Morrison), can you please respond with an explanation of how CogTool produces ACT-R code that types as fast as it does and how that ACT-R code does what it does? Thanks, Bonnie Michael Carl wrote: > Hello, > > I have writing data (from keystroke logging) and there seems to be some > regularities across different writers, which I'd like to simulate in a > (low-level) model of writing. The ACT-R tutorial in unit2 describes a typing > event to take more than 500ms but our data has also instances of inter-keystroke > delay of 50ms (or even less). 
The tutorial says that this "press-key request > obviously does not model the typing skills of an expert typist", true, but do > you know whether there is any such attempt to model low-level typing behaviour > (in ACT-R), similar maybe to the reading model? > > Michael > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > > From db30 at andrew.cmu.edu Tue Aug 24 15:29:20 2010 From: db30 at andrew.cmu.edu (db30 at andrew.cmu.edu) Date: Tue, 24 Aug 2010 15:29:20 -0400 Subject: [ACT-R-users] Model of writing In-Reply-To: <4C73FECB.9080100@cs.cmu.edu> References: <4C73FECB.9080100@cs.cmu.edu> Message-ID: --On Tuesday, August 24, 2010 1:18 PM -0400 Bonnie John wrote: > We have a lower-level model of typing implemented in ACT-R tht is > "under-the-hood" of CogTool. It is a mixture of my ages-old PhD thesis > and what we cold do in ACT-R without changing the entire structure of > its hand and fingers. So it still has remnants of ACT-R's typing > assumptions, like the the hand always goes back to the home-row between > each keystroke, but we have relaxed some of the other assumptions in the > standard ACT-R typing model and so have sped it up to being about a 40 > wpm typist instead of the 20 wpm typist it is in the general release. > One note to make about that is the only assumptions about fingers returning to the home-row are with the use of the press-key and peck-recoil actions. If one programs the specific finger movements with peck and punch actions then the fingers will stay at the key that was hit. If you're not already taking advantage of that you may be able to speed up your CogTool typist even further. Of course the complication is that to do that you would also have to have something that computes the necessary geometry from the current finger position to the target key instead of just the home-row to target key geometries which are available from press-key. As a simple demonstration of that, attached is a simple model which types two keys in sequence using the same finger twice. The first time using two press-key actions and the second using explicit peck actions. The inter-key time for the second pair is less than for the first and that should be true for all valid one-finger pairs. Dan -------------- next part -------------- A non-text attachment was scrubbed... Name: peck-test.lisp Type: application/octet-stream Size: 3812 bytes Desc: not available URL: From susan.chipman at gmail.com Tue Aug 24 17:41:21 2010 From: susan.chipman at gmail.com (Susan Chipman) Date: Tue, 24 Aug 2010 17:41:21 -0400 Subject: [ACT-R-users] Model of writing In-Reply-To: References: <4C73FECB.9080100@cs.cmu.edu> Message-ID: I thought I might remind people that typing was one of the first behaviors modeled by the PDP folks. They had data showing that multiple typing actions went on in parallel -- that is, the actions of future fingers were beginning before the current action was completed. Don't know if these ACT-R models are dealing with that. Susan Chipman On Tue, Aug 24, 2010 at 3:29 PM, wrote: > > > --On Tuesday, August 24, 2010 1:18 PM -0400 Bonnie John > wrote: > > We have a lower-level model of typing implemented in ACT-R tht is >> "under-the-hood" of CogTool. It is a mixture of my ages-old PhD thesis >> and what we cold do in ACT-R without changing the entire structure of >> its hand and fingers. 
So it still has remnants of ACT-R's typing >> assumptions, like the the hand always goes back to the home-row between >> each keystroke, but we have relaxed some of the other assumptions in the >> standard ACT-R typing model and so have sped it up to being about a 40 >> wpm typist instead of the 20 wpm typist it is in the general release. >> >> > One note to make about that is the only assumptions about fingers returning > to the home-row are with the use of the press-key and peck-recoil actions. > If one programs the specific finger movements with peck and punch actions > then the fingers will stay at the key that was hit. If you're not already > taking advantage of that you may be able to speed up your CogTool typist > even further. Of course the complication is that to do that you would > also have to have something that computes the necessary geometry from the > current finger position to the target key instead of just the home-row to > target key geometries which are available from press-key. > > As a simple demonstration of that, attached is a simple model which types > two keys in sequence using the same finger twice. The first time using > two press-key actions and the second using explicit peck actions. The > inter-key time for the second pair is less than for the first and that > should be true for all valid one-finger pairs. > > Dan > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bej at cs.cmu.edu Tue Aug 24 17:57:44 2010 From: bej at cs.cmu.edu (Bonnie John) Date: Tue, 24 Aug 2010 17:57:44 -0400 Subject: [ACT-R-users] Model of writing In-Reply-To: References: <4C73FECB.9080100@cs.cmu.edu> Message-ID: <4C744058.80902@cs.cmu.edu> Susan, what you say is certainly true, but adding my own bit of history, part of my thesis compared my serial approximation to the Rumelhart and Norman PDP model and I matched the inter-keystroke time for digraphs better than they did. My serial model had an average absolute percent error of 0.9% and theirs was 5.7%. That part of my thesis used a model of each finger's position and a very simple approximation of where all the fingers on the hand move to when one finger hits a key. Then it just used Fitts Law for the horizontal movement to the next key. This is at a lower level than what we are currently doing in CogTool/ACT-R, but it could be done in ACT-R if we programmed the fingers to know where they are in space and move according to the algorithm in my thesis. Anybody want to implement that? Bonnie Reference for the serial typing algorithm: John, B. E. (1996) TYPIST: A Theory of Performance In Skilled Typing. Human-Computer Interaction , 11 (4), pp.321-355. Figure 16 and 17. The PDP model I compared to was in Rumelhart, D. E. & Noman, D. A (1982) Simulating a Skilled Typist: A Study of Skilled Cognitive-Motor Performance. Cognitive Science, Volume 6, Issue 1, pages 1?36. Susan Chipman wrote: > I thought I might remind people that typing was one of the first > behaviors modeled by the PDP folks. They had data showing that > multiple typing actions went on in parallel -- that is, the actions of > future fingers were beginning before the current action was completed. > Don't know if these ACT-R models are dealing with that. 
> > Susan Chipman > > On Tue, Aug 24, 2010 at 3:29 PM, > wrote: > > > > --On Tuesday, August 24, 2010 1:18 PM -0400 Bonnie John > > wrote: > > We have a lower-level model of typing implemented in ACT-R tht is > "under-the-hood" of CogTool. It is a mixture of my ages-old > PhD thesis > and what we cold do in ACT-R without changing the entire > structure of > its hand and fingers. So it still has remnants of ACT-R's typing > assumptions, like the the hand always goes back to the > home-row between > each keystroke, but we have relaxed some of the other > assumptions in the > standard ACT-R typing model and so have sped it up to being > about a 40 > wpm typist instead of the 20 wpm typist it is in the general > release. > > > One note to make about that is the only assumptions about fingers > returning > to the home-row are with the use of the press-key and peck-recoil > actions. > If one programs the specific finger movements with peck and punch > actions > then the fingers will stay at the key that was hit. If you're not > already > taking advantage of that you may be able to speed up your CogTool > typist > even further. Of course the complication is that to do that you would > also have to have something that computes the necessary geometry > from the > current finger position to the target key instead of just the > home-row to > target key geometries which are available from press-key. > > As a simple demonstration of that, attached is a simple model > which types > two keys in sequence using the same finger twice. The first time using > two press-key actions and the second using explicit peck actions. The > inter-key time for the second pair is less than for the first and that > should be true for all valid one-finger pairs. > > Dan > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > > > ------------------------------------------------------------------------ > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > From bej at cs.cmu.edu Tue Aug 24 18:01:41 2010 From: bej at cs.cmu.edu (Bonnie John) Date: Tue, 24 Aug 2010 18:01:41 -0400 Subject: [ACT-R-users] Model of writing - PS In-Reply-To: References: <4C73FECB.9080100@cs.cmu.edu> Message-ID: <4C744145.1040009@cs.cmu.edu> Oh, yeah, more history: when my serial approximation was submitted to a psychology journal, one of the reviewers called it "evil engineering" because it perpetuated the myth that typing could be approximated with a serial model even though videos of people typing clearly show that the fingers move in parallel. Of course, all models are approximations and why one is "evil" and one is, what, "angelic"???, I dunno. ;-) It got published in that "evil" journal, HCI. Bonnie Susan Chipman wrote: > I thought I might remind people that typing was one of the > first behaviors modeled by the PDP folks. They had data showing that > multiple typing actions went on in parallel -- that is, the actions of > future fingers were beginning before the current action was > completed. Don't know if these ACT-R models are dealing with that. > > Susan Chipman > > On Tue, Aug 24, 2010 at 3:29 PM, > wrote: > > > > --On Tuesday, August 24, 2010 1:18 PM -0400 Bonnie John > > wrote: > > We have a lower-level model of typing implemented in ACT-R tht is > "under-the-hood" of CogTool. 
It is a mixture of my ages-old > PhD thesis > and what we cold do in ACT-R without changing the entire > structure of > its hand and fingers. So it still has remnants of ACT-R's typing > assumptions, like the the hand always goes back to the > home-row between > each keystroke, but we have relaxed some of the other > assumptions in the > standard ACT-R typing model and so have sped it up to being > about a 40 > wpm typist instead of the 20 wpm typist it is in the general > release. > > > One note to make about that is the only assumptions about fingers > returning > to the home-row are with the use of the press-key and peck-recoil > actions. > If one programs the specific finger movements with peck and punch > actions > then the fingers will stay at the key that was hit. If you're not > already > taking advantage of that you may be able to speed up your CogTool > typist > even further. Of course the complication is that to do that you would > also have to have something that computes the necessary geometry > from the > current finger position to the target key instead of just the > home-row to > target key geometries which are available from press-key. > > As a simple demonstration of that, attached is a simple model > which types > two keys in sequence using the same finger twice. The first time > using > two press-key actions and the second using explicit peck actions. The > inter-key time for the second pair is less than for the first and that > should be true for all valid one-finger pairs. > > Dan > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > > > ------------------------------------------------------------------------ > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > From mc.isv at cbs.dk Wed Aug 25 05:23:04 2010 From: mc.isv at cbs.dk (Michael Carl) Date: Wed, 25 Aug 2010 11:23:04 +0200 Subject: [ACT-R-users] Model of writing In-Reply-To: <4C744058.80902@cs.cmu.edu> References: <4C73FECB.9080100@cs.cmu.edu> <4C744058.80902@cs.cmu.edu> Message-ID: thanks very much for the program and the literature pointer (I will go through it)! I have run Dan's peck-test lisp model. The ACT-R press-key action in the lisp code produces "rt" in 900ms and the peck actions for the same string takes 700ms. Comparing this to our (Danish) data: there are two peaks in the duration distribution for "rt": one around 75ms and another one around 180ms (there are more instances at the second peak), while for instance the most frequent bigram "er" (on the keyboard just to the left of "rt", and more likely typed with two fingers) has most occurrences around 60-90ms. A frequent trigrams like "ing" (even in Danish:-) is most of the time produced in 230-250ms and we also have some 6-grams produced in less than 600ms (always for n-1 inter-keystroke intervals). In principle, if two or more fingers are involved there could be any time lapse between the keystrokes, zero and even negative when the keys occur in the reverse order e.g. frequent typo hte (but maybe you would model that differently). If you splash both hands on the keyboard then almost all keys could be produced simultaneously - all of which shows that typing actions go on in parallel as Susan mentions. 
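(A side note on units: to relate these inter-keystroke intervals to the words-per-minute figures used elsewhere in this thread, one can use the usual convention that a "word" is 5 keystrokes, as Dan does below. The two helper functions here are purely illustrative, a minimal sketch rather than anything from ACT-R or from the logging tools behind the Danish data.)

;; Rough conversions between inter-keystroke interval and typing speed,
;; assuming a "word" is 5 keystrokes.  Illustrative only.
(defun interval->wpm (ms-per-keystroke)
  (/ 60000.0 (* 5 ms-per-keystroke)))

(defun wpm->interval (wpm)
  (/ 60000.0 (* 5 wpm)))

;; (interval->wpm 180) => ~67 wpm   ; the slower "rt" peak above
;; (interval->wpm 75)  => 160 wpm   ; the faster peak
;; (wpm->interval 40)  => 300 ms    ; the 40 wpm CogTool typist mentioned earlier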
There does not seems to be an easy way to simulate something like this in ACT-R, even if we have the distributions and given also that there is no indication which fingers were used for a particular typing event in our data. Any suggestions? Michael 2010/8/24 Bonnie John : > Susan, what you say is certainly true, but adding my own bit of history, > part of my thesis compared my serial approximation to the Rumelhart and > Norman PDP model and I matched the inter-keystroke time for digraphs > better than they did. My serial model had an average absolute percent > error of 0.9% and theirs was 5.7%. > > That part of my thesis used a model of each finger's position and a very > simple approximation of where all the fingers on the hand move to when > one finger hits a key. Then it just used Fitts Law for the horizontal > movement to the next key. This is at a lower level than what we are > currently doing in CogTool/ACT-R, but it could be done in ACT-R if we > programmed the fingers to know where they are in space and move > according to the algorithm in my thesis. Anybody want to implement that? > > Bonnie > > Reference for the serial typing algorithm: > John, B. E. (1996) TYPIST: A Theory of Performance In Skilled Typing. > Human-Computer Interaction , 11 (4), pp.321-355. > Figure 16 and 17. > The PDP model I compared to was in > Rumelhart, D. E. & Noman, D. A (1982) Simulating a Skilled Typist: A > Study of Skilled Cognitive-Motor Performance. Cognitive Science, Volume > 6, Issue 1, pages 1?36. > > > Susan Chipman wrote: >> I thought I might remind people that typing was one of the first >> behaviors modeled by the PDP folks. They had data showing that >> multiple typing actions went on in parallel -- that is, the actions of >> future fingers were beginning before the current action was completed. >> Don't know if these ACT-R models are dealing with that. >> >> Susan Chipman >> >> On Tue, Aug 24, 2010 at 3:29 PM, > > wrote: >> >> >> >> ? ? --On Tuesday, August 24, 2010 1:18 PM -0400 Bonnie John >> ? ? > wrote: >> >> ? ? ? ? We have a lower-level model of typing implemented in ACT-R tht is >> ? ? ? ? "under-the-hood" of CogTool. It is a mixture of my ages-old >> ? ? ? ? PhD thesis >> ? ? ? ? and what we cold do in ACT-R without changing the entire >> ? ? ? ? structure of >> ? ? ? ? its hand and fingers. So it still has remnants of ACT-R's typing >> ? ? ? ? assumptions, like the the hand always goes back to the >> ? ? ? ? home-row between >> ? ? ? ? each keystroke, but we have relaxed some of the other >> ? ? ? ? assumptions in the >> ? ? ? ? standard ACT-R typing model and so have sped it up to being >> ? ? ? ? about a 40 >> ? ? ? ? wpm typist instead of the 20 wpm typist it is in the general >> ? ? ? ? release. >> >> >> ? ? One note to make about that is the only assumptions about fingers >> ? ? returning >> ? ? to the home-row are with the use of the press-key and peck-recoil >> ? ? actions. >> ? ? If one programs the specific finger movements with peck and punch >> ? ? actions >> ? ? then the fingers will stay at the key that was hit. If you're not >> ? ? already >> ? ? taking advantage of that you may be able to speed up your CogTool >> ? ? typist >> ? ? even further. Of course the complication is that to do that you would >> ? ? also have to have something that computes the necessary geometry >> ? ? from the >> ? ? current finger position to the target key instead of just the >> ? ? home-row to >> ? ? target key geometries which are available from press-key. >> >> ? ? 
As a simple demonstration of that, attached is a simple model >> which types >> two keys in sequence using the same finger twice. The first time using >> two press-key actions and the second using explicit peck actions. The >> inter-key time for the second pair is less than for the first and that >> should be true for all valid one-finger pairs. >> >> Dan >> _______________________________________________ >> ACT-R-users mailing list >> ACT-R-users at act-r.psy.cmu.edu >> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> ACT-R-users mailing list >> ACT-R-users at act-r.psy.cmu.edu >> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users >> > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > > > > > From db30 at andrew.cmu.edu Wed Aug 25 10:02:59 2010 From: db30 at andrew.cmu.edu (db30 at andrew.cmu.edu) Date: Wed, 25 Aug 2010 10:02:59 -0400 Subject: [ACT-R-users] Model of writing In-Reply-To: References: <4C73FECB.9080100@cs.cmu.edu> <4C744058.80902@cs.cmu.edu> Message-ID: My message was really only intended as a suggestion to Bonnie as a way that they may be able to speed up the typing extension which they've implemented for CogTool given the assumptions that she listed. I don't think on its own the peck action would be sufficient for your work, and I don't know enough about the typing literature to offer any advice as to how best to model it. Given that you did run that model however, I want to make sure that you are comparing the right times from the model to your data. So, first let me make sure I understand what you're measuring. Are the 75ms and 180ms times for typing "rt" that you list the time between the "r" and the "t" hits or the total time from when the person is instructed to type such a sequence? I assume that it's the first of those, in which case the corresponding times from the two different mechanisms in the model run would be 300ms and 200ms.
> > In principle, if two or more fingers are involved there could be any time > lapse between the keystrokes, zero and even negative when the keys occur > in the reverse order e.g. frequent typo hte (but maybe you would model > that differently). If you splash both hands on the keyboard then almost > all keys could be produced simultaneously - all of which shows that > typing actions go on in parallel as Susan mentions. > > There does not seems to be an easy way to simulate something like this in > ACT-R, even if we have the distributions and given also that there is no > indication which fingers were used for a particular typing event in our > data. > > Any suggestions? > > Michael > > > 2010/8/24 Bonnie John : >> Susan, what you say is certainly true, but adding my own bit of history, >> part of my thesis compared my serial approximation to the Rumelhart and >> Norman PDP model and I matched the inter-keystroke time for digraphs >> better than they did. My serial model had an average absolute percent >> error of 0.9% and theirs was 5.7%. >> >> That part of my thesis used a model of each finger's position and a very >> simple approximation of where all the fingers on the hand move to when >> one finger hits a key. Then it just used Fitts Law for the horizontal >> movement to the next key. This is at a lower level than what we are >> currently doing in CogTool/ACT-R, but it could be done in ACT-R if we >> programmed the fingers to know where they are in space and move >> according to the algorithm in my thesis. Anybody want to implement that? >> >> Bonnie >> >> Reference for the serial typing algorithm: >> John, B. E. (1996) TYPIST: A Theory of Performance In Skilled Typing. >> Human-Computer Interaction , 11 (4), pp.321-355. >> Figure 16 and 17. >> The PDP model I compared to was in >> Rumelhart, D. E. & Noman, D. A (1982) Simulating a Skilled Typist: A >> Study of Skilled Cognitive-Motor Performance. Cognitive Science, Volume >> 6, Issue 1, pages 1?36. >> >> >> Susan Chipman wrote: >>> I thought I might remind people that typing was one of the first >>> behaviors modeled by the PDP folks. They had data showing that >>> multiple typing actions went on in parallel -- that is, the actions of >>> future fingers were beginning before the current action was completed. >>> Don't know if these ACT-R models are dealing with that. >>> >>> Susan Chipman >>> >>> On Tue, Aug 24, 2010 at 3:29 PM, >> > wrote: >>> >>> >>> >>> ? ? --On Tuesday, August 24, 2010 1:18 PM -0400 Bonnie John >>> ? ? > wrote: >>> >>> ? ? ? ? We have a lower-level model of typing implemented in ACT-R tht >>> is ? ? ? ? "under-the-hood" of CogTool. It is a mixture of my ages-old >>> ? ? ? ? PhD thesis >>> ? ? ? ? and what we cold do in ACT-R without changing the entire >>> ? ? ? ? structure of >>> ? ? ? ? its hand and fingers. So it still has remnants of ACT-R's typing >>> ? ? ? ? assumptions, like the the hand always goes back to the >>> ? ? ? ? home-row between >>> ? ? ? ? each keystroke, but we have relaxed some of the other >>> ? ? ? ? assumptions in the >>> ? ? ? ? standard ACT-R typing model and so have sped it up to being >>> ? ? ? ? about a 40 >>> ? ? ? ? wpm typist instead of the 20 wpm typist it is in the general >>> ? ? ? ? release. >>> >>> >>> ? ? One note to make about that is the only assumptions about fingers >>> ? ? returning >>> ? ? to the home-row are with the use of the press-key and peck-recoil >>> ? ? actions. >>> ? ? If one programs the specific finger movements with peck and punch >>> ? ? actions >>> ? ? 
then the fingers will stay at the key that was hit. If you're not >>> already >>> taking advantage of that you may be able to speed up your CogTool >>> typist >>> even further. Of course the complication is that to do that you >>> would also have to have something that computes the necessary >>> geometry from the >>> current finger position to the target key instead of just the >>> home-row to >>> target key geometries which are available from press-key. >>> >>> As a simple demonstration of that, attached is a simple model >>> which types >>> two keys in sequence using the same finger twice. The first time >>> using two press-key actions and the second using explicit peck >>> actions. The inter-key time for the second pair is less than for >>> the first and that should be true for all valid one-finger pairs. >>> >>> Dan >>> _______________________________________________ >>> ACT-R-users mailing list >>> ACT-R-users at act-r.psy.cmu.edu >>> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users >>> >>> >>> ------------------------------------------------------------------------ >>> >>> _______________________________________________ >>> ACT-R-users mailing list >>> ACT-R-users at act-r.psy.cmu.edu >>> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users >>> >> >> _______________________________________________ >> ACT-R-users mailing list >> ACT-R-users at act-r.psy.cmu.edu >> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users >> >> >> >> >> > From mc.isv at cbs.dk Wed Aug 25 12:02:28 2010 From: mc.isv at cbs.dk (Michael Carl) Date: Wed, 25 Aug 2010 18:02:28 +0200 Subject: [ACT-R-users] Model of writing In-Reply-To: References: <4C73FECB.9080100@cs.cmu.edu> <4C744058.80902@cs.cmu.edu> Message-ID: Thanks, Dan! yes, you are right one would have to take into account the hands-to-home overhead so that the difference of the data and the ACT-R model becomes only 75/180ms to 200ms (for the "rt" example), which sounds less serious. Nevertheless, I have also tried to use different fingers in the lisp code which would type the same sequence, but the 200ms seems to be hard wired. Does that mean that currently in ACT-R a one-finger typist has the same performance as a two-finger typist or a 10-finger typist? I just read on page 293 of the ACT-R manual (section Isa press-key) that you describe a theory and a learning model for typing, maybe this might resolve those problems? Michael 2010/8/25 : > > My message was really only intended as a suggestion to Bonnie as a way > that they may be able to speed up the typing extension which they've > implemented for CogTool given the assumptions that she listed. I don't > think on its own the peck action would be sufficient for your work, and > I don't know enough about the typing literature to offer any advice as > to how best to model it. > > Given that you did run that model however, I want to make sure that > you are comparing the right times from the model to your data. So, > first let me make sure I understand what you're measuring. Are the > 75ms and 180ms times for typing "rt" that you list the time between > the "r" and the "t" hits or the total time from when the person is > instructed to type such a sequence? I assume that it's the first of > those, in which case the corresponding times from the two different > mechanisms in the model run would be 300ms and 200ms.
?Those are the > differences between the times of the output-key actions which indicate > when the keys are actually hit by the model. ?If you wanted to measure > the second of those (total response time until the second press) then > the times would be 750ms and 650ms because that is the time from the > first production in the sequence to the striking of the second key. > > Dan > > --On Wednesday, August 25, 2010 11:23 AM +0200 Michael Carl > wrote: > >> thanks very much for the program and the literature pointer (I will go >> through it)! I have run Dan's peck-test lisp model. The ACT-R press-key >> action in the lisp code >> produces "rt" in 900ms and the peck actions for the same string takes >> 700ms. Comparing this to our (Danish) data: there are two peaks in the >> duration distribution for "rt": one around 75ms and another one around >> 180ms (there are more instances at the second peak), while for instance >> the most frequent bigram "er" (on the keyboard just to the left of "rt", >> and more likely typed with two fingers) has most occurrences around >> 60-90ms. A frequent trigrams like "ing" (even in Danish:-) is most of the >> time produced in 230-250ms and we also have some 6-grams produced in less >> than 600ms (always for n-1 inter-keystroke intervals). >> >> In principle, if two or more fingers are involved there could be any time >> lapse between the keystrokes, zero and even negative when the keys occur >> in the reverse order e.g. frequent typo hte (but maybe you would model >> that differently). If you splash both hands on the keyboard then almost >> all keys could be produced simultaneously - all of which shows that >> typing actions go on in parallel as Susan mentions. >> >> There does not seems to be an easy way to simulate something like this in >> ACT-R, even if we have the distributions and given also that there is no >> indication which fingers were used for a particular typing event in our >> data. >> >> Any suggestions? >> >> Michael >> >> >> 2010/8/24 Bonnie John : >>> Susan, what you say is certainly true, but adding my own bit of history, >>> part of my thesis compared my serial approximation to the Rumelhart and >>> Norman PDP model and I matched the inter-keystroke time for digraphs >>> better than they did. My serial model had an average absolute percent >>> error of 0.9% and theirs was 5.7%. >>> >>> That part of my thesis used a model of each finger's position and a very >>> simple approximation of where all the fingers on the hand move to when >>> one finger hits a key. Then it just used Fitts Law for the horizontal >>> movement to the next key. This is at a lower level than what we are >>> currently doing in CogTool/ACT-R, but it could be done in ACT-R if we >>> programmed the fingers to know where they are in space and move >>> according to the algorithm in my thesis. Anybody want to implement that? >>> >>> Bonnie >>> >>> Reference for the serial typing algorithm: >>> John, B. E. (1996) TYPIST: A Theory of Performance In Skilled Typing. >>> Human-Computer Interaction , 11 (4), pp.321-355. >>> Figure 16 and 17. >>> The PDP model I compared to was in >>> Rumelhart, D. E. & Noman, D. A (1982) Simulating a Skilled Typist: A >>> Study of Skilled Cognitive-Motor Performance. Cognitive Science, Volume >>> 6, Issue 1, pages 1?36. >>> >>> >>> Susan Chipman wrote: >>>> I thought I might remind people that typing was one of the first >>>> behaviors modeled by the PDP folks. 
They had data showing that >>>> multiple typing actions went on in parallel -- that is, the actions of >>>> future fingers were beginning before the current action was completed. >>>> Don't know if these ACT-R models are dealing with that. >>>> >>>> Susan Chipman >>>> >>>> On Tue, Aug 24, 2010 at 3:29 PM, >>> > wrote: >>>> >>>> >>>> >>>> ? ? --On Tuesday, August 24, 2010 1:18 PM -0400 Bonnie John >>>> ? ? > wrote: >>>> >>>> ? ? ? ? We have a lower-level model of typing implemented in ACT-R tht >>>> is ? ? ? ? "under-the-hood" of CogTool. It is a mixture of my ages-old >>>> ? ? ? ? PhD thesis >>>> ? ? ? ? and what we cold do in ACT-R without changing the entire >>>> ? ? ? ? structure of >>>> ? ? ? ? its hand and fingers. So it still has remnants of ACT-R's typing >>>> ? ? ? ? assumptions, like the the hand always goes back to the >>>> ? ? ? ? home-row between >>>> ? ? ? ? each keystroke, but we have relaxed some of the other >>>> ? ? ? ? assumptions in the >>>> ? ? ? ? standard ACT-R typing model and so have sped it up to being >>>> ? ? ? ? about a 40 >>>> ? ? ? ? wpm typist instead of the 20 wpm typist it is in the general >>>> ? ? ? ? release. >>>> >>>> >>>> ? ? One note to make about that is the only assumptions about fingers >>>> ? ? returning >>>> ? ? to the home-row are with the use of the press-key and peck-recoil >>>> ? ? actions. >>>> ? ? If one programs the specific finger movements with peck and punch >>>> ? ? actions >>>> ? ? then the fingers will stay at the key that was hit. If you're not >>>> ? ? already >>>> ? ? taking advantage of that you may be able to speed up your CogTool >>>> ? ? typist >>>> ? ? even further. Of course the complication is that to do that you >>>> would ? ? also have to have something that computes the necessary >>>> geometry ? ? from the >>>> ? ? current finger position to the target key instead of just the >>>> ? ? home-row to >>>> ? ? target key geometries which are available from press-key. >>>> >>>> ? ? As a simple demonstration of that, attached is a simple model >>>> ? ? which types >>>> ? ? two keys in sequence using the same finger twice. The first time >>>> using ? ? two press-key actions and the second using explicit peck >>>> actions. The ? ? inter-key time for the second pair is less than for >>>> the first and that ? ? should be true for all valid one-finger pairs. >>>> >>>> ? ? Dan >>>> ? ? _______________________________________________ >>>> ? ? ACT-R-users mailing list >>>> ? ? ACT-R-users at act-r.psy.cmu.edu >>>> ? ? 
http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users >>>> >>>> >>>> ------------------------------------------------------------------------ >>>> >>>> _______________________________________________ >>>> ACT-R-users mailing list >>>> ACT-R-users at act-r.psy.cmu.edu >>>> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users >>>> >>> >>> _______________________________________________ >>> ACT-R-users mailing list >>> ACT-R-users at act-r.psy.cmu.edu >>> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users >>> >>> >>> >>> >>> >> > > > > > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > > > > > From db30 at andrew.cmu.edu Wed Aug 25 14:32:00 2010 From: db30 at andrew.cmu.edu (db30 at andrew.cmu.edu) Date: Wed, 25 Aug 2010 14:32:00 -0400 Subject: [ACT-R-users] Model of writing In-Reply-To: References: <4C73FECB.9080100@cs.cmu.edu> <4C744058.80902@cs.cmu.edu> Message-ID: <8BC02DA7AB0A8605C3FFF250@act-r6.psy.cmu.edu> --On Wednesday, August 25, 2010 6:02 PM +0200 Michael Carl wrote: > Thank's Dan! > yes, you are right one would have to take into account the hands-to-home > overhead so that the difference of the data and the ACT-R model becomes > only 75/180ms to 200ms (for the "rt" example), which sounds less serious. That initial time isn't hands-to-home overhead because the fingers start on the home row. It's the preparation and execution of that first keypress itself -- 450ms to hit the "r" from a cold start. In the press-key example it also takes 450ms from the start of the production which requests the "t" be hit until that key is struck. However those times are overlapping because the motor module is allowed to prepare one action while executing the previous, and in this case that resulted in an inter-key time of only 300ms. > Nevertheless, I have also tried to use different fingers in the lisp code > which would type the same sequence, but the 200ms seems to be hard wired. > Does that mean that currently in ACT-R a one-finger typist has the same > performance than a > two finger typist than a 10 finger typist? > 200ms would be the lower bound for successive peck actions with the default motor module parameters, but it's not a fixed value because the time for the motor action includes a Fitts's law calculation for the distance the finger has to move. Thus if the model were to only use one finger it would have higher execution costs because of the increased travel distance vs a model which used multiple fingers traveling shorter distances. Interestingly however, if the model were to continually peck with that one finger (instead of pointing-at and punching) the one finger model would have lower preparation costs since it'd always be the same action. I think the added execution cost would dominate that preparation savings on average, but I'll actually have to run a test now because that's an interesting question. > I just read on page 293 of the ACT-R manual (section Isa press-key) that > you describe a theory and a learning model for typing, maybe this might > resolve those problems? > If it were possible to write the model which did what's described there (essentially learn how to do press-key instead of have it as a built in construct) then that might indeed be useful to you. However, a critical piece of that hypothetical model is something which ACT-R's motor module currently isn't able to do -- learn. 
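(For readers who want a number to attach to that Fitts's law remark: the motor module's peck timing is usually described as max(burst time, coefficient * log2(distance/width + 0.5)), with a 0.05 s burst time and a 0.075 coefficient as the commonly cited defaults. The sketch below simply assumes that form and those constants; it is not the ACT-R source, and the exact values should be checked against the reference manual.)

;; Sketch of the EPIC-style Fitts's law movement time discussed above.
;; Distances are in key widths; the constants are assumed defaults,
;; not values read out of the ACT-R code.
(defun approx-peck-move-time (distance &key (width 1.0) (coeff 0.075) (burst 0.05))
  (max burst (* coeff (log (+ (/ distance width) 0.5) 2))))

;; (approx-peck-move-time 1.0) => 0.05    ; adjacent key, clipped to the burst time
;; (approx-peck-move-time 4.0) => ~0.163  ; a reach across the row
;; This is only the movement-time term; preparation and initiation costs come
;; on top of it, which is consistent with the ~200ms floor for successive
;; pecks that Dan mentions.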
Dan From db30 at andrew.cmu.edu Wed Aug 25 22:33:43 2010 From: db30 at andrew.cmu.edu (Dan Bothell) Date: Wed, 25 Aug 2010 22:33:43 -0400 Subject: [ACT-R-users] Model of writing Message-ID: <202E08D6A673F8FBEE007CED@[192.168.1.10]> To test the question about 1 fingered, 2 fingered, and 10 fingered typists in ACT-R I created some test models (if you could even call them that because they're mostly just Lisp code) which just push motor requests through to type out sentences repeatedly for 60 seconds to get a words/minute score (where a word is every 5 keypresses). Those models were then tested across the three possibilities for pipelining of motor actions: "state free", "processor free", and "preparation free". There were 5 total models: One-finger is a good "hunt and peck" typist using only one finger. Two-fingers is a good "hunt and peck" typist using both index fingers keeping each hand on its own side of the keyboard. Ten-fingers is a model which uses the default press-key action to touch-type using all fingers. One-finger-savant is a perfect touch-typist using only one index finger i.e. it can move that finger from any key to hit any other key perfectly as a single action, without looking. Two-finger-savant is a perfect touch-typist using both index fingers where each finger stays on its own side of the keyboard. Here's the average WPM I got based on 3 simple sentences which each have all the letters of the alphabet at least once:

                    state free   processor free   preparation free
one-finger             13.0           19.1               X
two-fingers            13.8           20.7               X
ten-fingers            25.3           40.9              47.5
one-finger-savant      30.5           44.6               X
two-finger-savant      28.3           44.1               X

The code is attached if anyone wants to look at the individual sentence results (the function run-all-tests will run the models through all the conditions), but I wouldn't recommend it as a guide for how to write an ACT-R model. :) Here are the things which I found interesting.
- The fastest overall was the ten fingered model in the "preparation free" case at 47.5 wpm, which is faster than I expected.
- Testing "preparation free" actually led to typing errors for the one- and two-fingered models since it was modifying the features before the last action had begun (the finger was trying to do two things at once). So, those models are skipped for that condition.
- In the other cases the ten fingered model beats the "hunt and peck" models as expected, but the "savant" models were faster than the ten fingered one. So the savings in preparation time is better than the cost of the extra movement relative to the press-key actions with the default motor module parameters. However, from a plausibility standpoint what those savant models do seems pretty super human to me.
Dan -------------- next part -------------- A non-text attachment was scrubbed... Name: wpm-tests.lisp Type: text/x-lisp-source Size: 11043 bytes Desc: not available URL: From dfm2 at cmu.edu Thu Aug 26 09:11:28 2010 From: dfm2 at cmu.edu (Don Morrison) Date: Thu, 26 Aug 2010 09:11:28 -0400 Subject: [ACT-R-users] Model of writing In-Reply-To: <4C73FECB.9080100@cs.cmu.edu> References: <4C73FECB.9080100@cs.cmu.edu> Message-ID: On Tue, Aug 24, 2010 at 1:18 PM, Bonnie John wrote: > Don (Morrison), can you please respond with an explanation of how CogTool > produces ACT-R code that types as fast as it does and how that ACT-R code > does what it does?
First off, for those that may not be familiar with CogTool's use of ACT-R: For ordinary CogTool use, where we're modeling a skilled user's behavior, we use ACT-R in an unnaturally constrained fashion, where we're essentially telling it what to do at every step of way. We use a step counter to force a path through the various productions. Our Java code, by examining the interface we're interacting with and our path through it emits a lot of productions, though at any point in time during ACT-R's execution of these productions only one or two of them can ever possibly match the current goal. We also peg the random number generator seed at the beginning of a run to ensure reproducible behavior. For typing ordinary, running text we emit one production per character, plus one additional production per word of length longer than one. Because of the step counter, all of these productions must fire in order. Within the Java code emitting the productions we keep track of which hand is being used for each key, and structure the relevant productions slightly differently depending upon hand use. We also slightly augment the trace information that ACT-R writes to display the hand being used. The productions corresponding to characters perform a press-key on the appropriate key. They wait on motor preparation being free. In addition, if it is the start of a new word, or the previous key was pressed with the same hand, we also wait on motor execution being free. However, if we are within a single word, and using opposite hands, we allow the executions to overlap. When starting a new word we also emit another production, to account for the behavior Bonnie's research on typing indicated happens. This production essentially does nothing but advance the step counter and consume 50ms, while waiting on both motor preparation and execution to be free. -- Don Morrison "After all these years I have observed that beauty, like happiness, is frequent. A day does not pass when we are not, for an instant, in paradise." -- Jorge Luis Borges, _Los Conjurados_, tr Willis Barnstone From Jerry.Ball at mesa.afmc.af.mil Thu Aug 26 12:59:48 2010 From: Jerry.Ball at mesa.afmc.af.mil (Ball, Jerry T Civ USAF AFMC 711 HPW/RHAC) Date: Thu, 26 Aug 2010 12:59:48 -0400 Subject: [ACT-R-users] Language "Module" in ACT-R Message-ID: Fellow ACT-R modelers, A 2009 Cog Sci Conference paper referred to the language comprehension model we are developing in ACT-R as a language "module". This is not a view I have adopted, preferring to view our model as (primarily) using ACT-R general mechanisms. However, it is true that the model has a collection of specialized buffers to store the partial products of language comprehension. These buffers are motivated on functional grounds - they are needed to process language and generate representations. We do not propose a mapping to brain regions. We introduced the specialized buffers because we have not found a way of using ACT-R's DM retrieval mechanism and the single retrieval buffer which holds a single chunk, to support retention of the partial products that are needed for (large-scale) language comprehension. It may be that the existence of these specialized buffers, combined with the productions which reference them can be viewed as constituting a module in ACT-R. However, unlike other modules, this module contains lots of production-based grammatical knowledge that must be learned (although we have engineered them in). 
If, in addition, it is possible to learn how to buffer declarative knowledge via creation of specialized buffers, then there may be a cognitive mechanism for learning new "modules". That is, the brain can be specialized to process specific kinds of information that it is not hardwired to process. Under this view, a language "module" is not innate, but it can be learned. Of course, our model does not actually learn how to buffer specialized linguistic information (or learn productions for that matter), we have engineered in the new buffers (and productions). But if ACT-R had such a mechanism, then it might be possible for an ACT-R model to learn to be a "module". Jerry Jerry T. Ball, PhD Senior Research Psychologist Human Effectiveness Directorate 711th Human Performance Wing Air Force Research Laboratory 6030 S. Kent Street, Mesa, AZ 85212 PH: 480-988-6561 ext 678; DSN 474-6678 Jerry.Ball at mesa.afmc.af.mil -------------- next part -------------- An HTML attachment was scrubbed... URL: From pward at mtu.edu Fri Aug 27 15:16:53 2010 From: pward at mtu.edu (Paul Ward) Date: Fri, 27 Aug 2010 15:16:53 -0400 Subject: [ACT-R-users] Faculty Position Available - Assistant Professor Message-ID: Dear all, I would be very grateful if you could forward this faculty position advertisement to anyone who might be interested and/or post on your website/listserv. Many thanks in advance. Apologies for any dual postings. Very best regards, Paul Ward 2010 Faculty Search Advertisement The Department of Cognitive and Learning Sciences at Michigan Technological University seeks applicants for a tenure-track, Assistant Professor of Psychology to begin Fall 2011. The position supports our new Ph.D. program in Applied Cognitive Science and Human Factors. All areas of specialization considered, but candidates in human factors, applied experimental psychology, and/or advanced quantitative methods/statistics are of particular interest. Ph.D. in psychology or related discipline is required. Post-doctoral experience preferred. Current program strengths are in basic and applied psychology, human factors, and cognitive science, with an emphasis on research in expertise, cognitive engineering, judgment and decision-making. The ideal candidate will contribute to both basic and applied research, should attract external funding, and pursue interdisciplinary research collaborations with MTU faculty in psychology and affiliated programs. Typical teaching load is 2 (undergraduate and graduate) courses per semester. Michigan Tech, with 22 Ph.D. and 34 master?s programs, is a public mid-sized institution classified as a Research University with high research activity (RU/H). Michigan Tech is ranked in the top tier of national universities according to U.S. News & World Report?s ?America?s Best Colleges 2011? and received ?Best in the Midwest? honors in Princeton Review?s The Best 373 Colleges, 2011 Edition. Michigan Tech is located in the heart of Michigan?s Upper Peninsula and is rated as one of the Top 10 summer travel destinations, as well as one of the Top 10 outdoor adventure spots in the country for our bike trails, Olympic-caliber cross country ski trails, Lake Superior shoreline, and numerous inland lakes. Review of applications will begin November 1st. 
Candidates must send an electronic AND physical copy of their application materials, including a letter of application summarizing research and teaching goals, re(pre)prints, curriculum vita, and 3 letters of recommendation to Psychology Search Committee, 310 Chem Sci Bldg, 1400 Townsend Dr., Houghton, MI 49931-1295. Michigan Technological University is an equal opportunity educational institution/equal opportunity employer. -- Paul Ward, Ph.D. Associate Professor of Psychology, Graduate Program Director Department of Cognitive and Learning Sciences Michigan Technological University 1400 Townsend Dr., Houghton, MI 49931-1295. 906-487-2065 (ph), 906-487-1094 (f), pward at mtu.edu ACE lab web page: http://web.me.com/pw70/ACE_lab -------------- next part -------------- An HTML attachment was scrubbed... URL: From preber at northwestern.edu Mon Aug 30 11:38:29 2010 From: preber at northwestern.edu (Paul J. Reber) Date: Mon, 30 Aug 2010 10:38:29 -0500 Subject: [ACT-R-users] Model of writing In-Reply-To: <202E08D6A673F8FBEE007CED@[192.168.1.10]> References: <202E08D6A673F8FBEE007CED@[192.168.1.10]> Message-ID: <4C7BD075.3030306@northwestern.edu> This might be just slightly off the general writing/typing topic, but has anybody played around with an ACT-R model of something like playing Guitar Hero? We're using a task something like this in the lab (without music) to look at sequence learning and thinking about the general process of skill acquisition (in perceptual-motor tasks). The relation to typing would be why you might be quicker to type familiar words/phrases due to prior practice frequently typing them. Paul -- Paul J. Reber, Ph.D. Department of Psychology Northwestern University Dan Bothell wrote: > > To test the question about 1 fingered, 2 fingered, and 10 fingered > typists in ACT-R I created some test models (if you could even call > them that because they're mostly just Lisp code) which just push motor > requests through to type out sentences repeatedly for 60 seconds to > get a words/minute score (where a word is every 5 keypresses). Those > models were then tested across the three possibilities for pipelining > of motor actions: "state free", "processor free", and "preparation free". > > There were 5 total models: > > One-finger is a good "hunt and peck" typist using only one finger. > > Two-fingers is a good "hunt and peck" typist using both index fingers > keeping each hand on its own side of the keyboard. > > Ten-fingers is a model which uses the default press-key action to > touch-type using all fingers. > > One-finger-savant is a perfect touch-typist using only one index finger > i.e. it can move that finger from any key to hit any other key > perfectly as a single action, without looking. > > Two-finger-savant is a perfect touch-typist using both index fingers > where each finger stays on its own side of the keyboard. > > Here's the average WPM I got based on 3 simple sentences which > each have all the letters of the alphabet at least once: > > state free processor free preparation free > one-finger 13.0 19.1 X > two-fingers 13.8 20.7 X > ten-fingers 25.3 40.9 47.5 > one-finger-savant 30.5 44.6 X > two-finger-savant 28.3 44.1 X > > The code is attached if anyone wants to look at the individual > sentence results (the function run-all-tests will run the models > through all the conditions), but I wouldn't recommend it as a > guide for how to write an ACT-R model. :) > > Here are the things which I found interesting. 
> > - The fastest overall was the ten fingered model in the "preparation > free" case at 47.5 wpm, which is faster than I expected. > > - Testing "preparation free" actually lead to typing errors for the one- > and two-fingered models since it was modifying the features before the > last action had begun (the finger was trying to do two things at once). > So, those models are skipped for that condition. > > - In the other cases the ten fingered model beats the "hunt and peck" > models as expected, but the "savant" models were faster than the > ten fingered one. So the savings in preparation time is better than > the cost of the extra movement relative to the press-key actions > with the default motor module parameters. However, from a plausibility > standpoint what those savant models do seems pretty super human to me. > > > Dan > > > ------------------------------------------------------------------------ > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users From hedderik at van-rijn.org Tue Aug 31 06:11:09 2010 From: hedderik at van-rijn.org (Hedderik van Rijn) Date: Tue, 31 Aug 2010 12:11:09 +0200 Subject: [ACT-R-users] Three (Fully Funded) PhD positions in Groningen Message-ID: <09D71911-0276-4B37-BEF5-FEE12A5A8AE8@rug.nl> Dear all, we are looking for applicants for three fully funded PhD positions related to computational cognitive modeling: 210273 PhD position: Keeping irrelevant information out of mind http://www.academictransfer.com/employer/RUG/vacancy/6023/lang/en/ 210247 Ph.D. position: Modeling the evolution of theory of mind http://www.academictransfer.com/employer/RUG/vacancy/5427/lang/en/ 210248 Ph.D. position: A cognitive system supporting intelligent interaction http://www.academictransfer.com/employer/RUG/vacancy/5428/lang/en/ As the direction of the projects will depend on the interests of the applicants, each of this projects can have a strong formal modeling component. If you know good candidates (or if you are one yourself), please bring the relevant advertisement(s) to their attention, and if you have any (informal) questions about these projects, I would be happy to answer them. - Hedderik. --- http://www.van-rijn.org From wfu at illinois.edu Tue Aug 31 10:26:34 2010 From: wfu at illinois.edu (Wai-Tat Fu) Date: Tue, 31 Aug 2010 09:26:34 -0500 Subject: [ACT-R-users] Model of writing In-Reply-To: <4C7BD075.3030306@northwestern.edu> References: <202E08D6A673F8FBEE007CED@[192.168.1.10]> <4C7BD075.3030306@northwestern.edu> Message-ID: <9E19458E-086C-4D7F-9B1F-2A26BFD921F3@illinois.edu> Hi, the first paper is a model of skill learning that shows the effects of practice, and second one is a reinforcement learning model on sequence learning. Hope they are useful. Fu, W.-T., Gonzalez, C, Healy, A., Kole, J., Bourne, L. (2006), Building predictive human performance models of skill acquisition in a data entry task. Proceedings of the 50th Annual Meeting of the Human Factors and Ergonomics Society (pp. 1122-1126). Santa Monica, CA: Human Factors and Ergonomics Society. Fu, W.-T., Anderson, J. R. (2006), From recurrent choice to skilled learning: A reinforcement learning model. Journal of Experimental Psychology: General, 135(2), 184-206. On Aug 30, 2010, at 10:38 AM, Paul J. Reber wrote: > This might be just slightly off the general writing/typing topic, but > has anybody played around with an ACT-R model of something like > playing > Guitar Hero? 
We're using a task something like this in the lab > (without > music) to look at sequence learning and thinking about the general > process of skill acquisition (in perceptual-motor tasks). > > The relation to typing would be why you might be quicker to type > familiar words/phrases due to prior practice frequently typing them. > > Paul > -- > Paul J. Reber, Ph.D. > Department of Psychology > Northwestern University > > Dan Bothell wrote: >> >> To test the question about 1 fingered, 2 fingered, and 10 fingered >> typists in ACT-R I created some test models (if you could even call >> them that because they're mostly just Lisp code) which just push >> motor >> requests through to type out sentences repeatedly for 60 seconds to >> get a words/minute score (where a word is every 5 keypresses). Those >> models were then tested across the three possibilities for pipelining >> of motor actions: "state free", "processor free", and "preparation >> free". >> >> There were 5 total models: >> >> One-finger is a good "hunt and peck" typist using only one finger. >> >> Two-fingers is a good "hunt and peck" typist using both index fingers >> keeping each hand on its own side of the keyboard. >> >> Ten-fingers is a model which uses the default press-key action to >> touch-type using all fingers. >> >> One-finger-savant is a perfect touch-typist using only one index >> finger >> i.e. it can move that finger from any key to hit any other key >> perfectly as a single action, without looking. >> >> Two-finger-savant is a perfect touch-typist using both index fingers >> where each finger stays on its own side of the keyboard. >> >> Here's the average WPM I got based on 3 simple sentences which >> each have all the letters of the alphabet at least once: >> >> state free processor free preparation free >> one-finger 13.0 19.1 X >> two-fingers 13.8 20.7 X >> ten-fingers 25.3 40.9 47.5 >> one-finger-savant 30.5 44.6 X >> two-finger-savant 28.3 44.1 X >> >> The code is attached if anyone wants to look at the individual >> sentence results (the function run-all-tests will run the models >> through all the conditions), but I wouldn't recommend it as a >> guide for how to write an ACT-R model. :) >> >> Here are the things which I found interesting. >> >> - The fastest overall was the ten fingered model in the "preparation >> free" case at 47.5 wpm, which is faster than I expected. >> >> - Testing "preparation free" actually lead to typing errors for the >> one- >> and two-fingered models since it was modifying the features before >> the >> last action had begun (the finger was trying to do two things at >> once). >> So, those models are skipped for that condition. >> >> - In the other cases the ten fingered model beats the "hunt and peck" >> models as expected, but the "savant" models were faster than the >> ten fingered one. So the savings in preparation time is better than >> the cost of the extra movement relative to the press-key actions >> with the default motor module parameters. However, from a >> plausibility >> standpoint what those savant models do seems pretty super human to >> me. 
>> >> >> Dan >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> ACT-R-users mailing list >> ACT-R-users at act-r.psy.cmu.edu >> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > > > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.samms at ieee.org Tue Aug 31 10:23:01 2010 From: richard.samms at ieee.org (Richard Samms) Date: Tue, 31 Aug 2010 10:23:01 -0400 Subject: [ACT-R-users] FYI Message-ID: <169A4A188BA6439494075572F554085B@Weierstrass> https://www.fbo.gov/index?s=opportunity&mode=form&tab=core&id=6a0e5742e5e53fb6c2d6f5ef0b75d2f2&_cview=0 See tech areas 2&3 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: baa-pkd-08-0006-call4.doc Type: application/msword Size: 205824 bytes Desc: not available URL: