From j.a.grange at psy.keele.ac.uk Wed Dec 1 11:59:06 2010
From: j.a.grange at psy.keele.ac.uk
Date: Wed, 1 Dec 2010 16:59:06 -0000 (UTC)
Subject: [ACT-R-users] ACT-R & E-Prime

Dear community,

Is there any way to get ACT-R models to interact with windows other than those generated by a Lisp-coded experiment? For example, is there any way to program an experiment in other software (e.g., E-Prime) and have the model interact with this?

As a novice modeler, my greatest problem is not generating the model side of the coding, but rather generating, from the bottom up in Lisp, an experiment with which the model can interact and generate data.

Any advice gratefully received,

Many thanks,

Jim Grange
Keele University
UK

From stu at agstechnet.com Wed Dec 1 12:21:33 2010
From: stu at agstechnet.com (Stu @ AGS TechNet)
Date: Wed, 01 Dec 2010 12:21:33 -0500
Subject: [ACT-R-users] ACT-R & E-Prime
Message-ID: <4CF6841D.10204@agstechnet.com>

You can communicate between models using sockets. You may want to find someone experienced with sockets to help out.

Stu

On 12/1/2010 11:59 AM, j.a.grange at psy.keele.ac.uk wrote:
> Is there any way to get ACT-R models to interact with windows other than
> those generated by a Lisp-coded experiment? For example, is there any way
> to program an experiment in other software (e.g., E-Prime) and have the
> model interact with this? [...]
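[Editor's note] Stu's socket suggestion can be made concrete. The sketch below (Python for brevity; the same pattern applies from Lisp or from E-Prime's scripting side) shows the general shape of such a bridge: the model side listens on a TCP socket, the experiment side connects and sends a description of the display, and the model side answers with a key press. The "SHOW"/"KEY" line protocol and the port number are invented for this example; they are not part of ACT-R or E-Prime.

```python
# Minimal sketch of a socket bridge between a model process and an
# externally coded experiment. The protocol here is hypothetical:
# the experiment sends "SHOW <stimulus>", the model replies "KEY <key>".
import socket
import threading

def run_bridge(host="127.0.0.1", port=5555):
    """Model-side server: accept one connection, read one stimulus,
    reply with one (stubbed) model response."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(1)
    conn, _ = server.accept()
    with conn:
        # Read one display description from the experiment side...
        stimulus = conn.recv(1024).decode().strip()
        # ...hand it to the model (stubbed here as a fixed key press)...
        response = "KEY f" if stimulus.startswith("SHOW") else "KEY none"
        # ...and send the model's response back to the experiment.
        conn.sendall((response + "\n").encode())
    server.close()

if __name__ == "__main__":
    # Experiment-side client, playing the role E-Prime would play.
    import time
    t = threading.Thread(target=run_bridge, daemon=True)
    t.start()
    time.sleep(0.2)  # give the server a moment to bind
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", 5555))
    client.sendall(b"SHOW target-letter\n")
    print(client.recv(1024).decode().strip())  # prints "KEY f"
    client.close()
    t.join()
```

In a real setup the Lisp side would hold the socket open for the whole session and translate incoming display descriptions into the model's visual representation, which is exactly where Dan's "device" concept (below) comes in.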
> _______________________________________________
> ACT-R-users mailing list
> ACT-R-users at act-r.psy.cmu.edu
> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users

From db30 at andrew.cmu.edu Wed Dec 1 14:58:21 2010
From: db30 at andrew.cmu.edu
Date: Wed, 01 Dec 2010 14:58:21 -0500
Subject: [ACT-R-users] ACT-R & E-Prime
Message-ID: <69D16C803505BCA7648EE41D@act-r6.cmu.edu>

--On Wednesday, December 01, 2010 4:59 PM +0000 j.a.grange at psy.keele.ac.uk wrote:

> Is there any way to get ACT-R models to interact with windows other than
> those generated by a Lisp-coded experiment? [...]

Yes, it is possible to have a model interact with something other than the experiment tools provided. However, there are no alternative interfaces built into the ACT-R software, so one would have to create such an interface for the model as needed.

At a low level, the perceptual and motor modules of ACT-R interact with the world through what is called a device. The device is responsible for providing the visual percepts to the model and handling the motor and speech output which the model generates. Thus, creating an interface to a different environment requires creating a device for the model. There is a set of slides titled "extending-actr" in the docs directory of the ACT-R 6.0 distribution which describes what is necessary to create a new device, and there are examples which go along with those slides in the examples directory of the distribution.

If that alternate world is also written in Lisp (as is the case for the example devices), then writing the device is essentially all that's necessary.
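[Editor's note] Dan's notion of a device can be caricatured as a small interface: one method that describes the current display to the model, and one that receives the model's motor output. The Python sketch below is a hypothetical stand-in for illustration only, not the actual ACT-R 6.0 device protocol (for the real method names, see the "extending-actr" slides he mentions):

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Hypothetical stand-in for ACT-R's device concept: the single
    point of contact between the model and its task environment."""

    @abstractmethod
    def visual_features(self):
        """Return a list of feature dicts describing what is visible."""

    @abstractmethod
    def handle_key(self, key):
        """Receive a key press generated by the model's motor module."""

class StroopWindow(Device):
    """A toy environment: one colored word on screen."""

    def __init__(self, word, color):
        self.word, self.color = word, color
        self.responses = []

    def visual_features(self):
        return [{"kind": "text", "value": self.word, "color": self.color}]

    def handle_key(self, key):
        self.responses.append(key)

# A simulation loop would hand visual_features() to the vision module
# and route the model's motor output back through handle_key().
win = StroopWindow("RED", "blue")
features = win.visual_features()
win.handle_key("b")  # model responds to the ink color
print(features[0]["color"], win.responses)  # prints: blue ['b']
```

Swapping in a different environment (a Lisp window, a web page, or a socket connection to E-Prime) then means writing a new implementation of the same two methods; the model side is untouched.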
However, if the external environment is outside of Lisp, then there are other software issues involved which are beyond the scope of the ACT-R software. In particular, how to interface the device for the model to that external software depends on what sort of interface that external software provides and whether or not the Lisp being used has the necessary tools for connecting to and working with such an interface. Those sorts of things are going to require consulting other appropriate sources.

Although there aren't any alternate interfaces included with the ACT-R software, I have seen announcements by people with claims about general interfaces for ACT-R over the years. Here are a few which I remember [I apologize to anyone who may have written something which I missed]:

SegMan by Rob St. Amant
ACT-CV by Marc Halbrügge
An ACT-R Java interface by Philippe Böttner

I have not tried any of them, so I can't say anything about their usefulness or applicability to your current needs.

I hope that information helps, and if you have any more questions about ACT-R, feel free to ask.

Dan

From bej at cs.cmu.edu Wed Dec 1 15:06:41 2010
From: bej at cs.cmu.edu (Bonnie John)
Date: Wed, 01 Dec 2010 15:06:41 -0500
Subject: [ACT-R-users] ACT-R & E-Prime
Message-ID: <4CF6AAD1.50204@cs.cmu.edu>

If you have a really simple "world" that you want to interact with, one that can be expressed in a storyboard and only changes when ACT-R interacts with it (e.g., a web site that only changes pages when a user clicks on a link), you can mock it up in CogTool. CogTool can then create an ACT-R device model for you. Getting this device model out and interacting with it is a little tricky, but we use it to interact with web pages all the time (just links, though; no fancy dynamic web pages).

www.cogtool.org

If you want to know more, let me know.
Bonnie

On 12/1/10 2:58 PM, db30 at andrew.cmu.edu wrote:
> Yes, it is possible to have a model interact with something other than
> the experiment tools provided. [...]
From Hongbin.Wang at uth.tmc.edu Wed Dec 1 15:44:42 2010
From: Hongbin.Wang at uth.tmc.edu (Hongbin Wang)
Date: Wed, 1 Dec 2010 14:44:42 -0600
Subject: [ACT-R-users] ACT-R & E-Prime
Message-ID: <6C789F48-D429-4A00-AA66-E758032ABA46@uth.tmc.edu>

We have been developing a package in E-Prime which allows E-Prime to communicate (input/output) in real time with outside systems (e.g., an eye-tracker or EEG) via sockets. Conceptually, it should be possible to use a similar mechanism to include ACT-R in the loop via a socket/device.

Hongbin

On Dec 1, 2010, at 1:58 PM, db30 at andrew.cmu.edu wrote:
> Yes, it is possible to have a model interact with something other than
> the experiment tools provided. [...]
From holtzermann17 at gmail.com Wed Dec 8 14:23:26 2010
From: holtzermann17 at gmail.com (Joe Corneli)
Date: Wed, 8 Dec 2010 19:23:26 +0000
Subject: [ACT-R-users] act-r and social data for a problem-solving application

Hello:

I am in the planning phase for a Ph.D. project that may involve applying ACT-R to problem solving in a social context (PlanetMath.org). At present I could use your help with two different kinds of leads: (1) information about ACT-R and social data in general; (2) code and content in which ACT-R is applied to mathematical problem solving. I'll say a bit more about what I'm looking for below.

1: I've found a couple of references that relate ACT-R to "social data":

Lebiere, C. (2002). Modeling group decision making in the ACT-R cognitive architecture. In Proceedings of the 2002 Computational Social and Organizational Science (CASOS) Conference, June 21-23, Pittsburgh, PA.

Matessa, M. (2000). Simulating Adaptive Communication. Doctoral thesis, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA. http://www.matessa.org/~mike/matessa-thesis.pdf

For my project, I hope to model a very wide range of actions that are related to problem solving, e.g.: sharing (uploading new exercises or solutions), connecting (adding links to encyclopedia content or to other problems), and discussing (talking about a problem or solution, asking for help with a class of problems), as well as more "classic" ITS-related activities. Can you suggest any additional references that apply ACT-R (or related systems) to model complexes of activities like these?
2: There is quite a bit of literature about ACT-R being applied to mathematical problem solving, but are any of the corresponding modules available in open-source form that I can get my hands dirty with?

Regards,

Joe Corneli

From mmatessa at alionscience.com Wed Dec 8 14:47:28 2010
From: mmatessa at alionscience.com (Michael Matessa)
Date: Wed, 8 Dec 2010 14:47:28 -0500
Subject: [ACT-R-users] act-r and social data for a problem-solving application
Message-ID: <7E82E10EFB52874DA956FF1ADD73766B21ECB5@email3a.alionscience.com>

Hi Joe,

For models of discussion, Jerry Ball (http://act-r.psy.cmu.edu/people/index.php?id=184) is doing great work on language comprehension and generation.

-Mike

From: Joe Corneli [mailto:holtzermann17 at gmail.com]
Sent: Wed 12/8/2010 11:23 AM
To: act-r-users at act-r.psy.cmu.edu
Subject: act-r and social data for a problem-solving application
> I am in the planning phase for a Ph.D. project that may involve applying
> ACT-R to problem solving in a social context (PlanetMath.org). [...]
From sasmekoll at gmail.com Sun Dec 12 21:24:31 2010
From: sasmekoll at gmail.com (Vovan)
Date: Mon, 13 Dec 2010 05:24:31 +0300
Subject: [ACT-R-users] New message
Message-ID: <4d064f76.c406cc0a.2372.1979@mx.google.com>

http://samec.org.ua/
From jerryv at andrew.cmu.edu Mon Dec 13 12:02:58 2010
From: jerryv at andrew.cmu.edu (Jerry Vinokurov)
Date: Mon, 13 Dec 2010 12:02:58 -0500
Subject: Re: [ACT-R-users] New message
Message-ID: <4D0651C2.2090102@andrew.cmu.edu>

On 12/12/10 9:24 PM, Vovan wrote:
> http://samec.org.ua/

We appear to be getting spammed :(

Jerry

From Tiffany.Jastrzembski at wpafb.af.mil Mon Dec 13 13:13:49 2010
From: Tiffany.Jastrzembski at wpafb.af.mil (Jastrzembski, Tiffany S Civ USAF AFMC 711 HPW/RHAC)
Date: Mon, 13 Dec 2010 13:13:49 -0500
Subject: [ACT-R-users] BRIMS 2011 Submission Deadline Approaching!
Message-ID: <9AC197D8D0788140BC98A478FB3852A85E954A@VFOHMLMC11.Enterprise.afmc.ds.af.mil>

(Best viewed in HTML; apologies for cross-postings)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The BRIMS 2011 submission deadline is December 21, 2010 - just 8 days away!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For details on submission content and guidelines, please navigate to:
http://brimsconference.org/wp-content/uploads/2008/09/BRIMS_2011_Call_for_Papers.pdf

For submission templates, please navigate to:
http://brimsconference.org/wp-content/uploads/2008/10/Authors-Template-for-BRIMS-Submissions.doc

You are invited to participate in the 20th Conference on Behavior Representation in Modeling and Simulation (BRIMS), to be held at the Sundance Resort in Sundance, UT. BRIMS enables modeling and simulation research scientists, engineers, and technical communities across disciplines to meet, share ideas, identify capability gaps, discuss cutting-edge research directions, highlight promising technologies, and showcase the state of the art in Department of Defense related applications.

The BRIMS Conference will consist of many exciting elements in 2011, including special topic areas, technical paper sessions, special symposia/panel discussions, and government laboratory sponsor sessions. Highlights of BRIMS 2011 include a fantastic and eclectic lineup of keynote speakers spanning cognitive modeling, sociocultural modeling, and network science:

John Laird, PhD, University of Michigan, http://ai.eecs.umich.edu/people/laird/
Lael Schooler, PhD, Max Planck Institute, http://ntfm.mpib-berlin.mpg.de/mpib/FMPro
Kathleen Carley, PhD, Carnegie Mellon University, http://www.casos.cs.cmu.edu/bios/carley/carley.html
Chris Barrett, PhD, Virginia Tech, http://ndssl.vbi.vt.edu/people/cbarrett.html

The BRIMS Executive Committee invites papers, posters, demos, symposia, panel discussions, and tutorials on topics related to the representation of individuals, groups, teams and organizations in models and simulations. All submissions are peer-reviewed (see www.brimsconference.org for additional details on submission types).
Key Dates:
All submissions due: December 21, 2010
Tutorial acceptance: January 31, 2011
Author notification: January 31, 2011
Final version due: February 18, 2011
Tutorials held: March 21, 2011
BRIMS 2011 opens: March 22, 2011

Special Topic Areas of Interest are identified to elicit specific technical content:
* M&S in network science
* Statistical/graphical approaches to M&S
* M&S for asymmetric warfare and joint force applications
* Cognitive or behavioral performance moderators in M&S
* Integration and reuse of models
* Large-scale, persistent, and generative modeling issues

General Topic Areas of Interest include, but are not limited to:

Modeling
* Intelligent agents and avatars/adversarial modeling
* Cognitive robots and human-robot interaction
* Models of reasoning and decision making
* Model validation & comparison
* Socio-cultural M&S: team/group/crowd behavior
* Physical models of human movement
* Performance assessment and skill monitoring/tracking
* Performance prediction/enhancement/optimization
* Intelligent tutoring systems
* Knowledge acquisition/engineering
* Human behavior issues in model federations

Simulation
* Synthetic environments for human behavior representation
* Terrain representation and reasoning
* Spatial reasoning
* Time representation
* Human behavior usability and interoperability
* Efficiency, usability, affordability issues
* Operator interfaces
* Multi-resolution/fidelity simulations
* Science of simulation issues

ACCOMMODATIONS AND REGISTRATION

The conference will be held at the Sundance Resort in Sundance, UT. Visit www.sundanceresort.com for general information about the site and accommodations. Conference and hotel registration, general area, and travel information can be found at www.brimsconference.org.

BRIMS PROGRAM COMMITTEE:
Bradley J. Best (Adaptive Cognitive Systems), William G. Kennedy (George Mason University), Frank E. Ritter (Pennsylvania State University)

BRIMS EXECUTIVE COMMITTEE:
Joe Armstrong (CAE), Brad Cain (Defence Research and Development Canada), Bruno Emond (National Research Council Canada), Coty Gonzalez (Carnegie Mellon University), Brian Gore (NASA), Jeff Hansberger (Army Research Laboratory), Kenneth Kwok (DSO National Laboratories, Singapore), John Laird (University of Michigan), Christian Lebiere (Carnegie Mellon University), Christopher Myers (Air Force Research Laboratory), Bharat Patel (Defence Science and Technology Laboratory, UK), Sylvain Pronovost (Carleton University & CAE), Venkat Sastry (University of Cranfield), Barry Silverman (University of Pennsylvania), Neil Smith (QinetiQ), LtCol David Sonntag (AOARD), Webb Stacy (Aptima), Mike van Lent (SoarTech), Walter Warwick (Alion Science and Technology), Jason Wong (Naval Undersea Warfare Center), Patrick Xavier (Sandia National Laboratories)

A special thanks to the BRIMS 2011 Government Sponsors for their support of this event: Air Force Research Laboratory, Army Research Laboratory, DARPA, Office of Naval Research, Natick Soldier Center, NASA, and the UK Ministry of Defence.

If you have any questions or concerns, please contact the BRIMS 2011 Conference Chair, Dr. Tiffany Jastrzembski (tiffany.jastrzembski at wpafb.af.mil).

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tiffany S. Jastrzembski, Ph.D.
Cognitive Research Scientist
Air Force Research Laboratory
2698 G Street, Building 190
Wright-Patterson AFB, OH 45433-7604
Phone: (937) 255-2085
tiffany.jastrzembski at wpafb.af.mil
From reitter at cmu.edu Fri Dec 17 09:27:32 2010
From: reitter at cmu.edu (David Reitter)
Date: Fri, 17 Dec 2010 09:27:32 -0500
Subject: [ACT-R-users] Call for Papers: Cognitive Modeling and Computational Linguistics
Message-ID: <88C51F30-524D-40A9-8A5E-305400630E73@cmu.edu>

Cognitive Modeling and Computational Linguistics (CMCL)
and TopiCS special issue "Models of Language Comprehension"

A workshop to be held June 23, 2011 at the Association for Computational Linguistics meeting in Portland, Oregon
http://www.psy.cmu.edu/~cmcl/

CALL FOR PAPERS

Workshop Description

This workshop provides a venue for work in computational psycholinguistics. ACL Lifetime Achievement Award recipient Martin Kay described this topic as "build[ing] models of language that reflect in some interesting way, on the ways in which people use language." The 2011 workshop follows in the tradition of several previous meetings, (1) the computational psycholinguistics meeting at CogSci in Berkeley in 1997, (2) the Incremental Parsing workshop at ACL 2004, and (3) the first CMCL workshop at ACL 2010, in inviting contributions that apply methods from computational linguistics to problems in the cognitive modeling of any and all natural language abilities.

Scope and Topics

The workshop invites a broad spectrum of work in the cognitive science of language, at all levels of analysis from sounds to discourse.
Topics include, but are not limited to:

* incremental parsers for diverse grammar formalisms; models of comprehension difficulty derived from such parsers
* models of factors favoring particular productions or interpretations over their competitors
* models of semantic interpretation, including psychologically realistic notions of word and phrase meaning
* models of human language acquisition, including the prediction of generalizations and time course in acquisition
* applications of cognitive models of language, e.g., in tutoring systems, human evaluation, clinical and cognitive neuroscience settings

Submissions

This call solicits 8-page full papers reporting original and unpublished research that combines cognitive modeling and computational linguistics. Accepted papers are expected to be presented at the workshop and will be published in the workshop proceedings. They should emphasize obtained results rather than intended work, and should indicate clearly the state of completion of the reported results. A paper accepted for presentation at the workshop must not be presented or have been presented at any other meeting with publicly available proceedings. If essentially identical papers are submitted to other conferences or workshops as well, this fact must be indicated at submission time. To facilitate double-blind reviewing, submitted papers should not include any identifying information about the authors.

Submissions must be formatted using ACL 2011 style files, available at:
http://www.acl2011.org/latex/
http://www.acl2011.org/word/

Contributions should be submitted in PDF via the submission site:
https://www.softconf.com/acl2011/CogModCL

The submission deadline is 11:59 PM Eastern Time on April 01, 2011.

Pathway to Journal Publication

All accepted CMCL papers will be published in the workshop proceedings, as is customary at ACL.
However, CMCL presenters whose work holds broad interest for the wider cognitive science community will be encouraged to prepare extended versions of their papers (16 pages in APA format). If approved by a second round of reviewing, these extended papers will appear in a forthcoming issue of TopiCS, a journal of the Cognitive Science Society, entitled "Models of Language Comprehension". These expanded papers will need to be substantially adapted to address the broader TopiCS readership. The Program Committee will be assisted by additional experts, as needed, to apply this and other review criteria.

Important Dates

Submission deadline: April 01, 2011
Notification of acceptance: April 25, 2011
Camera-ready versions due: May 06, 2011
Workshop: June 23, 2011, at ACL 2011

Workshop Chairs

Frank Keller, School of Informatics, University of Edinburgh
David Reitter, Department of Psychology, Carnegie Mellon University

Program Committee

Steven Abney (Michigan), Harald R. Baayen (Alberta), Matthew Crocker (Saarland), Vera Demberg (Saarland), Tim O'Donnell (Harvard), Amit Dubey (Edinburgh), Mike Frank (Stanford), Ted Gibson (MIT), John Hale (Cornell), Keith Hall (Google), Florian Jaeger (Rochester), Lars Konieczny (Freiburg), Roger Levy (San Diego), Richard Lewis (Michigan), Stephan Oepen (Oslo), Ulrike Pado (VICO Research), Douglas Roland (Buffalo), William Schuler (Ohio State), Mark Steedman (Edinburgh), Patrick Sturt (Edinburgh), Shravan Vasishth (Potsdam)

From mchan at inf.ed.ac.uk Fri Dec 17 19:48:14 2010
From: mchan at inf.ed.ac.uk (Michael Chan)
Date: Sat, 18 Dec 2010 00:48:14 +0000
Subject: [ACT-R-users] 1st CFP: IJCAI-11 Workshop on Discovering Meaning On the Go in Large & Heterogeneous Data (LHD-11)
Message-ID: <4D0C04CE.3080703@inf.ed.ac.uk>

Apologies for cross-posting

------------------------------------------------------------------------------
Call for papers for the LHD-11 workshop at IJCAI-11, July 2011, Barcelona:
Discovering Meaning On the Go in Large & Heterogeneous Data
http://dream.inf.ed.ac.uk/events/lhd-11/
------------------------------------------------------------------------------

An interdisciplinary approach is necessary to discover and match meaning dynamically in a world of increasingly large data. This workshop aims to bring together practitioners from academia, industry and government for interaction and discussion. The workshop will feature:

* A panel discussion representing industrial and governmental input, entitled "Big Society meets Big Data: Industry and Government Applications of Mapping Meaning". Panel members will include:
  * Peter Mika (Yahoo!)
  * Alon Halevy (Google)
  * Tom McCutcheon (Dstl)
  * (tbc)
* An invited talk from Fausto Giunchiglia, discussing the relationship between social computing and ontology matching
* Paper and poster presentations
* Workshop sponsored by: Yahoo! Research, W3C and others

Workshop Description

The problem of semantic alignment, that of two systems failing to understand one another when their representations are not identical, occurs in a huge variety of areas: Linked Data, database integration, e-science, multi-agent systems, information retrieval over structured data; anywhere, in fact, where semantics or a shared structure are necessary but centralised control over the schema of the data sources is undesirable or impractical. Yet this is increasingly a critical problem in the world of large-scale data, particularly as more and more of this kind of data is available over the Web. In order to interact successfully in an open and heterogeneous environment, being able to dynamically and adaptively integrate large and heterogeneous data from the Web "on the go" is necessary. This may not be a precise process but a matter of finding a good enough integration to allow interaction to proceed successfully, even if a complete solution is impossible.
Considerable success has already been achieved in the field of ontology matching and merging, but the application of these techniques, often developed for static environments, to the dynamic integration of large-scale data has not been well studied. Presenting the results of such dynamic integration to both end-users and database administrators, while providing quality assurance and provenance, is not yet a feature of many deployed systems. To make matters more difficult, there are massive amounts of information available online that could be integrated, but this information is often chaotically organised, stored in a wide variety of data formats, and difficult to interpret.

This area has been of interest to academia for some time, and is becoming increasingly important in industry and, thanks to open data efforts and other initiatives, to government as well. The aim of this workshop is to bring together practitioners from academia, industry and government who are involved in all aspects of this field: from those developing, curating and using Linked Data, to those focusing on matching and merging techniques.
Topics of interest include, but are not limited to:

* Integration of large and heterogeneous data
* Machine learning over structured data
* Ontology evolution and dynamics
* Ontology matching and alignment
* Presentation of dynamically integrated data
* Incentives and human computation over structured data and ontologies
* Ranking and search over structured and semi-structured data
* Quality assurance and data cleansing
* Vocabulary management in Linked Data
* Schema and ontology versioning and provenance
* Background knowledge in matching
* Extensions to knowledge representation languages to better support change
* Inconsistency and missing values in databases and ontologies
* Dynamic knowledge construction and exploitation
* Matching for dynamic applications (e.g., p2p, agents, streaming)
* Case studies, software tools, use cases, applications
* Open problems
* Foundational issues

Applications and evaluations on data sources from the Web and Linked Data are particularly encouraged.

Submission

LHD-11 invites submissions of both full-length papers of no more than 6 pages and position papers of 1-3 pages. Authors of full papers judged to be both of high quality and of broad interest to most attendees will be invited to give full presentations; authors of position papers will be invited to participate in "group panels" and in a poster session. All accepted papers (both position and full-length papers) will be published as part of the IJCAI workshop proceedings and will be available online from the workshop website. After the workshop, we will be publishing a special issue of the Artificial Intelligence Review, and authors of the best submissions will be invited to submit extended versions of their papers (subject to the overall standard of submissions being appropriately high). All contributions should be in PDF format and should be uploaded via http://www.easychair.org/conferences/?conf=lhd11. 
Authors should follow the IJCAI author instructions: http://ijcai-11.iiia.csic.es/calls/formatting_instructions

Important Dates

Abstract submission: March 14, 2011
Notification: April 25, 2011
Camera ready: May 16, 2011
Early registration: TBA
Late registration: TBA
Workshop: 16th July, 2011

Organising Committee:
Fiona McNeill (University of Edinburgh)
Harry Halpin (Yahoo! Research)
Michael Chan (University of Edinburgh)

Program Committee:
Marcelo Arenas (Pontificia Universidad Catolica de Chile)
Krisztian Balog (University of Amsterdam)
Paolo Besana (University of Edinburgh)
Roi Blanco (Yahoo! Research)
Paolo Bouquet (University of Trento)
Ulf Brefeld (Yahoo! Research)
Alan Bundy (University of Edinburgh)
Ciro Cattuto (ISI Foundation)
Vinay Chaudhri (SRI)
James Cheney (University of Edinburgh)
Oscar Corcho (Universidad Politécnica de Madrid)
Shady Elbassuoni (Max-Planck-Institut für Informatik)
Jerome Euzenat (INRIA Grenoble Rhone-Alpes)
Eraldo Fernandez (Pontifícia Universidade Católica do Rio de Janeiro)
Aldo Gangemi (CNR)
Pat Hayes (IHMC)
Ivan Herman (W3C)
Tom McCutcheon (Dstl)
Shuai Ma (Beihang University)
Ashok Malhotra (Oracle)
Daniel Miranker (University of Texas-Austin)
Adam Pease (Articulate Software)
Valentina Presutti (CNR)
David Robertson (University of Edinburgh)
Juan Sequeda (University of Texas-Austin)
Pavel Shvaiko (Informatica Trentina)
Jamie Taylor (Google)
Evelyne Viegas (Microsoft Research)

-- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From mchan at inf.ed.ac.uk Fri Dec 17 21:38:34 2010 From: mchan at inf.ed.ac.uk (Michael Chan) Date: Sat, 18 Dec 2010 02:38:34 +0000 Subject: [ACT-R-users] 1st CFP: IJCAI-11 Workshop on Discovering Meaning On the Go in Large & Heterogeneous Data (LHD-11) Message-ID: <4D0C1EAA.4000803@inf.ed.ac.uk> An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: not available URL: From Tiffany.Jastrzembski at wpafb.af.mil Sun Dec 19 13:01:32 2010 From: Tiffany.Jastrzembski at wpafb.af.mil (Jastrzembski, Tiffany S Civ USAF AFMC 711 HPW/RHAC) Date: Sun, 19 Dec 2010 13:01:32 -0500 Subject: [ACT-R-users] BRIMS 2011 Submission Deadline Extension! Message-ID: <9AC197D8D0788140BC98A478FB3852A861FFEC@VFOHMLMC11.Enterprise.afmc.ds.af.mil> (Best viewed in HTML; Apologies for Cross-Postings) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The BRIMS 2011 Submission Deadline has been extended to January 6, 2011 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For details on submission content and guidelines, please navigate to: http://brimsconference.org/wp-content/uploads/2008/09/BRIMS_2011_Call_for_Papers.pdf For submission templates, please navigate to: http://brimsconference.org/wp-content/uploads/2008/10/Authors-Template-for-BRIMS-Submissions.doc You are invited to participate in the 20th Conference on Behavior Representation in Modeling and Simulation (BRIMS), to be held at the Sundance Resort in Sundance, UT. BRIMS enables modeling and simulation research scientists, engineers, and technical communities across disciplines to meet, share ideas, identify capability gaps, discuss cutting-edge research directions, highlight promising technologies, and showcase the state-of-the-art in Department of Defense related applications. The BRIMS Conference will consist of many exciting elements in 2011, including special topic areas, technical paper sessions, special symposia/panel discussions, and government laboratory sponsor sessions. 
Highlights of BRIMS 2011 include a fantastic and eclectic lineup of keynote speakers spanning cognitive modeling, sociocultural modeling, and network science:

John Laird, PhD, University of Michigan, http://ai.eecs.umich.edu/people/laird/
Lael Schooler, PhD, Max Planck Institute, http://ntfm.mpib-berlin.mpg.de/mpib/FMPro
Kathleen Carley, PhD, Carnegie Mellon University, http://www.casos.cs.cmu.edu/bios/carley/carley.html
Chris Barrett, PhD, Virginia Tech, http://ndssl.vbi.vt.edu/people/cbarrett.html

The BRIMS Executive Committee invites papers, posters, demos, symposia, panel discussions, and tutorials on topics related to the representation of individuals, groups, teams and organizations in models and simulations. All submissions are peer-reviewed (see www.brimsconference.org for additional details on submission types).

Key Dates:
All submissions due: December 21, 2010
Tutorial acceptance: January 31, 2011
Author notification: January 31, 2011
Final version due: February 18, 2011
Tutorials held: March 21, 2011
BRIMS 2011 opens: March 22, 2011

Special Topic Areas of Interest are identified to elicit specific technical content:
* M&S in network science
* Statistical/graphical approaches to M&S
* M&S for asymmetric warfare and joint force applications
* Cognitive or behavioral performance moderators in M&S
* Integration and reuse of models
* Large-scale, persistent, and generative modeling issues

General Topic Areas of Interest include, but are not limited to:

Modeling
* Intelligent agents and avatars/adversarial modeling
* Cognitive robots and human-robot interaction
* Models of reasoning and decision making
* Model validation & comparison
* Socio-cultural M&S: team/group/crowd behavior
* Physical models of human movement
* Performance assessment and skill monitoring/tracking
* Performance prediction/enhancement/optimization
* Intelligent tutoring systems
* Knowledge acquisition/engineering
* Human behavior issues in model federations

Simulation
* Synthetic 
environments for human behavior representation
* Terrain representation and reasoning
* Spatial reasoning
* Time representation
* Human behavior usability and interoperability
* Efficiency, usability, affordability issues
* Operator interfaces
* Multi-resolution/fidelity simulations
* Science of simulation issues

ACCOMMODATIONS and REGISTRATION

The conference will be held at the Sundance Resort in Sundance, UT. Visit www.sundanceresort.com for general information about the site and accommodations. Conference and hotel registration, general area, and travel information can be found at www.brimsconference.org.

BRIMS PROGRAM COMMITTEE: Bradley J. Best (Adaptive Cognitive Systems), William G. Kennedy (George Mason University), Frank E. Ritter (Pennsylvania State University)

BRIMS EXECUTIVE COMMITTEE: Joe Armstrong (CAE), Brad Cain (Defence Research and Development Canada), Bruno Emond (National Research Council Canada), Coty Gonzalez (Carnegie Mellon University), Brian Gore (NASA), Jeff Hansberger (Army Research Laboratory), Kenneth Kwok (DSO National Laboratories, Singapore), John Laird (University of Michigan), Christian Lebiere (Carnegie Mellon University), Christopher Myers (Air Force Research Laboratory), Bharat Patel (Defence Science and Technology Laboratory, UK), Sylvain Pronovost (Carleton University & CAE), Venkat Sastry (Cranfield University), Barry Silverman (University of Pennsylvania), Neil Smith (QinetiQ), LtCol David Sonntag (AOARD), Webb Stacy (Aptima), Mike van Lent (SoarTech), Walter Warwick (Alion Science and Technology), Jason Wong (Naval Undersea Warfare Center), Patrick Xavier (Sandia National Laboratories)

A special thanks to the BRIMS 2011 Government Sponsors for their support of this event: Air Force Research Laboratory, Army Research Laboratory, DARPA, Office of Naval Research, Natick Soldier Center, NASA, and the UK Ministry of Defence.

If you have any questions or concerns, please contact the BRIMS 2011 Conference Chair, Dr. 
Tiffany Jastrzembski (tiffany.jastrzembski at wpafb.af.mil). ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Tiffany S. Jastrzembski, Ph.D. Cognitive Research Scientist Air Force Research Laboratory 2698 G Street, Building 190 Wright-Patterson AFB, OH 45433-7604 Phone: (937) 255-2085 tiffany.jastrzembski at wpafb.af.mil -------------- next part -------------- An HTML attachment was scrubbed... URL: From icnc-fskd-cfp at dhu.edu.cn Sat Dec 18 17:52:22 2010 From: icnc-fskd-cfp at dhu.edu.cn (Bing Li) Date: Sun, 19 Dec 2010 06:52:22 +0800 Subject: [ACT-R-users] ICNC'11-FSKD'11 Submission Deadline 10 January: EI Compendex/ISI/IEEE Xplore Message-ID: <492713442.31411@eyou.net> Dear Colleague, We cordially invite you to submit a paper or invited session proposal to the upcoming 7th International Conference on Natural Computation (ICNC'11) and the 8th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD'11), to be jointly held from 26-28 July 2011, in Shanghai, China. Shanghai, an open city on the coast and a famous historical and cultural city, is a gate to the Yangtze River delta. It is a municipality under the direct jurisdiction of the Central Government, the largest economic and trade center, a comprehensive industrial base and the leading port in China. Attractions include Yuyuan Garden ("Happy Garden" built in Ming Dynasty), Shanghai Museum with 120,000 pieces of rare relics, Shanghai World Financial Center, Jade Buddha Temple (Song Dynasty), Oriental Pearl TV Tower, Zhujiajiao Water Town, and Expo 2010 site. All papers in conference proceedings will be indexed by both EI Compendex and ISTP, as well as included in the IEEE Xplore (IEEE Conference Record Number for ICNC'11: 18082; IEEE Conference Record Number for FSKD'11: 18083). Extended versions of selected best papers will appear in an ICNC-FSKD special issue of International Journal of Intelligent Systems, an SCI-indexed journal (Impact Factor: 1.194). 
ICNC-FSKD is a premier international forum for scientists and researchers to present the state of the art of data mining and intelligent methods inspired by nature, particularly biological, linguistic, and physical systems, with applications to signal processing, design, and more. Previously, the joint conferences in 2005 through 2010 each attracted over 3000 submissions from around the world. ICNC'11-FSKD'11 is technically co-sponsored by the IEEE Circuits and Systems Society. The registration fee of USD 390 includes proceedings, lunches, dinners, banquet, coffee breaks, and all technical sessions. To promote international participation of researchers from outside the country/region where the conference is held (i.e., China's mainland), researchers outside of China's mainland are encouraged to propose invited sessions. The first author of each paper in an invited session must not be affiliated with an organization in China's mainland. All papers in the invited sessions can be marked as "Invited Paper". One organizer for each invited session with at least 6 registered papers will enjoy an honorarium of USD 400. Invited session organizers will solicit submissions, conduct reviews and recommend accept/reject decisions on the submitted papers. Invited session organizers will be able to set their own submission and review schedules, as long as a list of recommended papers is determined by 30 March 2011. Each invited session proposal should include: (1) the name, bio, and contact information of each organizer of the invited session; (2) the title and a short synopsis of the invited session. Please send your proposal to icnc-fskd at dhu.edu.cn For more information, visit the conference web page: http://icnc-fskd.dhu.edu.cn If you have any questions after visiting the conference web page, please email the secretariat at icnc-fskd at dhu.edu.cn Join us at this major event in exciting Shanghai !!! 
Organizing Committee icnc-fskd at dhu.edu.cn P.S.: Kindly forward to your colleagues and students in your school/department. If you wish to unsubscribe, in which case we apologize, please reply with " unsubscribe act-r-users at andrew.cmu.edu " in your email subject. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stu at agstechnet.com Mon Dec 20 11:09:01 2010 From: stu at agstechnet.com (Stu @ AGS TechNet) Date: Mon, 20 Dec 2010 11:09:01 -0500 Subject: [ACT-R-users] Print out specific buffers at a specific model run time Message-ID: <4D0F7F9D.8090503@agstechnet.com> **Sorry for the double post but I originally sent this from the wrong (non-member) email account** ACT-R Users, I would like to print out the contents (chunk slots and values) for a specific set of buffers (about 7 total) at a specific model run time. (For example, at 15.4 seconds, print out the contents of goal, imaginal, retrieval, visual, etc.) Is there a straightforward approach to do this? I can't find it in the reference guide. Thanks in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From db30 at andrew.cmu.edu Mon Dec 20 11:39:41 2010 From: db30 at andrew.cmu.edu (db30 at andrew.cmu.edu) Date: Mon, 20 Dec 2010 11:39:41 -0500 Subject: [ACT-R-users] Print out specific buffers at a specific model run time In-Reply-To: <4D0F7F9D.8090503@agstechnet.com> References: <4D0F7F9D.8090503@agstechnet.com> Message-ID: <7C5B437DBC5AE376047FA065@act-r6.psy.cmu.edu> --On Monday, December 20, 2010 11:09 AM -0500 "Stu @ AGS TechNet" wrote: > **Sorry for the double post but I originally sent this from the wrong (non-member) > email account** > > ACT-R Users, > > I would like to print out the contents (chunk slots and values) for a specific set > of buffers (about 7 total) at a specific model run time. 
(For example, at 15.4 seconds, > print out the contents of goal, imaginal, retrieval, visual, etc) > > Is there a straightforward approach to do this. > I can't find it in the reference guide. > Thanks in advance! > The buffer-chunk command prints out the details of the chunks in buffers for all the buffers provided (or all buffers if none provided). Thus, this: (buffer-chunk goal imaginal visual) would print out the details of the chunks in the goal, imaginal, and visual buffers. If you want something to happen at a particular time in the model run then you need to use one of the scheduling functions described in the reference manual. If you know the exact time then you can use schedule-event. It takes a time and a function to call (which could be a lambda specified inline). So, if you wanted to have the details of the goal and retrieval buffers printed at time 15.4 seconds you could add this to the setup code or model definition: (schedule-event 15.4 (lambda () (buffer-chunk goal retrieval))) Since that is presumably for debugging purposes you probably also want to specify the :maintenance flag on the event as true: (schedule-event 15.4 (lambda () (buffer-chunk goal retrieval)) :maintenance t) That will cause the model to ignore that event. If you don't specify that then the model could behave differently when you have that event scheduled relative to when you don't and that's usually not a good thing. Dan From stu at agstechnet.com Mon Dec 20 13:49:56 2010 From: stu at agstechnet.com (Stu @ AGS TechNet) Date: Mon, 20 Dec 2010 13:49:56 -0500 Subject: [ACT-R-users] Print out specific buffers at a specific model run time In-Reply-To: <7C5B437DBC5AE376047FA065@act-r6.psy.cmu.edu> References: <4D0F7F9D.8090503@agstechnet.com> <7C5B437DBC5AE376047FA065@act-r6.psy.cmu.edu> Message-ID: <4D0FA554.8050403@agstechnet.com> That's just the ticket! Thanks! 
On 12/20/2010 11:39 AM, db30 at andrew.cmu.edu wrote: 
> > Dan > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From troy.kelley at us.army.mil Tue Dec 21 09:52:11 2010 From: troy.kelley at us.army.mil (Kelley, Troy (Civ,ARL/HRED)) Date: Tue, 21 Dec 2010 09:52:11 -0500 Subject: [ACT-R-users] Perfect Episodic Memory References: <88C51F30-524D-40A9-8A5E-305400630E73@cmu.edu> Message-ID: <2D30123DFDFF1046B3A9CF64B6D9AC90DE29EF@ARLABML03.DS.ARL.ARMY.MIL> This past weekend on 60 minutes there was a story on perfect episodic memory. Unfortunately, the researcher who discovered this is calling it "superior autobiographical memory" which is unfortunate because it is clearly a discovery of memory for episodic events. Anyway, after MRI scans of these people, it was found that they had larger than normal temporal lobes and a larger caudate nucleus - which appears to indicate that perhaps "normal" people are limited by brain capacity to remember every episode in their lives. There also appears to be a relationship between perfect episodic memory and OCD tendencies since the caudate nucleus appears to be connected to OCD. Anyway, interesting story for any memory researcher. You can watch 60 minutes on the web here. http://www.cbsnews.com/video/watch/?id=7166313n&tag=mg;mostpopvideo Happy Holidays Troy Kelley ARL From venkateshrao1976 at gmail.com Tue Dec 21 16:44:09 2010 From: venkateshrao1976 at gmail.com (Venkateswara Rao) Date: Tue, 21 Dec 2010 16:44:09 -0500 Subject: [ACT-R-users] IICAI-11 Call for papers Message-ID: *IICAI-11 Call for papers* The 5th Indian International Conference on Artificial Intelligence (IICAI-11) will be held during December 14-16, 2011 in Tumkur (near Bangalore), India. Website: http://www.iiconference.org. We invite draft paper submissions and session proposals. IICAI is a series of high quality technical events in Artificial Intelligence (AI) and is also one of the major AI events in the world. 
The primary goal of the conference is to promote research and developmental activities in AI and related fields in India and the rest of the world. Another goal is to promote scientific information interchange between AI researchers, developers, engineers, students, and practitioners working in India and abroad. The conference will be held every two years to make it an ideal platform for people to share views and experiences in AI and related areas. Sincerely, Venkateswara Rao, Organizing Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From cl at cmu.edu Wed Dec 22 11:55:42 2010 From: cl at cmu.edu (Christian Lebiere) Date: Wed, 22 Dec 2010 11:55:42 -0500 Subject: [ACT-R-users] Neuro-Cognitive Systems Engineer Message-ID: Join our Center for Neural and Emergent Systems in beautiful Malibu, California. (http://www.hrl.com/laboratories/cnes/cnes_main.html) We are looking for postdocs and interns. ***SPECIAL REQUIREMENTS: MUST BE A U.S. PERSON

* Education Desired: PhD in Computational Neuroscience, Cognitive Psychology or related field, or Computer Science / Electrical Engineering with solid exposure to brain function and neural processing.
* Essential Job Functions: Study, develop, and implement computational models of brain regions, and integrate regions together to model cognition. Other tasks will include the development, simulation, evaluation, and implementation of neuromorphic algorithms for a variety of applications.
* Experience: Research experience in the following areas: building neuro-inspired cognitive systems using neural network modeling, neurobiological modeling of cognition, sensory processing, motor control, machine learning.
* Knowledge Required: Background in neural networks, brain modeling and simulation, computational neuroscience, systems neuroscience; strong programming skills, proficient with C, C++, Java, and/or Matlab. 
Highly desired: Proficiency with the Emergent neural network simulation system, ACT-R, and Lisp; skills in parallel computing.
* Essential Physical/Mental Requirements: Good communication (verbal and written) skills, ability to independently perform research and modeling, and active participation in R&D team activities.
**Special Requirements: U.S. citizen or permanent resident status required. If interested please send your resume to: rajan at hrl.com or visit our website at: www.hrl.com and apply online. From jazzka at gmail.com Thu Dec 23 07:47:56 2010 From: jazzka at gmail.com (Hector _) Date: Thu, 23 Dec 2010 13:47:56 +0100 Subject: [ACT-R-users] Custom production firing time Message-ID: Hi everybody! I have a small question... is it possible to define a custom firing time for a given production? I would like my production X to take 2 seconds to run. How could I do that if possible? Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From thalvers at cs.uoregon.edu Thu Dec 23 12:40:03 2010 From: thalvers at cs.uoregon.edu (Tim Halverson) Date: Thu, 23 Dec 2010 10:40:03 -0700 Subject: [ACT-R-users] Custom production firing time In-Reply-To: References: Message-ID: Hector, I believe what you need to do is change the "action time" parameter for that production. For example, (spp X :at 2) where "X" is the production name. On Dec 23, 2010, at 5:47 AM, Hector _ wrote: > Hi everybody! > > I have a small question... is it possible to define a custom firing time for a given production? I would like my production X to take 2 seconds to run. How could I do that if possible? > > Thank you! > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users
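[Editor's note] The two answers in this digest - printing buffer contents at a scheduled time with `buffer-chunk`/`schedule-event`, and setting a production's action time with `spp :at` - can be combined into one small model. The sketch below assumes ACT-R 6 is loaded; the model name, production name, and times are illustrative, not taken from the thread:

```lisp
;; Minimal sketch combining the two techniques from this digest.
;; Assumes the ACT-R 6 software has already been loaded.
(clear-all)

(define-model spp-and-schedule-demo
  (sgp :esc t :trace-detail low)

  (chunk-type task state)
  (add-dm (g isa task state start))
  (goal-focus g)

  ;; A single production that moves the goal from start to done.
  (p slow-step
     =goal>
       isa   task
       state start
   ==>
     =goal>
       state done)

  ;; Make slow-step take 2 seconds instead of the default 50 ms
  ;; action time (the spp :at answer from Tim Halverson).
  (spp slow-step :at 2)

  ;; Print the goal buffer's chunk at 1.5 s into the run; the
  ;; :maintenance flag keeps the event from changing how the model
  ;; itself behaves (the schedule-event answer from Dan).
  (schedule-event 1.5 (lambda () (buffer-chunk goal))
                  :maintenance t))
```

With this loaded, `(run 10)` should show the goal buffer printed at 1.5 s and `slow-step` completing at 2.0 s rather than 0.05 s; exact trace output depends on the ACT-R 6 version and trace settings.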