From tkelley at arl.army.mil Mon Feb 5 09:13:38 2007
From: tkelley at arl.army.mil (Kelley, Troy (Civ,ARL/HRED))
Date: Mon, 5 Feb 2007 09:13:38 -0500
Subject: [ACT-R-users] BRIMS 2007 Announces Four Tutorials (UNCLASSIFIED)
Message-ID: <2D30123DFDFF1046B3A9CF64B6D9AC9005C924@ARLABML03.DS.ARL.ARMY.MIL>

Classification: UNCLASSIFIED
Caveats: NONE

Behavior Representation in Modeling and Simulation Conference
BRIMS 2007 Announces Four Tutorials
Monday 26 March 2007, Norfolk, VA

Please go to www.sisostds.org and select BRIMS for more information. Register online now for BRIMS 2007. Early registration ends 21 Feb 2007.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tutorial # 1 - Monday, March 26, 8:00 AM to 12:00 Noon
Cognitive Crash Dummies: Predictive Human Performance Modeling for Interactive System Design
Bonnie E. John, Ph.D. and Richard L. Lewis, Ph.D.

Crash dummies in the auto industry save lives by testing the physical safety of automobiles before they are brought to market. "Cognitive crash dummies" save time, money, and potentially even lives by allowing computer-based system designers to test their design ideas before implementing those ideas in products and processes. This tutorial will give participants hands-on experience building predictive models of human performance using a new tool called CogTool, developed at Carnegie Mellon University, the University of Michigan, and NASA Ames Research Center.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tutorial # 2 - Monday, March 26, 8:00 AM to 12:00 Noon
Examining Subject Variability Using Human Performance Modeling
Anna Fowles-Winkler; Andy Belyavin, Ph.D.; and Brad Cain

This tutorial will give participants an overview of the fundamentals of computer simulation and task network modeling. Building on those fundamentals, we will then explore techniques for modeling human subject variability.
While the techniques presented can be used with any general-purpose modeling application, this tutorial will use the Integrated Performance Modelling Environment (IPME).

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tutorial # 3 - Monday, March 26, 1:00-5:00 PM
User Interface Evaluation and Development using the GRaph-Based Interface Language (GRBIL) Tool
Rick Archer and Michael Matessa, Ph.D.

This tutorial describes and demonstrates a software application, the GRaph-Based Interface Language (GRBIL) tool, which allows developers of future weapon systems, decision aids, and other operator support tools to evaluate the efficiency and effectiveness of an interface design simply by sketching the interface graphically. From that graphical description of the interface, GRBIL automatically generates an Adaptive Control of Thought - Rational (ACT-R) cognitive model of the user interacting with the system. Using GRBIL, a designer can also define any number of interface tasks that can be executed many times by the ACT-R models. Because the GRBIL tool is built on top of a standard interface toolkit, the resulting interface description can then be used directly by the implementers as the basis for the actual system rather than being a throwaway product of a separate evaluation phase.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tutorial # 4 - Monday, March 26, 1:00-5:00 PM
Human Behavior Models and Performance Moderator Functions (PMFs) for Realistic Leader, Follower, & Faction Sims
Barry G. Silverman, PhD

This tutorial will begin with a discussion of synthetic bots, avatars, and agents, including their usage to assist, train, and entertain people in simulators and in videogames. We will jointly examine the state of the practice in agent simulation and explore the need for greater realism in behavioral dimensions. What makes an agent believable? rational?
emotionally appealing? entertaining? Humans are not perfectly rational agents, nor are they tireless automatons. In order to enhance the realism of synthetic avatars, one needs to include ways to model a potentially wide array of performance moderators such as heat/noise, injury, energy/fatigue, boredom and inattention, peer pressure, stress/panic, emotions, perceptual and cognitive errors, reputation, trust, relationships, and the like. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Laurel Allender & Troy Kelley, BRIMS 2007 Co-Chairs Classification: UNCLASSIFIED Caveats: NONE To reply: BRIMS-EXEC-PVT.48064 at discussions.sisostds.org To start a new topic: BRIMS-EXEC-PVT at discussions.sisostds.org To view discussion: http://discussions.sisostds.org/default.asp?boardid=2&action=9&read=4146 4&fid=52 To (un)subscribe: Send a message to BRIMS-EXEC-PVT.list-request at discussions.sisostds.org with the word unsubscribe in the message body. Important: the unsubscribe email needs to be in plain text format, and needs to have no subject line. SISO: http://www.sisostds.org/ Classification: UNCLASSIFIED Caveats: NONE From devipsr at gmail.com Sat Feb 3 07:33:06 2007 From: devipsr at gmail.com (Devip Sr) Date: Sat, 3 Feb 2007 07:33:06 -0500 Subject: [ACT-R-users] Paper submission deadline extended Message-ID: <6530578c0702030433p6ccb42c5l51a7330fea32436@mail.gmail.com> *NOTE:* The deadline for draft paper submission is extended until February 12 2007, due to requests from many authors. www.PromoteResearch.org The 2007 International Multi-Conference in Computer Science, Engineering, and Information Science will be held during 9-12 of July 2007 in Orlando, FL, USA. 
The multi-conference consists of four major events, namely:

- International Conference on Artificial Intelligence and Pattern Recognition (AIPR-07)
- International Conference on Enterprise Information Systems and Web Technologies (EISWT-07)
- International Conference on High Performance Computing, Networking and Communication Systems (HPCNCS-07)
- International Conference on Software Engineering Theory and Practice (SETP-07)

All these events will be held simultaneously at the same place. Click on www.PromoteResearch.org for more information.

Sincerely,
Devip Sr
Publicity committee member

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jcrgolovine at aol.com Tue Feb 6 12:37:02 2007
From: jcrgolovine at aol.com (Jean-Claude Golovine)
Date: Tue, 6 Feb 2007 17:37:02 -0000
Subject: [ACT-R-users] Advice
Message-ID:

Hello,

I am having a bit of a hard time at the moment. I have been invited to undertake a research project that includes ACT-R and to write a toolkit to perform cognitive operations on UIs. The platform is Java based, which means I have to communicate with ACT-R (Lisp) through sockets (maybe). However, I have an open mind on the matter and my question to you is: what is the best way to interface ACT-R from another platform? I have read about some work that uses this approach (TCP/IP datagrams) but I cannot find anything substantial to get me going. Can you help?

Best Regards

Jean-Claude

Jean-Claude Golovine
Research Student
School of Computing
The Robert Gordon University
St. Andrew Street
Aberdeen AB25 1HG
Tel: 01224 262575
http://www.comp.rgu.ac.uk/docs/info/staff.php?name=jcg

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bej at cs.cmu.edu Tue Feb 6 16:12:41 2007
From: bej at cs.cmu.edu (Bonnie John)
Date: Tue, 06 Feb 2007 13:12:41 -0800
Subject: [ACT-R-users] Advice
In-Reply-To:
References:
Message-ID: <45C8EF49.4060100@cs.cmu.edu>

You should look into CogTool, because I believe we have already done a lot of the work you are contemplating. CogTool is available under an open source license. Start by reading our website and then, if you are interested, correspond with us about getting the code.

Website: http://www.cs.cmu.edu/~bej/cogtool/
email: cogtool at cs.cmu.edu

Bonnie John

Jean-Claude Golovine wrote:
>
> Hello,
>
> I am having a bit of a hard time at the moment. I have been invited to
> undertake a research project that includes ACT-R and write a toolkit
> to perform cognitive operations on UIs. The platform is Java based
> which means I have to communicate with ACT-R (Lisp) through sockets
> (maybe). However, I have an open mind on the matter and my question to
> you is: what is the best way to interface ACT-R from another platform?
> I have read about some work that uses this approach (TCP/IP
> datagrams) but I cannot find anything substantial to get me going.
> Can you help?
>
> Best Regards
>
> Jean-Claude
>
> Jean-Claude Golovine
> Research Student
> School of Computing
> The Robert Gordon University
> St.
Andrew Street > Aberdeen AB25 1HG > > Tel: 01224 262575 > http://www.comp.rgu.ac.uk/docs/info/staff.php?name=jcg > > ------------------------------------------------------------------------ > > _______________________________________________ > ACT-R-users mailing list > ACT-R-users at act-r.psy.cmu.edu > http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users > From db30 at andrew.cmu.edu Tue Feb 6 17:19:02 2007 From: db30 at andrew.cmu.edu (Dan Bothell) Date: Tue, 06 Feb 2007 17:19:02 -0500 Subject: [ACT-R-users] Advice In-Reply-To: References: Message-ID: --On Tuesday, February 06, 2007 5:37 PM +0000 Jean-Claude Golovine wrote: > > > Hello, > > I am having a bit of a hard time at the moment. I have been invited to > undertake a research project that includes ACT-R and write a toolkit to > perform cognitive operations on UIs. The platform is Java based which > means I have to communicate with ACT-R (Lisp) through sockets (maybe). > However, I have an open mind on the matter and my question to you is: > what is the best way to interface ACT-R from another platform? I have > read about some work that uses this approach (TCP/IP datagram's) but I > cannot find anything substantial to get me going. Can you help? > > I would say that there is no universal "best way" because what's best for any particular situation depends on a lot of factors. I've seen lots of different mechanisms used which all seemed "best" for the situation. Some example mechanisms would be: through socket connections (either TCP or UDP), by using the Lisp's foreign function interface to interact directly with the code of the other system, or through a file-based read/write system, and I'm sure there are other means people have used as well. One thing to consider is whether you really need to connect to the other system or if it's easier to use something like CogTool (as suggested in another message by Bonnie John), a system like SegMan (work by Robert St. 
Amant) or to mock up an interface within Lisp using either the native Lisp tools or the basic GUI tools provided by ACT-R (which are somewhat limited but already fully supported for models).

If you do decide to hook the model up to the external system, then there are really two pieces to the design. One is how you represent things for the model, and the other is how you get that interaction into and out of ACT-R.

In terms of hooking things up to the model, the general mechanism is to create what's called a "Device" for ACT-R. This provides the visual features to the model and receives the motor actions from the manual system, which it can send to the other system. For the very basic information on the device you can look at the reference manual available with the newest version of ACT-R 6. However, that doesn't yet contain all the detailed information you'll need to actually implement one, so you'll also need to look at the docs for ACT-R/PM which can be found at:

Then you need to consider how you hook the device up to the external system, and some of the things to consider would be:

- Does the other system already have an external interface (sockets or otherwise)?
- Do you have the ability to modify the other system or is it a fixed application?
- What Lisp software are you using, and how comfortable are you with its sockets/streams/foreign function interface/multi-threading capabilities?
- What kinds of interaction do you need - the basic mouse/keyboard operations built into the ACT-R motor module or more advanced things like dragging a mouse to select items and double-clicking?

Depending on what you've got and what you need there are lots of options, and what's "best" depends a lot on the situation.
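The socket route can be prototyped without any ACT-R code at all. Below is a minimal, hypothetical loopback sketch in Python; the command names and the newline-delimited protocol are made up for illustration and are not part of ACT-R. A toy "interface" server acknowledges each command that a toy "model" client sends over a TCP stream:

```python
import socket
import threading

def interface_server(listener):
    """Toy 'interface' side: accept one connection and acknowledge
    each newline-delimited command it receives."""
    conn, _ = listener.accept()
    with conn:
        stream = conn.makefile("rw")
        for line in stream:
            command = line.strip()
            stream.write(f"ok {command}\n")  # echo an acknowledgement
            stream.flush()
            if command == "quit":
                break

# Listen on an ephemeral local port so the sketch is self-contained.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=interface_server, args=(listener,), daemon=True).start()

# Toy 'model' side: send two hypothetical motor commands, collect replies.
replies = []
with socket.create_connection(("127.0.0.1", port)) as client:
    stream = client.makefile("rw")
    for command in ("move-mouse 10 20", "quit"):
        stream.write(command + "\n")
        stream.flush()
        replies.append(stream.readline().strip())

print(replies)  # ['ok move-mouse 10 20', 'ok quit']
```

In a real setup the Java side would play one of these roles and a Lisp stream the other; the pattern (line-delimited commands over one TCP stream socket) is the same one the ACT-R environment GUI uses to talk to Lisp.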
Finally, here are some notes about other things provided with ACT-R which could be of use in hooking up an external system:

- If you're using ACL w/IDE on Windows, the default devices for ACT-R will generate actual system-level mouse/keyboard actions which can be detected by any application. So, you may be able to use those to control the external system and only need to get the "visual" information into the model.
- The ACT-R environment GUI is connected to the ACT-R system via a TCP socket. Thus, there is support built in for using TCP stream sockets (but not UDP datagrams).

Sorry, I don't have a simple "here's how to do it" answer for you, but hopefully that provides you with some resources to consult and things to consider. If you have any specific questions about the ACT-R components, feel free to ask me about them.

Dan

From ritter at ist.psu.edu Tue Feb 6 17:00:50 2007
From: ritter at ist.psu.edu (Frank Ritter)
Date: Tue, 6 Feb 2007 17:00:50 -0500
Subject: [ACT-R-users] Advice
In-Reply-To:
References:
Message-ID:

A couple of thoughts:

1. If you have to interact with a particular interface, and have access to it, you can instrument it using the approach in http://acs.ist.psu.edu/papers/ritterBJY00.pdf

2. If you do not have access to the interface, but have to use it exactly, then look at SegMan, e.g., http://acs.ist.psu.edu/papers/ritterVRSASip.pdf and SegMan at http://www.csc.ncsu.edu/faculty/stamant/segman-introduction.html

3. If you have a particular interface, but can duplicate it easily, consider ACT-R/PM (built into 6), and/or using CogTool to implement it and the model.

While in the UK, you should visit UCL, where Richard Young is, and he can often provide some help to such projects.

best of luck,

Frank

ps. if you are working in case 1 or 2, you may also want to use RUI to record users' actions: http://ritter.ist.psu.edu/projects/RUI/

At 17:37 +0000 06/02/2007, Jean-Claude Golovine wrote:
>Hello,
>
>I am having a bit of a hard time at the moment.
I have been invited >to undertake a research project that includes ACT-R and write a >toolkit to perform cognitive operations on UIs. The platform is Java >based which means I have to communicate with ACT-R (Lisp) through >sockets (maybe). However, I have an open mind on the matter and my >question to you is: what is the best way to interface ACT-R from >another platform? I have read about some work that uses this >approach (TCP/IP datagram's) but I cannot find anything substantial >to get me going. Can you help? > >Best Regards > >Jean-Claude > >Jean-Claude Golovine >Research Student >School of Computing >The Robert Gordon University >St. Andrew Street >Aberdeen AB25 1HG > >Tel: 01224 262575 >http://www.comp.rgu.ac.uk/docs/info/staff.php?name=jcg > > > > >_______________________________________________ >ACT-R-users mailing list >ACT-R-users at act-r.psy.cmu.edu >http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.halbruegge at unibw.de Wed Feb 7 10:15:09 2007 From: marc.halbruegge at unibw.de (Marc Halbruegge) Date: Wed, 07 Feb 2007 16:15:09 +0100 Subject: [ACT-R-users] Advice In-Reply-To: References: Message-ID: <45C9ECFD.9060206@unibw.de> Hi! > I am having a bit of a hard time at the moment. I have been invited to > undertake a research project that includes ACT-R and write a toolkit to > perform cognitive operations on UIs. The platform is Java based which > means I have to communicate with ACT-R (Lisp) through sockets (maybe). > However, I have an open mind on the matter and my question to you is: > what is the best way to interface ACT-R from another platform? I have > read about some work that uses this approach (TCP/IP datagram?s) but I > cannot find anything substantial to get me going. Can you help? I think it's a matter of individual preferences and programming style. 
If you want to interface C to Lisp, I can recommend using CFFI on the Lisp side (http://common-lisp.net/project/cffi/) and SWIG to generate the wrappers (http://www.swig.org/). Going from C to Java is quite simple using the Java Native Interface.

Greetings
Marc

> Best Regards
>
> Jean-Claude
>
> Jean-Claude Golovine
> Research Student
> School of Computing
> The Robert Gordon University
> St. Andrew Street
> Aberdeen AB25 1HG
>
> Tel: 01224 262575
> http://www.comp.rgu.ac.uk/docs/info/staff.php?name=jcg
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> ACT-R-users mailing list
> ACT-R-users at act-r.psy.cmu.edu
> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users

--
Dipl.-Psych. Marc Halbruegge
Human Factors Institute
Faculty of Aerospace Engineering
Bundeswehr University Munich
Werner-Heisenberg-Weg 39
D-85579 Neubiberg
Phone: +49 89 6004 3497
Fax: +49 89 6004 2564
E-Mail: marc.halbruegge at unibw.de

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL:

From grayw at rpi.edu Wed Feb 7 18:24:50 2007
From: grayw at rpi.edu (Wayne Gray)
Date: Wed, 7 Feb 2007 18:24:50 -0500
Subject: [ACT-R-users] ACT Antibody At GenWay
In-Reply-To: <11708429130e895ac31b1a435c7b20d49a3e9e00c8@genwayinfo.com>
References: <11708429130e895ac31b1a435c7b20d49a3e9e00c8@genwayinfo.com>
Message-ID:

This is reassuring news. I am sure that the rest of you will be as relieved to hear this as I was!!

Wayne

On Feb 7, 2007, at 14:48, Mazy Marjani, M.Sc. wrote:

> Dear Dr. Gray,
>
> From your article titled "The soft constraints hypothesis: a
> rational analysis approach to resource allocation for interactive
> behavior." (Psychol Rev.
2006 Jul;113(3):461-82.), we learned of
> your research with ACT and thought you might be interested in
> knowing that GenWay currently has an antibody against this target
> as part of our catalog products.
>
> Here's a link to our website to view the datasheet: ACT Antibody
>
> GenWay is the world's leader in avian (IgY) antibodies and their
> applications. We also specialize in recombinant protein expression
> and purification. GenWay has an online catalog of over 10,000
> products, including proteins, antibodies and ELISA kits, which have
> been categorized by disease, function and tissue. We also offer
> custom services in the following areas:
>
> Polyclonal IgY Antibody Service
> Polyclonal IgG Antibody Service
> Monoclonal IgY Antibody Service
> Monoclonal IgG Antibody Service
> Recombinant Protein Expression and Purification Service
>
> Thank you in advance for your interest and consideration. I look
> forward to any questions or concerns you might have regarding any
> of our products or services.
>
> Best regards,
>
> Mazy Marjani, M.Sc.
> International Sales Manager
> GenWay Biotech, Inc.
> 6777 Nancy Ridge Drive
> San Diego, CA 92121
> Phone: 858-458-0866 ext. 122
> Fax: 858-458-0833
> Email: mmarjani at genwaybio.com
> Webpage: http://www.genwaybio.com
>
> Please click here to remove your email from our mailing list

**Rensselaer**Rensselaer**Rensselaer**Rensselaer**Rensselaer**
Wayne D.
Gray; Professor of Cognitive Science
Rensselaer Polytechnic Institute
Carnegie Building (rm 108) ;;for all surface mail & deliveries
110 8th St.; Troy, NY 12180
EMAIL: grayw at rpi.edu, Office: 518-276-3315, Fax: 518-276-3017
for general information see: http://www.rpi.edu/~grayw/
for On-Line publications see: http://www.rpi.edu/~grayw/pubs/downloadable_pubs.htm
for the CogWorks Lab see: http://www.cogsci.rpi.edu/cogworks/

If you just have formalisms or a model you are doing "operations research" or "AI", if you just have data and a good study you are doing "experimental psychology", and if you just have ideas you are doing "philosophy" -- it takes all three to do cognitive science.
**Rensselaer**Rensselaer**Rensselaer**Rensselaer**Rensselaer**

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From suley at osir.hihm.no Thu Feb 8 13:13:25 2007
From: suley at osir.hihm.no (Sule Yildirim)
Date: Thu, 8 Feb 2007 19:13:25 +0100
Subject: [ACT-R-users] ECM_MAI 07
Message-ID: <037901c74bac$d9d19e40$e116249e@SULE>

ECM_MAI 2007
EXTENDING COMPUTATIONAL COGNITIVE MODELLING TO ISSUES OF MULTI-AGENT INTERACTION
in conjunction with ICAI'07, Las Vegas, Nevada, USA (June 25-28, 2007)

Call for Papers

The fields of cognitive science, artificial intelligence, and the social sciences can together create a better understanding of cognition and social processes than they can independently of each other. Some efforts in the social sciences, especially social simulations, have evolved an understanding of social processes in societies independently from the other two fields. However, Sun (2006), among others, has shown that there is a relation between individual cognition and social processes. Can this connection be understood by exploring the relationship between computational cognitive modeling and social simulation? How is the cognition of a society different from individual cognition? In this session we will examine these questions and many more.
We invite submissions of research that investigates any of the following topics: (1) the relation between individual and social cognition, (2) multi-agent models that are related to advances in individual cognition, (3) social simulations that utilize cognitive science or artificial intelligence techniques, (4) aspects of social cognition that require modeling, and (5) the integration of cognitive models of learning into multi-agent interaction.

Reference: Sun, R. (Ed.) (2006). Cognition and Multi-Agent Interaction. Cambridge University Press.

Submission of Papers

Authors are invited to submit their draft paper (between 5 to 7 pages - single space, font size of 10 to 12) to Sule Yildirim (suley at osir.hihm.no) or Bill Rand (wrand at northwestern.edu) before the deadline. E-mail submissions in MS Word or PDF formats are preferable. All reasonable typesetting formats are acceptable. These are general guidelines; if you would like to submit in a different format, please contact the coordinators. Papers must not have been previously published or currently submitted for publication elsewhere. Upon acceptance, authors will be asked to follow the IEEE style format in preparing their papers for publication (max. 7 pages).

The first page of the submission should include:
- Title of the paper
- Name, affiliation, postal address, e-mail address, telephone number and fax number for each author
- Name of the author who will present the paper should be indicated
- A maximum of five keywords

Session Format

The session will begin with an overview of the topic by the coordinators, and the peer-reviewed papers will then be orally presented. If time permits, a wrap-up discussion will take place at the end of the session.

Important Dates

Submission deadline: 1 March, 2007
Notification of acceptance: 20 March, 2007
Camera-ready due: 20 April, 2007
Conference registration deadline: 20 April, 2007

See ICAI'07 website for complete details.

Program Committee

Session Chairs:
Sule Yildirim, Hedmark University College
Bill Rand, Northwestern University

Program Committee

- Ron Sun, Rensselaer Polytechnic Institute
- Tuncer Ören, University of Ottawa
- Maarten Sierhuis, RIACS/NASA Ames Research Center
- Angelo Cangelosi, University of Plymouth
- Agnar Aamodt, Norwegian University of Science and Technology
- Peter De Souza, Hedmark University College
- Ashwin Ram, Georgia Tech

Rev Date: Feb 08, 2007

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Final Session Call - ICAI'07.pdf
Type: application/pdf
Size: 34794 bytes
Desc: not available
URL:

From marc.halbruegge at unibw.de Fri Feb 9 03:45:08 2007
From: marc.halbruegge at unibw.de (Marc Halbruegge)
Date: Fri, 09 Feb 2007 09:45:08 +0100
Subject: [ACT-R-users] Visual Perception in ACT-R
Message-ID: <45CC3494.9010509@unibw.de>

Hi!

I've a question concerning the feature scale of ACT-R's vision. In Anderson et al. (1997, full citation below) on page 446, there's an H presented that is made up of Xs, like this:

X   X
X   X
XXXXX
X   X
X   X

The caption of the figure says "ACT-R can see either the H or the Xs comprising the H, depending on how ACT-R sets its feature scale"

Now my questions:
- Is there an ACT-R model available that demonstrates this ability?
- How do I set the feature scale (not 'word' vs. 'letter' but 'H' vs. 'X')?
- Or is this not yet implemented? (in the text, the claim is less strong: "... we would want [ACT-R] to recognize either the H or the Xs comprising the H.")

Thanks for your answers!

Greetings
Marc Halbruegge

Reference: Anderson, J. R., Matessa, M., & Lebiere, C. (1997). ACT-R: A Theory of Higher Level Cognition and Its Relation to Visual Attention. Human-Computer Interaction, 12, 439-462.

--
Dipl.-Psych.
Marc Halbruegge
Human Factors Institute
Faculty of Aerospace Engineering
Bundeswehr University Munich
Werner-Heisenberg-Weg 39
D-85579 Neubiberg
Phone: +49 89 6004 3497
Fax: +49 89 6004 2564
E-Mail: marc.halbruegge at unibw.de

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL:

From db30 at andrew.cmu.edu Fri Feb 9 09:15:13 2007
From: db30 at andrew.cmu.edu (Dan Bothell)
Date: Fri, 09 Feb 2007 09:15:13 -0500
Subject: [ACT-R-users] Visual Perception in ACT-R
In-Reply-To: <45CC3494.9010509@unibw.de>
References: <45CC3494.9010509@unibw.de>
Message-ID: <2095F628AEE3FBAEC2F44638@tadpole.psy.cmu.edu>

--On Friday, February 9, 2007 9:45 AM +0100 Marc Halbruegge wrote:

> Hi!
>
> I've a question concerning the feature scale of ACT-R's vision.
> In Anderson et al. (1997, full citation below) on page 446, there's an H
> presented that is made up of Xs, like this:
>
> X   X
> X   X
> XXXXX
> X   X
> X   X
>
> The caption of the figure says "ACT-R can see either the H or the Xs
> comprising the H, depending on how ACT-R sets its feature scale"
>
> Now my questions:
> - Is there an ACT-R model available that demonstrates this ability?
> - How do I set the feature scale (not 'word' vs. 'letter' but
> 'H' vs. 'X')?
> - Or is this not yet implemented? (in the text, the claim is less
> strong: "... we would want [ACT-R] to recognize either the H or the Xs
> comprising the H.")
>
> Thanks for your answers!

That paper is describing a visual attention system which is the predecessor of the current implementation, and I don't really know the details as to how one would have done such a thing (if possible) with that vision system. There is no mechanism in the ACT-R 6 vision module which would allow the model to "see" an H automatically from those individual X's.
However, it is possible to create your own features for the model's visicon and thus you could add an H to the set of features available to the model. If you gave it a value for its kind slot which was different from the normal letters (which are of kind text) then you could just specify that when making the visual-location request.

So, to find an X you would use something like this (along with any other constraints as needed):

+visual-location>
   isa visual-location
   kind text
   ...

and to find the H you would need to specify the new kind:

+visual-location>
   isa visual-location
   kind large-text
   ...

Hope that helps, and if you have questions about how to augment the visicon let me know.

Dan

From tkelley at arl.army.mil Fri Feb 9 09:46:22 2007
From: tkelley at arl.army.mil (Kelley, Troy (Civ,ARL/HRED))
Date: Fri, 9 Feb 2007 09:46:22 -0500
Subject: [ACT-R-users] Summer position available (UNCLASSIFIED)
In-Reply-To: <2095F628AEE3FBAEC2F44638@tadpole.psy.cmu.edu>
References: <45CC3494.9010509@unibw.de> <2095F628AEE3FBAEC2F44638@tadpole.psy.cmu.edu>
Message-ID: <2D30123DFDFF1046B3A9CF64B6D9AC9005C95A@ARLABML03.DS.ARL.ARMY.MIL>

Classification: UNCLASSIFIED
Caveats: NONE

Hello ACT-R and Soar users,

We have an opportunity for an intern to work here at the Army Research Laboratory this summer in Aberdeen, MD. We need a good C++ programmer, but someone familiar with ACT-R would be helpful as well. The job involves developing a robotics control architecture, called SS-RICS, which is largely based on ACT-R. The job is full time over the summer here at the lab in Aberdeen, MD and the pay is around 13 dollars an hour. We can only accept U.S. citizens. You must be willing to undergo a background check. Also, you must be considered a student while you are working at the lab, so you cannot have already graduated. Undergraduate students and graduate students are both welcome. Additionally, I would be willing to help the applicant find summer housing in the area.
Please contact me if you are interested in this Cognitive Robotics internship.

Troy D. Kelley
US Army Research Laboratory
Human Research and Engineering Directorate
AMSRD-ARL-HR-SE
Aberdeen, MD, 21005-5425
Ph: 410-278-5869
FAX: 410-278-9523

Classification: UNCLASSIFIED
Caveats: NONE

From 2006conf at gmail.com Wed Feb 14 01:56:44 2007
From: 2006conf at gmail.com (bai ye)
Date: Wed, 14 Feb 2007 14:56:44 +0800
Subject: [ACT-R-users] Call for Papers, Special Session Proposals & Sponsorship: ICNC'07-FSKD'07
Message-ID:

** Our apologies if you receive multiple copies of this announcement **

----------------------------------------------------------------------
The 3rd International Conference on Natural Computation (ICNC'07)
The 4th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD'07)
----------------------------------------------------------------------

24 - 27 August 2007, Haikou, China

*** Submission Deadline: 15 March 2007 ***

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
http://www.hainu.edu.cn/htm/icnc-fskd2007
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Call for Papers & Special Session Proposals

The joint ICNC'07-FSKD'07 will be held in Haikou, China. Haikou, the capital city of Hainan Province, is a pleasant modern city with a number of historical and cultural sights to see and to hold you for a few days before heading off to Hainan's beautiful beaches and inland villages.

ICNC'07-FSKD'07 aims to provide an international forum for scientists and researchers to present the state of the art of intelligent methods inspired by nature, including biological, linguistic, ecological, and physical systems, with applications to data mining, manufacturing, design, reliability, and more. It is an exciting and emerging interdisciplinary area in which a wide range of techniques and methods are being studied for dealing with large, complex, and dynamic problems.
Previously, the joint conferences in 2005 and 2006 each attracted over 3100 submissions from more than 30 countries. All accepted papers will appear in conference proceedings published by the IEEE and will be indexed by both EI (Compendex) and ISTP. Furthermore, extended versions of many good papers will be published in special issues of journals, Lecture Notes in Computer Science (LNCS) and Lecture Notes in Artificial Intelligence (LNAI), and will be indexed in SCI-Expanded.

In addition to regular sessions, participants are encouraged to organize special sessions on specialized topics. Each special session should have at least 4 papers. Special session organizers will solicit submissions, conduct reviews and recommend accept/reject decisions on the submitted papers. For more information, visit the conference web page or email the secretariat at nc2007 at hainu.edu.cn

Join us at this major event in scenic Hainan!

From bhanupvsr at gmail.com Thu Feb 22 13:05:15 2007
From: bhanupvsr at gmail.com (Bhanu Prasad)
Date: Thu, 22 Feb 2007 13:05:15 -0500
Subject: [ACT-R-users] IICAI-07 Call for papers
Message-ID: <621812f80702221005w4de8caccq721c3eeeececa719@mail.gmail.com>

www.iiconference.org

The 3rd Indian International Conference on Artificial Intelligence (IICAI-07) (website: http://www.iiconference.org) will be held in Pune, INDIA during December 17-19, 2007. IICAI-07 is one of the major AI events in the world. This conference focuses on all areas of AI and related fields. We invite paper submissions. Please visit the conference website for more details.

Bhanu Prasad
IICAI-07 Chair
Department of Computer and Information Sciences
Florida A&M University, Tallahassee, FL 32307, USA
Email: bhanupvsr at gmail.com
Phone: 850-412-7350

-------------- next part --------------
An HTML attachment was scrubbed...
URL: