[ACT-R-users] Perception and Motor

Chipman, Susan CHIPMAS at ONR.NAVY.MIL
Thu Jul 13 10:44:25 EDT 2006


     Since no one else has yet said so, I will tell you that the perceptual and motor components of ACT-R are largely borrowed from the work of David Kieras and David Meyer at the University of Michigan, whose EPIC architecture emphasized these components and demonstrated their importance to the cognitive community.  Apparently, however, the ACT-R implementation is still not a complete copy of what is in EPIC, so you may want to contact David Kieras as well.

       In addition, you may be interested in modeling of psychological (3-D) space.  Chris Schunn has had a grant from ONR focused on that capability, although the work is implemented in Java, a possible complication.  People at an Air Force lab are also working on this problem now (see the schedule for the ACT-R workshop).

 

Susan F. Chipman, Ph.D.

ONR Code 342

875 N. Randolph Street

Arlington, VA 22217-5660

phone:  703-696-4318

fax:  703-696-1212

 

-----Original Message-----
From: act-r-users-bounces at act-r.psy.cmu.edu [mailto:act-r-users-bounces at act-r.psy.cmu.edu] On Behalf Of Fabio Gutiyama
Sent: Thursday, July 13, 2006 9:08 AM
To: Dan Bothell; act-r-users at act-r.psy.cmu.edu
Subject: Re: [ACT-R-users] Perception and Motor

 

Thank you for your answer, it was extremely helpful.

	Also, the device that's provided for use with ACL (Allegro Common
	Lisp) under Windows does actually generate system-level mouse and
	keyboard actions.  They will be sent to whatever application has the 
	current focus.  So, if you are using that Lisp and OS combo you can
	have the model send actions to any application, but it can't "see"
	them.

 

That's interesting...
But I noticed that ACT-R sends the mouse and keyboard actions to its own experiment window, regardless of whether an external application has the focus... (I'm using ACT-R 5)... 
Could you tell me how I can define the outputs to reach external applications, or where I can find information about this possibility?

Thank you.
Fábio.



On 7/12/06, Dan Bothell <db30 at andrew.cmu.edu> wrote:



--On Wednesday, July 12, 2006 4:37 PM -0300 Fabio Gutiyama
<fgutiyama at gmail.com> wrote:

> Greetings, ACT-R users,
>
> I'm new to this group and I am starting to use ACT-R for research on
> human error in Brazil (Escola Politécnica, São Paulo University - USP).
> One thing I'd like to know about is the perception and motor modules.
> Is there any implementation that opens the possibility for ACT-R to
> interact not only with the listener or the experiment window but with
> the entire environment of the operating system, seeing what is presented
> in other programs and generating keyboard input/output to windows or any
> program, for example?
>

From the model's perspective the world is represented by what's called
a device.  The device provides all of the ins and outs for the current
perceptual and motor modules of ACT-R.  You can find some general 
information on the device for ACT-R 6 in the framework-API.doc document
in the ACT-R 6 docs directory and more detailed information at:

<http://chil.rice.edu/projects/RPM/docs/index.html>

Note however that the web site describes the ACT-R 5 code and may
not always match with ACT-R 6 (work is currently ongoing to update
the documentation for ACT-R 6).

So, your question basically comes down to whether or not there is
a device that supports the type of access you desire, and the
answer is no.  The devices included with ACT-R don't do that, but
there are a couple of possibilities.

First, it is possible to add new devices.  So, it's not impossible 
to have that type of interaction, but one would have to do the
work necessary to create the appropriate device for ACT-R.
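To give a sense of what that work involves: the ACT-R 6 device interface is a set of generic functions specialized on a device object, as described in the framework documentation mentioned above. The following is only a minimal, hypothetical sketch of such a device; the method names follow the ACT-R 6 device interface, while `capture-screen-text`, `send-system-keypress`, and `send-system-mouse-move` are placeholder helpers standing in for OS-specific code one would have to write:

```lisp
;; Hypothetical device giving a model access to an external window.
;; The defmethod names follow the ACT-R 6 device interface; the three
;; helper functions marked as placeholders are NOT part of ACT-R and
;; would need an OS-specific implementation.

(defclass external-window-device ()
  ((window-title :initarg :window-title :accessor window-title)))

;; Vision: return a list of visual-location chunks describing what is
;; currently visible.  A real implementation would do image processing
;; (as in SegMan) or accessibility-API queries here.
(defmethod build-vis-locs-for ((device external-window-device) vis-mod)
  (declare (ignore vis-mod))
  (capture-screen-text (window-title device)))   ; placeholder

;; Motor: forward the model's key presses to the operating system.
(defmethod device-handle-keypress ((device external-window-device) key)
  (send-system-keypress key))                    ; placeholder

;; Motor: move the system cursor; xyloc holds an #(x y) position.
(defmethod device-move-cursor-to ((device external-window-device) xyloc)
  (send-system-mouse-move (aref xyloc 0)         ; placeholder
                          (aref xyloc 1)))

;; A model would then use it roughly like this:
;; (install-device (make-instance 'external-window-device
;;                                :window-title "Notepad"))
;; (proc-display)
```

Again, this is only a sketch of the shape of the interface, not working code; the hard part is filling in the placeholder helpers for a particular OS.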

Also, the device that's provided for use with ACL (Allegro Common
Lisp) under Windows does actually generate system-level mouse and 
keyboard actions.  They will be sent to whatever application has the
current focus.  So, if you are using that Lisp and OS combo you can
have the model send actions to any application, but it can't "see" 
them.

As for seeing, there was a project called SegMan being developed
by Robert St. Amant which was looking to provide a general image
processing system that would allow for visual information to come
from "any" window, but I don't know too many details or the current
status of that project at this time.

Hope that helps,
Dan

 

-------------- next part --------------
An embedded message was scrubbed...
From: "Chipman, Susan" <CHIPMAS at ONR.NAVY.MIL>
Subject: David Kieras
Date: Tue, 1 Sep 1998 12:08:08 -0400
Size: 2347
URL: <http://mailman.srv.cs.cmu.edu/pipermail/act-r-users/attachments/20060713/2ee029c2/attachment.mht>
-------------- next part --------------
An embedded message was scrubbed...
From: "Chipman, Susan" <CHIPMAS at ONR.NAVY.MIL>
Subject: Chris Schunn
Date: Wed, 23 Feb 2000 13:01:23 -0400
Size: 2173
URL: <http://mailman.srv.cs.cmu.edu/pipermail/act-r-users/attachments/20060713/2ee029c2/attachment-0001.mht>
-------------- next part --------------
An embedded message was scrubbed...
From: "Chipman, Susan" <CHIPMAS at ONR.NAVY.MIL>
Subject: Kevin Gluck
Date: Wed, 6 Nov 2002 13:19:41 -0400
Size: 2864
URL: <http://mailman.srv.cs.cmu.edu/pipermail/act-r-users/attachments/20060713/2ee029c2/attachment-0002.mht>

