[ACT-R-users] Perception and Motor

Bonnie John bej at cs.cmu.edu
Wed Jul 12 18:25:30 EDT 2006


Fabio,

You can also use CogTool to mock up a user interface as an interactive 
storyboard. You can then export it to an ACT-R device model.

This is currently working for ACT-R 5, and will hopefully be working 
soon for ACT-R 6.

This is an unconventional use of the mock-up part of CogTool and you 
would probably need to correspond with my group to get it to work for 
you, but we would be willing to help.

Please see the CogTool website.
http://www.cs.cmu.edu/~bej/cogtool/

Bonnie John



Dan Bothell wrote:
> --On Wednesday, July 12, 2006 4:37 PM -0300 Fabio Gutiyama 
> <fgutiyama at gmail.com> wrote:
>
>> Greetings, ACT-R users,
>>
>> I'm new to this group and I am starting to use ACT-R for research on
>> human error in Brazil (Escola Politécnica, São Paulo University - USP).
>> One thing I'd like to know about is the perception and motor modules.
>> Is there any implementation that makes it possible for ACT-R to
>> interact not only with the listener or the experiment window but with
>> the entire environment of the operating system, seeing what is
>> presented in other programs and generating keyboard input and output
>> to any window or program, for example?
>>
>
> From the model's perspective the world is represented by what's called
> a device.  The device provides all of the ins and outs for the current
> perceptual and motor modules of ACT-R.  You can find some general
> information on the device for ACT-R 6 in the framework-API.doc document
> in the ACT-R 6 docs directory and more detailed information at:
>
> <http://chil.rice.edu/projects/RPM/docs/index.html>
>
> Note however that the web site describes the ACT-R 5 code and may
> not always match with ACT-R 6 (work is currently ongoing to update
> the documentation for ACT-R 6).
>
> So, your question basically comes down to whether or not there is
> a device that supports the type of access you desire, and the
> answer is no.  The devices included with ACT-R don't do that, but
> there are a couple of possibilities.
>
> First, it is possible to add new devices.  So, that type of
> interaction is certainly achievable, but one would have to do the
> work necessary to create an appropriate device for ACT-R.
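>
> For illustration, a minimal custom device for ACT-R 6 might look
> something like the sketch below.  The generic functions are the ones
> named in the ACT-R 6 device interface; the class, the hard-coded
> "see one text item" logic, and the keypress handler that just prints
> are hypothetical stubs standing in for whatever external-access code
> you would actually write.
>
>    ;; Hypothetical skeleton of a custom ACT-R 6 device.
>    (defclass external-device ()
>      ((items :accessor device-items :initform '((10 20)))))
>
>    ;; Tell the vision module which locations are currently visible.
>    (defmethod build-vis-locs-for ((device external-device) vis-mod)
>      (declare (ignore vis-mod))
>      (define-chunks-fct
>        (mapcar (lambda (xy)
>                  `(isa visual-location
>                        screen-x ,(first xy)
>                        screen-y ,(second xy)
>                        kind text))
>                (device-items device))))
>
>    ;; Turn an attended location into a full visual object.
>    (defmethod vis-loc-to-obj ((device external-device) vis-loc)
>      (car (define-chunks-fct
>             `((isa text value "stub" screen-pos ,vis-loc)))))
>
>    ;; Receive the model's key presses; real OS output would go here.
>    (defmethod device-handle-keypress ((device external-device) key)
>      (format t "Model pressed: ~s~%" key))
>
>    ;; Minimal mouse and speech methods the interface expects.
>    (defmethod device-move-cursor-to ((device external-device) loc)
>      (declare (ignore loc)))
>    (defmethod device-handle-click ((device external-device)))
>    (defmethod get-mouse-coordinates ((device external-device))
>      (vector 0 0))
>    (defmethod device-speak-string ((device external-device) string)
>      (declare (ignore string)))
>
>    ;; Install it and let vision process the "display":
>    ;;   (install-device (make-instance 'external-device))
>    ;;   (proc-display)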
>
> Also, the device that's provided for use with ACL (Allegro Common
> Lisp) under Windows does actually generate system-level mouse and
> keyboard actions.  They will be sent to whatever application has the
> current focus.  So, if you are using that Lisp and OS combo you can
> have the model send actions to any application, but it can't "see"
> them.
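>
> For illustration, the model side looks the same whichever device is
> installed: the manual module turns requests like the production below
> into device actions.  The chunk-type typing and its key slot are
> hypothetical here; press-key is the standard manual request.
>
>    (p type-next-key
>       =goal>
>          isa     typing
>          key     =key
>       ?manual>
>          state   free
>    ==>
>       +manual>
>          isa     press-key
>          key     =key
>       -goal>)
>
> With the ACL device under Windows, that press-key ends up as a real
> keystroke in whatever application has the focus.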
>
> As for seeing, there was a project called SegMan being developed
> by Robert St. Amant which was looking to provide a general image
> processing system that would allow for visual information to come
> from "any" window, but I don't know many details about the current
> status of that project at this time.
>
> Hope that helps,
> Dan
>
> _______________________________________________
> ACT-R-users mailing list
> ACT-R-users at act-r.psy.cmu.edu
> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users



