[ACT-R-users] ACT-R and Visual attention problem

Marc Halbrügge marc.halbruegge at gmx.de
Wed Aug 18 15:32:15 EDT 2010


Hi,

just for completeness, here's another way to input arbitrary pictures
(from a webcam for example) into ACT-R:

http://act-cv.sourceforge.net/

You'd need some background in computer vision and C++, though.

Best,
Marc

On Wednesday, 18.08.2010, at 10:24 -0400, Frank Ritter wrote:
> At 09:53 -0400 18/8/10, <db30 at andrew.cmu.edu> wrote:
> >--On Wednesday, August 18, 2010 3:42 PM +0900
> >"-chang hyun" <y3rr0r at etri.re.kr> wrote:
> >>  Dear all,
> >>
> >>  I recently started working with ACT-R for my project.
> >>  A partial goal of the project is to find an attention region when an
> >>  arbitrary picture (such as a street, subway, or school scene) is
> >>  presented on a monitor.
> >>
> >>  My question is...
> >>  1) Can a picture be input into ACT-R?
> >
> >No, there are no mechanisms built into ACT-R for processing arbitrary
> >images.
> >
> >ACT-R's interaction with the world occurs through what is called a device,
> >and it is the device which generates the features and objects which the
> >model can see.  The provided devices allow the model to see some simple
> >GUI elements (text, buttons, and lines) which are drawn using either the
> >GUI systems built into some Lisps or through a virtual GUI system built
> >into ACT-R.  The commands for using the virtual GUI are described in the
> >experiment description documents of the tutorial units.
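> >
> >For illustration, here is a minimal sketch of the virtual GUI, using
> >the command names as they appear in the ACT-R 6 tutorial units (other
> >releases may differ):
> >
> >  (defun present-trial ()
> >    ;; open an experiment window; :visible nil makes it a purely
> >    ;; virtual window that only the model can "see"
> >    (let ((window (open-exp-window "Demo" :visible nil
> >                                   :width 300 :height 300)))
> >      ;; draw one text item for the model to attend to
> >      (add-text-to-exp-window :text "X" :x 150 :y 150)
> >      ;; make the window the model's current device and have
> >      ;; the vision module process what is on it
> >      (install-device window)
> >      (proc-display)))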
> >
> >If one wants other input or different features, then it is possible to write
> >a new device for the system to provide the visual components to a model.
> >That new device is then responsible for parsing whatever external
> >representation of the world is desired and creating the chunks which the
> >vision module will use.  Documentation on creating a new device can be found
> >in the docs directory in the presentation titled "extending-actr.ppt".
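> >
> >As a very rough sketch of that idea (assuming the ACT-R 6 device
> >methods and chunk types described in extending-actr.ppt; names may
> >differ in other versions), a device that exposes a list of
> >pre-parsed (x y) regions might look like:
> >
> >  (defclass picture-device ()
> >    ((regions :accessor regions :initarg :regions)))
> >
> >  ;; called by the vision module to get the visual-location
> >  ;; chunks for everything currently visible on the device
> >  (defmethod build-vis-locs-for ((device picture-device) vis-mod)
> >    (declare (ignore vis-mod))
> >    (mapcar (lambda (r)
> >              (car (define-chunks-fct
> >                     `((isa visual-location
> >                            screen-x ,(first r)
> >                            screen-y ,(second r)
> >                            kind visual-object)))))
> >            (regions device)))
> >
> >  ;; called when the model shifts attention to a location;
> >  ;; returns the corresponding visual-object chunk
> >  (defmethod vis-loc-to-obj ((device picture-device) vis-loc)
> >    (car (define-chunks-fct
> >           `((isa visual-object screen-pos ,vis-loc)))))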
> >
> >I know that some researchers have attempted to build more general image
> >processing devices for ACT-R, but as far as I know none of those efforts
> >are currently available as working systems.
> >
> >>  2) According to tutorial unit 2, ACT-R can find an attended location.
> >>  What is the criterion for choosing the attended location?
> >>  3) Can ACT-R find an attended area in an input picture?
> >>
> >
> >Sorry, I don't really understand what you're looking for with these
> >two questions.  I suspect that given the answer to the first one they
> >are not really relevant, but if that is not the case then please
> >feel free to elaborate on what you would like to know and ask again.
> >
> >Hope that helps,
> >Dan
> 
> Further to Dan's useful comments, there are three ways this can be 
> done.  (This is reviewed in two papers:
> http://acs.ist.psu.edu/papers/ritterBJY00.pdf and 
> http://acs.ist.psu.edu/papers/bassbr95.pdf)
>    1) You model in ACT-R that it will see or has seen something.  This is 
> somewhat like doing the task in your head.
> 
>    2) You use a display built with the tools included with ACT-R (the 
> former ACT-R/PM tools, now part of ACT-R).  Both subjects and the model 
> can see this display, but you have to duplicate it.  Not trivial, but 
> possible.  The tutorials have examples, I believe.  If you can find an 
> example in the tutorials related to what you want to do, creating a 
> model becomes much easier.
> 
>    3) You get SegMan from Rob St. Amant, or recreate it.  SegMan 
> connects ACT-R to a graphic display by parsing a bitmap.  We've used 
> it, and the papers at http://acs.ist.psu.edu/papers/ that have St. Amant 
> as a co-author use it.  This would currently be more work, but 
> ultimately, I think, more satisfying when combined with ACT-R/PM.
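> 
> As a flavor of what that bitmap parsing involves, here is a toy
> sketch in plain Common Lisp (not SegMan's actual code) that groups
> identical-colored pixel runs in each row into region descriptors:
> 
>   (defun find-color-runs (bitmap)
>     "Return a list of (row start-col end-col color) runs of
>      identical pixel values in BITMAP, a 2D array of color codes."
>     (let ((runs '()))
>       (dotimes (row (array-dimension bitmap 0) (nreverse runs))
>         (let ((start 0))
>           (loop for col from 1 to (array-dimension bitmap 1)
>                 when (or (= col (array-dimension bitmap 1))
>                          (/= (aref bitmap row col)
>                              (aref bitmap row (1- col))))
>                 do (push (list row start (1- col)
>                                (aref bitmap row start))
>                          runs)
>                    (setf start col))))))
> 
> Each run (or merged group of runs) would then become one
> visual-location chunk in a device like the one Dan describes.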
> 
> 
> cheers,
> 
> Frank
> _______________________________________________
> ACT-R-users mailing list
> ACT-R-users at act-r.psy.cmu.edu
> http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users
