[ACT-R-users] Vision: estimate distances and pattern extrapolation?

db30 at andrew.cmu.edu
Fri Mar 8 17:00:10 EST 2013


I can't help with the theoretical aspect of the issue, but attached is
an ACT-R model that can do a simplified version of that task (there are
two dots and it attends where a third would be).  The trick here (as is
often the case) is to write code to do the calculation you need.  This
model, however, doesn't call that code with a !eval!.  Instead, it
attributes that capability to the imaginal module, using the
imaginal-action buffer to make the call and get the result.
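
For what it's worth, the core of that calculation is just vector
extrapolation from the two known coordinates.  A minimal sketch of such
a helper (the name and exact form here are illustrative, not
necessarily what the attached model uses):

  ;; Illustrative helper: given the pixel coordinates of two dots,
  ;; return the coordinates of a third collinear point that extends
  ;; the pattern by the same step.
  (defun extrapolate-third-point (x1 y1 x2 y2)
    (values (+ x2 (- x2 x1))     ; repeat the x step once more
            (+ y2 (- y2 y1))))   ; repeat the y step once more

Calling something like that from an imaginal-action request, instead of
a !eval! in a production, is what attributes the computation (and its
cost) to the imaginal module.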

Of course, there are other ways to approach this as well, and by defining
a custom device the vision module could be given additional information
that might make the task easier to perform.  For example, if it could
"see" the line through the points or attend to the space between them,
that might provide a better way to determine the other point.
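
To give the flavor of that second option, here is a sketch only,
assuming the ACT-R 6 device interface; my-dot-display and the fixed
coordinates are made up for illustration, and a real device would
compute the extra feature from the actual dot positions:

  (defclass my-dot-display () ())

  ;; Return the visual-location features the model can "see".
  (defmethod build-vis-locs-for ((device my-dot-display) vis-mod)
    (declare (ignore vis-mod))
    (define-chunks-fct
     '((isa visual-location kind oval screen-x 50  screen-y 100)
       (isa visual-location kind oval screen-x 150 screen-y 100)
       ;; the extra information: a feature placed where the third
       ;; dot would fall, so the model can simply attend to it
       (isa visual-location kind oval screen-x 250 screen-y 100))))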

Hope that helps,
Dan

--On Friday, March 08, 2013 2:54 PM -0500 David Reitter <reitter at psu.edu> wrote:

> All,
>
> I wonder if you have some ideas on models that could describe pattern
> recognition or implicit distance estimation.
>
> I am looking at an experiment that requires subjects to estimate the
> distance between two or more visible dots and extrapolate along the line
> between them to foveate on a spot.  Alternatively, one could think of it
> as pre-attentive processing: recognizing the dots and extrapolating the
> pattern in one direction (and foveating on that spot):
>
>
>
> 	. 		.		.		X
>
>
> (Dots . are shown, and X is where I want to foveate, without anything being
> shown there.)
>
>
> It seems that the standard vision module does not give me the angle or
> distance between two screen locations (or finsts), although I could of course
> calculate that if I had the coordinates.  The precision of the estimates is
> unclear, though.  Referring to the ACT-R 6 manual, I don't see how I would
> get coordinates or estimate distance.
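>
> (To make concrete the calculation I have in mind, assuming I can read
> the screen-x/screen-y slots of the two location chunks; loc-distance
> is just an illustrative name:)
>
>   (defun loc-distance (loc-a loc-b)
>     ;; pixel distance between two visual-location chunks
>     (let ((dx (- (chunk-slot-value-fct loc-a 'screen-x)
>                  (chunk-slot-value-fct loc-b 'screen-x)))
>           (dy (- (chunk-slot-value-fct loc-a 'screen-y)
>                  (chunk-slot-value-fct loc-b 'screen-y))))
>       (sqrt (+ (* dx dx) (* dy dy)))))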
>
> As for the saccadic movement, EMMA would be a good reference point: "Given a
> saccade to a particular object, the model assumes that the landing point
> follows a Gaussian distribution around the center of the object. (...)"
> (Salvucci, 2001)  - Is this assumption still state of the art?
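>
> (As a sketch of that assumption, one could sample each coordinate of
> the landing point from a Gaussian around the object's center; the sd
> parameter here is free, not EMMA's actual value:)
>
>   (defun gaussian-sample (mean sd)
>     ;; Box-Muller transform for a normally distributed sample
>     (+ mean (* sd
>                (sqrt (* -2.0 (log (- 1.0 (random 1.0)))))
>                (cos (* 2.0 pi (random 1.0))))))
>
>   (defun sample-landing-point (cx cy sd)
>     (values (gaussian-sample cx sd)
>             (gaussian-sample cy sd)))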
>
> (I don't care much about timing in my model.)
>
> There are models of many visual tasks out there (reading, object
> recognition/WHAT system, eye-movement), but what models explain aspects of
> pattern recognition or at least distance estimation?
>
> Thanks for your input.
>
>
> ====
>
> Some related literature:
>
> Halverson, An "Active Vision" Computational Model Of Visual Search
> http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA518836
>
> Oleksiak et al, Distance Estimation Is Influenced by Encoding Conditions
> http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2847905/
>
> Salvucci, An integrated model of eye movements and visual encoding [EMMA]
> http://www.sciencedirect.com/science/article/pii/S1389041700000152
> [and various preceding work listed there]
>
>
>
> --
> Dr. David Reitter
> Assistant Professor of Information Sciences and Technology
> Penn State University
> http://www.david-reitter.com
>
>
>
> _______________________________________________
> ACT-R-users mailing list
> ACT-R-users at act-r.psy.cmu.edu
> https://mailman.srv.cs.cmu.edu/mailman/listinfo/act-r-users



-------------- next part --------------
A non-text attachment was scrubbed...
Name: find-p3.lisp
Type: application/octet-stream
Size: 2381 bytes
Desc: not available
URL: <http://mailman.srv.cs.cmu.edu/pipermail/act-r-users/attachments/20130308/6370fb40/attachment.obj>

