[ACT-R-users] Vision: estimate distances and pattern extrapolation?
Dario Salvucci
salvucci at drexel.edu
Mon Mar 11 22:31:17 EDT 2013
David,
As far as I know, the Gaussian assumption in EMMA is probably still a reasonable one. Either way, though, I'm not sure it would address your extrapolation problem. I suppose the timing of a saccade could be used as an indicator of distance (though that would require a very fast clock, probably faster than ACT-R's timing module allows). As Dan suggests, calculating the distance directly might be the best option, with the calculation serving as a surrogate for some other process (like past learning of distances) that is not explicitly represented in the model.
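For concreteness, here is one shape that surrogate could take: a minimal sketch in plain Common Lisp (not ACT-R API), computing the true Euclidean distance and perturbing it with Gaussian noise whose spread grows with distance. The Weber-like noise model and its 0.1 coefficient are placeholder assumptions for illustration only.

    ;; One draw from a zero-mean Gaussian with standard deviation SD,
    ;; via the Box-Muller transform.
    (defun gaussian (sd)
      (* sd (sqrt (* -2.0 (log (- 1.0 (random 1.0)))))
         (cos (* 2.0 pi (random 1.0)))))

    ;; Surrogate distance estimate: true distance plus noise that
    ;; scales with distance (Weber-like; the 0.1 is a placeholder,
    ;; not a fitted value).
    (defun surrogate-distance (x1 y1 x2 y2 &key (weber 0.1))
      (let ((d (sqrt (+ (expt (- x2 x1) 2) (expt (- y2 y1) 2)))))
        (max 0.0 (+ d (gaussian (* weber d))))))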
Good luck,
Dario
On Mar 8, 2013, at 2:54 PM, David Reitter <reitter at psu.edu> wrote:
> All,
>
> I wonder if you have some ideas on models that could describe pattern recognition or implicit distance estimation.
>
> I am looking at an experiment that requires subjects to estimate the distance between two or more visible dots and extrapolate along the line between them to foveate on a spot.
> Alternatively, one could think of it as pre-attentive processing, recognizing the dots and extrapolating the pattern in one direction (and foveating on that spot):
>
>
>
> . . . X
>
>
> (The dots are shown; the X marks where I want to foveate, without anything being shown there.)
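>
> To make the extrapolation concrete, here is a back-of-the-envelope sketch in plain Common Lisp, assuming roughly evenly spaced, collinear dots (the coordinates below are made up):
>
>     ;; Extrapolate one step past the last dot, using the mean
>     ;; inter-dot vector as the step.  Points are (x . y) conses,
>     ;; ordered along the line.
>     (defun extrapolate (points)
>       (let* ((n  (length points))
>              (p0 (first points))
>              (pn (car (last points)))
>              (dx (/ (- (car pn) (car p0)) (1- n)))
>              (dy (/ (- (cdr pn) (cdr p0)) (1- n))))
>         (cons (+ (car pn) dx) (+ (cdr pn) dy))))
>
>     ;; (extrapolate '((100 . 200) (120 . 200) (140 . 200)))
>     ;;   => (160 . 200)  ; the spot marked X above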
>
>
> It seems that the standard vision module does not give me the angle or distance between two screen locations (or finsts), although I could of course calculate those if I had the coordinates. What precision such estimates should have is unclear, though. From the ACT-R 6 manual, I don't see how I would get the coordinates or estimate the distance.
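>
> For reference, the calculation I have in mind once I do have coordinates is trivial (plain Common Lisp; getting the coordinates out of the vision module is the part I don't see support for):
>
>     ;; Distance and direction between two screen locations,
>     ;; given raw x/y coordinates (pixels).
>     (defun loc-distance (x1 y1 x2 y2)
>       (sqrt (+ (expt (- x2 x1) 2) (expt (- y2 y1) 2))))
>
>     ;; Angle in radians, measured from the positive x-axis.
>     (defun loc-angle (x1 y1 x2 y2)
>       (atan (- y2 y1) (- x2 x1)))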
>
> As for the saccadic movement, EMMA would be a good reference point: "Given a saccade to a particular object, the model assumes that the landing point follows a Gaussian distribution around the center of the object. (...)" (Salvucci, 2001) - Is this assumption still state of the art?
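>
> Whatever its status, the assumption is easy to state operationally. A quick sketch in plain Common Lisp, where the standard deviation is a free placeholder parameter rather than EMMA's fitted value:
>
>     ;; Sample a saccade landing point: Gaussian scatter around the
>     ;; target's center (CX, CY).  SD is a placeholder parameter.
>     (defun landing-point (cx cy &key (sd 1.0))
>       (flet ((gauss (s)
>                ;; Box-Muller: one draw from N(0, s)
>                (* s (sqrt (* -2.0 (log (- 1.0 (random 1.0)))))
>                   (cos (* 2.0 pi (random 1.0))))))
>         (cons (+ cx (gauss sd)) (+ cy (gauss sd)))))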
>
> (I don't care much about timing in my model.)
>
> There are models of many visual tasks out there (reading, object recognition / the WHAT system, eye-movement control), but which models explain aspects of pattern recognition, or at least distance estimation?
>
> Thanks for your input.
>
>
> ====
>
> Some related literature:
>
> Halverson, An “Active Vision” Computational Model of Visual Search
> http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA518836
>
> Oleksiak et al., Distance Estimation Is Influenced by Encoding Conditions
> http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2847905/
>
> Salvucci, An Integrated Model of Eye Movements and Visual Encoding [EMMA]
> http://www.sciencedirect.com/science/article/pii/S1389041700000152
> [and various preceding work listed there]
>
>
>
> --
> Dr. David Reitter
> Assistant Professor of Information Sciences and Technology
> Penn State University
> http://www.david-reitter.com
>
>
>
______________________________________________________________
Dario Salvucci, Ph.D.
Professor & Associate Department Head for Undergraduate Affairs
Department of Computer Science
Drexel University
http://www.cs.drexel.edu/~salvucci/