[ACT-R-users] Spatial and temporal granularity in models of perception
David Pautler
david at wooden-robot.net
Sat Aug 8 12:02:40 EDT 2009
I'm interested in how people attribute high-level descriptions such as
"chase" to the movement of simple animations such as a pair of black dots
on a white background. I'm looking for advice on how to make the input to
a cognitive model of this phenomenon similar to what a human viewer would
receive, so that the performance of the two can be compared.
For example, I have a particular animated scene I want to use, and I could
render it with arbitrarily precise positioning and sizing of the dots. I
could also choose an arbitrarily high number of frames/time slices. But it
seems that the granularity of both should be determined by some set of
just-noticeable differences (JNDs).
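To make the granularity idea concrete, here is a rough sketch of the kind of preprocessing I have in mind: snapping each dot's positions and timestamps to a JND-sized grid before the model sees them. The JND values below are placeholders for illustration, not empirically validated thresholds.

```python
# Hypothetical sketch: quantize an animation's positions and time stamps
# to a chosen spatial/temporal granularity. The JND values are placeholders.

def quantize(value, jnd):
    """Snap a continuous value to the nearest multiple of the JND."""
    return round(value / jnd) * jnd

def quantize_trajectory(frames, spatial_jnd=2.0, temporal_jnd=0.05):
    """frames: list of (t, x, y) samples for one dot.
    Returns the samples snapped to the JND grid, with consecutive
    duplicates dropped (sub-JND motion becomes invisible)."""
    out = []
    for t, x, y in frames:
        sample = (quantize(t, temporal_jnd),
                  quantize(x, spatial_jnd),
                  quantize(y, spatial_jnd))
        if not out or sample != out[-1]:
            out.append(sample)
    return out
```

The open question, of course, is what the right JND values are and whether a uniform grid is even the right abstraction.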
Beyond granularity, another problem is segmentation of motion
trajectories. I have found a few papers that propose algorithms for
segmentation, but it's not clear that they have a cognitive basis.
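As an example of the kind of algorithm I mean, one common geometric heuristic is to break a trajectory wherever the heading changes by more than some threshold angle. A minimal sketch (the threshold is arbitrary, and I make no claim that this has a cognitive basis):

```python
import math

def segment_by_heading(points, angle_threshold=math.pi / 6):
    """Split a trajectory (list of (x, y) points) wherever the heading
    changes by more than angle_threshold radians. Breakpoints are shared
    between adjacent segments. A purely geometric heuristic."""
    if len(points) < 3:
        return [list(points)]
    segments = [[points[0], points[1]]]
    prev_heading = math.atan2(points[1][1] - points[0][1],
                              points[1][0] - points[0][0])
    for prev, cur in zip(points[1:], points[2:]):
        heading = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        # Wrap the heading difference into (-pi, pi] before comparing.
        delta = abs(math.atan2(math.sin(heading - prev_heading),
                               math.cos(heading - prev_heading)))
        if delta > angle_threshold:
            segments.append([prev, cur])
        else:
            segments[-1].append(cur)
        prev_heading = heading
    return segments
```

Whether human observers actually segment at sharp heading changes, or at something more like velocity minima or goal changes, is exactly the sort of thing I'd like pointers on.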
I've dipped into the psychophysics and cognitive modeling literature
(particularly EPIC and cognitive maps), and I thought I might get some
good pointers here about whether there is much agreement on how to
handle granularity and segmentation.
Any recommendations?
Regards,
David Pautler