[ACT-R-users] model validation

Erik M. Altmann ema at msu.edu
Sun Sep 14 09:31:20 EDT 2008


One technique that I think is meaningful for testing closed-form 
models (equations, like base-level learning) is to fit the model to 
participant-level data by least squares, then run an ANOVA with the 
"source" of the data (human, model) as a within-subjects factor. If 
source interacts with the contrasts of interest in the experimental 
design, then you have evidence of poor fit: the model is deviating 
from the data in a systematic way.
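
Concretely, the combined ANOVA might look something like this in 
Python (a rough sketch; statsmodels' AnovaRM does the 
repeated-measures work, and the file and column names are just 
placeholders):

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Human data: one row per subject x design cell; model data: the
    # least-squares fit evaluated at the same cells for each subject.
    human = pd.read_csv("human_cell_means.csv")   # subject, A, B, rt
    model = pd.read_csv("model_cell_means.csv")   # same layout
    human["source"] = "human"
    model["source"] = "model"
    data = pd.concat([human, model], ignore_index=True)

    # Source enters as an additional within-subjects factor; its
    # interactions with A and B index systematic misfit.
    print(AnovaRM(data, depvar="rt", subject="subject",
                  within=["A", "B", "source"]).fit())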

There's an example in Altmann (2006), but one thing I would do 
differently is to use the error term from the human data to compute 
the F statistic for the interactions with source. For example, if 
you're interested in using your model to explain an interaction of 
experimental variables A and B, then run the ANOVA on the human data 
to get the error term for the A x B interaction, then run the ANOVA 
with source as an additional within-subjects factor to get the mean 
square for the A x B x source interaction. The logic is that the 
model adds no error variance of its own, so an error term based on 
variance pooled across the human and model "conditions" is too 
conservative (the better your model fit, the smaller the error term, 
so the more likely the test is to indicate poor fit).
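
Spelling that out, with the usual repeated-measures error terms (S 
denotes subjects), the test I have in mind is

    F = MS(A x B x source) / MS(S x A x B, human data only)

with the numerator df from the A x B x source interaction and the 
denominator df from the human-only S x A x B term, rather than from 
an error term pooled over both sources.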

In general, this approach seems to make sense as long as you can 
obtain a least-squares fit of the model to participant-level data. 
This is easy enough for closed-form models, using some 
parameter-search routine like Excel Solver. In principle it should 
also be possible to do this for simulation models, I think, but it 
may not be practical, given the size of the parameter space and the 
often non-trivial time needed per simulation run.
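
For a closed-form model like base-level learning, the 
participant-level fit is only a few lines with any optimizer; here 
is a sketch in Python using scipy in place of Excel Solver (the 
activation-to-RT mapping and all the numbers are purely 
illustrative assumptions):

    import numpy as np
    from scipy.optimize import minimize

    def base_level(ages, d):
        # ACT-R base-level learning: B = ln(sum over presentations
        # of t^-d), where t is the age of each prior presentation
        return np.log(np.sum(np.asarray(ages, dtype=float) ** -d))

    def sse(params, trials, observed_rt):
        # d is the decay parameter; intercept and scale map
        # activation to RT (a hypothetical linking assumption)
        d, intercept, scale = params
        pred = [intercept - scale * base_level(ages, d)
                for ages in trials]
        return np.sum((observed_rt - np.array(pred)) ** 2)

    # One participant's data: each trial lists the ages of that
    # item's prior presentations, plus the observed RT.
    trials = [[1.0, 5.0, 20.0], [2.0, 10.0], [0.5, 3.0, 7.0, 30.0]]
    observed_rt = np.array([0.62, 0.71, 0.55])

    fit = minimize(sse, x0=[0.5, 0.8, 0.1],
                   args=(trials, observed_rt),
                   bounds=[(0.01, 2.0), (0.0, 5.0), (0.0, 5.0)],
                   method="L-BFGS-B")
    print(fit.x)   # per-participant least-squares estimates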

Along these lines, I think Niels was referring to a different 
approach, in which one runs the simulation once for each participant 
(say), then compares human to model with "source" as a 
between-subjects factor. In principle I think this could also be 
meaningful, if the simulation produces meaningful amounts of 
variance, but this means reproducing at least RT distributions as 
well as measures of central tendency. We tried this in Altmann and 
Gray (2008), but the editor wasn't convinced, so we untried it. I 
think this is basically uncharted territory.
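
If one did pursue it, the comparison would have to cover the 
distributions and not just the means; in Python terms, something 
like this (with made-up placeholder samples standing in for real 
per-trial RTs):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # One model run per human participant; lognormal samples are
    # placeholders for real per-trial RT data.
    human_rts = rng.lognormal(mean=-0.5, sigma=0.30, size=500)
    model_rts = rng.lognormal(mean=-0.5, sigma=0.20, size=500)

    # Central tendency: "source" as a between-subjects contrast.
    print(stats.ttest_ind(human_rts, model_rts))
    # Full distribution: a model that matches the mean but not the
    # variance still fails a two-sample Kolmogorov-Smirnov test.
    print(stats.ks_2samp(human_rts, model_rts))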

Seems to be a timely question. I'd welcome any feedback on these 
ideas, and particularly corrections if I've missed something.

Erik.

At 12:51 PM +0400 9/14/08, Fehmida Hussain wrote:
>Sorry for reposting; I was desperately looking for some advice on this:
>
>Hi all,
>
>I have a query regarding model validation.
>I have implemented a few ACT-R 6 models of attentional networks simulating
>experimental studies. I have human data available to validate my model. The
>human study itself uses statistics like ANOVA to determine the significance
>of the conditions and the interactions between variables.
>
>Is it sufficient (from the point of view of model validation) for me to
>just use Correl and SD to validate the model results against the human
>data, or is it better to repeat the statistical tests to show the same
>significance? What would be considered better from the point of view of
>defending my thesis?
>
>thanks
>Fehmida
>
>
>_______________________________________________
>ACT-R-users mailing list
>ACT-R-users at act-r.psy.cmu.edu
>http://act-r.psy.cmu.edu/mailman/listinfo/act-r-users


-- 
Erik M. Altmann
Department of Psychology
Michigan State University
East Lansing, MI 48824
517-353-4406 (voice)
517-353-1652 (fax)
http://www.msu.edu/~ema


