Connectionists: Best practices in model publication

Mark Orr mo2259 at columbia.edu
Mon Jan 27 21:48:39 EST 2014


Brad, 
Kathleen Carley, at CMU, has a paper on this idea (from the 1990s), suggesting the same practice. See http://www2.econ.iastate.edu/tesfatsi/EmpValid.Carley.pdf

Mark

On Jan 27, 2014, at 9:39 PM, Brad Wyble wrote:

> Dear connectionists, 
> 
> I wanted to get some feedback regarding some recent ideas concerning the publication of models, because I think that our current practices are slowing down the progress of theory.  At present, at least in many psychology journals, it is often expected that a computational modelling paper include experimental evidence in favor of a small handful of its own predictions.  While I am certainly in favor of model testing, I have come to suspect that the practice of including empirical validation in the same paper as the initial model is problematic for several reasons:
> 
> It encourages the creation of only those predictions that are easy to test with the techniques available to the modeller.
> 
> It strongly encourages the practice of running an experiment, designing a model to fit those results, and then claiming this fit as a bona fide prediction.
> 
> It encourages the practice of running a battery of experiments and reporting only those that match the model's output.
> 
> It encourages the creation of predictions that cannot fail, and are therefore less informative.
> 
> It encourages the mindset that a model is a failure if not all of its predictions are validated, when in fact we learn more from a failed prediction than from a successful one.
> 
> It makes it easier for experimentalists to ignore models, since such modelling papers are "self-contained".
> 
> I was thinking that, instead of the current practice, it should be permissible and even encouraged for a modelling paper to omit empirical validation and instead include a broader array of predictions.  Thus, instead of 3 successfully tested predictions from the PI's own lab, a model might include 10 untested predictions spanning a variety of experimental techniques.  This practice would, I suspect, lead to the development of bolder theories, stronger tests, and most importantly, tighter ties between empiricists and theoreticians.
> 
> I am certainly not advocating that modellers shouldn't test their own models, but rather that it should be permissible to publish a model without testing it first.  The testing paper could come later.
> 
> I also realize that this shift in publication expectations wouldn't prevent the problems described above, but it would at least not reward them.
> 
> I also think that modellers should make a concerted effort to target empirical journals, to increase the visibility of models.  This effort should coincide with a shift in writing style to make such models more accessible to non-modellers.
> 
> What do people think of this? If there is broad agreement, what would be the best way to communicate this desire to journal editors?
> 
> Any advice welcome!
> 
> -Brad
> 
> 
> 
> -- 
> Brad Wyble
> Assistant Professor
> Psychology Department
> Penn State University
> 
> http://wyblelab.com

