Connectionists: Best practices in model publication

Levine, Daniel S levine at uta.edu
Mon Jan 27 23:51:15 EST 2014


Brad,

As a resident modeler within a psychology department (though my students run behavioral experiments), I am sensitive to this issue and heartily agree with you.  As we become a mature science, there will need to be an acceptance of theory having a life of its own and being a roughly equal partner with experiment, as it is in physics.  There is often a long time lag between a successful simulation and setting up and successfully running an experiment to test its predictions, and that lag shouldn't slow down the publication of the theory itself.  After all, there are mountains of existing data in the literature that need to be understood in the context of a sound theory, and a published theory can suggest experimental tests to other researchers who read it.


Best,

Dan Levine
________________________________________
From: Connectionists [connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of Brad Wyble [bwyble at gmail.com]
Sent: Monday, January 27, 2014 8:39 PM
To: Connectionists
Subject: Connectionists: Best practices in model publication

Dear connectionists,

I wanted to get some feedback regarding some recent ideas concerning the publication of models, because I think that our current practices are slowing down the progress of theory.  At present, at least in many psychology journals, it is often expected that a computational modelling paper include experimental evidence in favor of a small handful of its own predictions.  While I am certainly in favor of model testing, I have come to suspect that the practice of including empirical validation in the same paper as the initial model is problematic for several reasons:

It encourages the creation of only those predictions that are easy to test with the techniques available to the modeller.

It strongly encourages a practice of running an experiment, designing a model to fit those results, and then claiming this as a bona fide prediction.

It encourages a practice of running a battery of experiments and reporting only those that match the model's output.

It encourages the creation of predictions that cannot fail and are therefore less informative.

It encourages a mindset that a model is a failure unless all of its predictions are validated, when in fact we learn more from a failed prediction than from a successful one.

It makes it easier for experimentalists to ignore models, since such modelling papers are "self-contained".

I was thinking that, instead of the current practice, it should be permissible and even encouraged for a modelling paper to include not empirical validation but a broader array of predictions.  Thus, instead of 3 successfully tested predictions from the PI's own lab, a model might include 10 untested predictions spanning a variety of experimental techniques. This practice will, I suspect, lead to the development of bolder theories, stronger tests, and, most importantly, tighter ties between empiricists and theoreticians.

I am certainly not advocating that modellers shouldn't test their own models, but rather that it should be permissible to publish a model without testing it first. The testing paper could come later.

I also realize that this shift in publication expectations wouldn't prevent the problems described above, but it would at least not reward them.

I also think that modellers should make a concerted effort to target empirical journals to increase the visibility of models.  This effort should coincide with a shift in writing style to make such models more accessible to non-modellers.

What do people think of this? If there is broad agreement, what would be the best way to communicate this desire to journal editors?

Any advice welcome!

-Brad



--
Brad Wyble
Assistant Professor
Psychology Department
Penn State University

http://wyblelab.com
