Brad,

Kathleen Carley, at CMU, has a paper on this idea (from the 1990s), suggesting the same practice. See http://www2.econ.iastate.edu/tesfatsi/EmpValid.Carley.pdf

Mark

On Jan 27, 2014, at 9:39 PM, Brad Wyble wrote:

> Dear connectionists,
>
> I wanted to get some feedback on some recent ideas concerning the publication of models, because I think our current practices are slowing down the progress of theory. At present, at least in many psychology journals, it is often expected that a computational modelling paper include experimental evidence in favor of a small handful of its own predictions. While I am certainly in favor of model testing, I have come to suspect that the practice of including empirical validation in the same paper as the initial model is problematic for several reasons:
> It encourages the creation of only those predictions that are easy to test with the techniques available to the modeller.
>
> It strongly encourages a practice of running an experiment, designing a model to fit those results, and then claiming this as a bona fide prediction.
> It encourages a practice of running a battery of experiments and reporting only those that match the model's output.
>
> It encourages the creation of predictions which cannot fail, and which are therefore less informative.
> It encourages a mindset that a model is a failure unless all of its predictions are validated, when in fact we learn more from a failed prediction than from a successful one.
> It makes it easier for experimentalists to ignore models, since such modelling papers are "self-contained".
>
> I was thinking that, instead of the current practice, it should be permissible and even encouraged for a modelling paper to include not empirical validation but a broader array of predictions. Thus, instead of 3 successfully tested predictions from the PI's own lab, a model might include 10 untested predictions spanning a variety of experimental techniques. This practice would, I suspect, lead to the development of bolder theories, stronger tests, and, most importantly, tighter ties between empiricists and theoreticians.
> I am certainly not advocating that modellers shouldn't test their own models, but rather that it should be permissible to publish a model without testing it first. The testing paper could come later.
> I also realize that this shift in publication expectations wouldn't prevent the problems described above, but it would at least not reward them.
>
> I also think that modellers should make a concerted effort to target empirical journals, to increase the visibility of models. This effort should coincide with a shift in writing style that makes such models more accessible to non-modellers.
> What do people think of this? If there is broad agreement, what would be the best way to communicate this desire to journal editors?
>
> Any advice welcome!
> -Brad
>
> --
> Brad Wyble
> Assistant Professor
> Psychology Department
> Penn State University
>
> http://wyblelab.com