Connectionists: Best practices in model publication

Carson Chow ccchow at pitt.edu
Tue Jan 28 11:30:24 EST 2014


Hi Brad,

Philip Anderson, winner of the Nobel Prize in Physics, once wrote that theory 
and experimental results should never appear in the same paper. His reasoning 
was that this protects the experiment: if the theory turns out to be wrong (as 
is often the case), people tend to forget about the data along with it.

Carson



On 1/28/14 8:25 AM, Brad Wyble wrote:
> Thanks Randal, that's a great suggestion.  I'll ask my colleagues in 
> physics for their perspective as well.
>
> -Brad
>
>
>
>
> On Mon, Jan 27, 2014 at 11:54 PM, Randal Koene 
> <randal.a.koene at gmail.com> wrote:
>
>     Hi Brad,
>     This reminds me of theoretical physics, where proposed models are
>     expounded in papers, often without the ability to immediately
>     carry out empirical tests of all the predictions. Subsequently,
>     experiments are often designed to compare and contrast different
>     models.
>     Perhaps a way to advance this is indeed to make the analogy with
>     physics?
>     Cheers,
>     Randal
>
>     Dr. Randal A. Koene
>     Randal.A.Koene at gmail.com - Randal.A.Koene at carboncopies.org
>     http://randalkoene.com - http://carboncopies.org
>
>
>     On Mon, Jan 27, 2014 at 8:29 PM, Brad Wyble <bwyble at gmail.com> wrote:
>
>         Thank you Mark, I hadn't seen this paper.  She includes this
>         other point that should have been in my list:
>
>         "From a practical point of view, as noted the time required to
>         build
>         and analyze a computational model is quite substantial and
>         validation may
>         require teams. To delay model presentation until validation
>         has occurred
>         retards the development of the scientific field. "  ----Carley
>         (1999)
>
>         And here is the citation for this paper:
>         Carley, Kathleen M. (1999). Validating Computational Models.
>         CASOS Working Paper, Carnegie Mellon University.
>
>         -Brad
>
>
>
>
>         On Mon, Jan 27, 2014 at 9:48 PM, Mark Orr <mo2259 at columbia.edu> wrote:
>
>             Brad,
>             Kathleen Carley, at CMU, has a paper on this idea (from
>             the 1990s), suggesting the same practice. See
>             http://www2.econ.iastate.edu/tesfatsi/EmpValid.Carley.pdf
>
>             Mark
>
>             On Jan 27, 2014, at 9:39 PM, Brad Wyble wrote:
>
>>             Dear connectionists,
>>
>>             I wanted to get some feedback on some recent ideas
>>             concerning the publication of models, because I think
>>             our current practices are slowing the progress of
>>             theory.  At present, at least in many psychology
>>             journals, a computational modelling paper is often
>>             expected to include experimental evidence in favor of a
>>             small handful of its own predictions.  While I am
>>             certainly in favor of model testing, I have come to
>>             suspect that the practice of including empirical
>>             validation in the same paper as the initial model is
>>             problematic for several reasons:
>>
>>             It encourages the creation of only those predictions
>>             that are easy to test with the techniques available to
>>             the modeller.
>>
>>             It strongly encourages a practice of running an
>>             experiment, designing a model to fit those results, and
>>             then claiming the fit as a bona fide prediction.
>>
>>             It encourages a practice of running a battery of
>>             experiments and reporting only those that match the
>>             model's output.
>>
>>             It encourages the creation of predictions that cannot
>>             fail and are therefore less informative.
>>
>>             It encourages a mindset that a model is a failure if not
>>             all of its predictions are validated, when in fact we
>>             often learn more from a failed prediction than from a
>>             successful one.
>>
>>             It makes it easier for experimentalists to ignore
>>             models, since such modelling papers are "self-contained".
>>
>>             I was thinking that, instead of the current practice, it
>>             should be permissible and even encouraged for a
>>             modelling paper to omit empirical validation and instead
>>             include a broader array of predictions.  Thus, instead
>>             of 3 successfully tested predictions from the PI's own
>>             lab, a model might offer 10 untested predictions
>>             spanning a variety of experimental techniques. This
>>             practice would, I suspect, lead to the development of
>>             bolder theories, stronger tests, and, most importantly,
>>             tighter ties between empiricists and theoreticians.
>>
>>             I am certainly not advocating that modellers shouldn't
>>             test their own models, but rather that it should be
>>             permissible to publish a model without testing it first.
>>             The testing paper could come later.
>>
>>             I also realize that this shift in publication
>>             expectations wouldn't prevent the problems described
>>             above, but it would at least not reward them.
>>
>>             I also think that modellers should make a concerted
>>             effort to target empirical journals to increase the
>>             visibility of models.  This effort should coincide with
>>             a shift in writing style to make such models more
>>             accessible to non-modellers.
>>
>>             What do people think of this? If there is broad
>>             agreement, what would be the best way to communicate this
>>             desire to journal editors?
>>
>>             Any advice welcome!
>>
>>             -Brad
>>
>>
>>
>>             -- 
>>             Brad Wyble
>>             Assistant Professor
>>             Psychology Department
>>             Penn State University
>>
>>             http://wyblelab.com
>
>
>
>
>         -- 
>         Brad Wyble
>         Assistant Professor
>         Psychology Department
>         Penn State University
>
>         http://wyblelab.com
>
>
>
>
>
> -- 
> Brad Wyble
> Assistant Professor
> Psychology Department
> Penn State University
>
> http://wyblelab.com
