Connectionists: Best practices in model publication

james bower bower at uthscsa.edu
Wed Jan 29 18:52:32 EST 2014


Interesting

With respect to the cortical column discussion we haven't yet had (whether columns exist or not), there were actually two papers published by Vernon Mountcastle in the late 1950s in which the cortical column idea was introduced.

The first included mostly the data, the second mostly the idea.

I once plotted literature citations for the two papers.  For the first 10 years, the data paper was cited much more than the theory paper.  However, 15 years out they crossed, and now the data paper is almost never cited.

So, as mentioned earlier with Marr and Albus, perhaps it is a kind of theory envy in neuroscience, but it is not at all unusual in neuroscience for exactly the opposite to be the case: the data are forgotten and the theory persists.

Perhaps all this is leading to an interesting article (or perhaps book with a series of essays) on how physics and biology are similar and different.  Anyone interested?

As some of you probably know, Thomas Kuhn explicitly excluded biology from his analysis (as a physicist, obviously, he knew that field better).  I have often heard biologists say that his analysis does not apply to biology, at which point I tell them that it does; he just didn't say very much about pre-paradigmatic science.

However, I believe that the conversation we have been having about the epistemology of physics and biology might be of considerably broader interest.  I know that this has been commented on by others before (Richard F for example), but I don't know of any volume of essays on the subject.

Could be interesting

Jim



On Jan 28, 2014, at 10:30 AM, Carson Chow <ccchow at pitt.edu> wrote:

> Hi Brad,
> 
> Philip Anderson, Nobel Prize in Physics, once wrote that theory and experimental results should never appear in the same paper. His reason was to protect the experimental results: if the theory turns out to be wrong (as is often the case), people often forget about the data as well. 
> 
> Carson
> 
> 
> 
> On 1/28/14 8:25 AM, Brad Wyble wrote:
>> Thanks Randal, that's a great suggestion.  I'll ask my colleagues in physics for their perspective as well.  
>> 
>> -Brad
>> 
>> 
>> 
>> 
>> On Mon, Jan 27, 2014 at 11:54 PM, Randal Koene <randal.a.koene at gmail.com> wrote:
>> Hi Brad,
>> This reminds me of theoretical physics, where proposed models are expounded in papers, often without the ability to immediately carry out empirical tests of all the predictions. Subsequently, experiments are often designed to compare and contrast different models.
>> Perhaps a way to advance this is indeed to make the analogy with physics?
>> Cheers,
>> Randal
>> 
>> Dr. Randal A. Koene
>> Randal.A.Koene at gmail.com - Randal.A.Koene at carboncopies.org
>> http://randalkoene.com - http://carboncopies.org
>> 
>> 
>> On Mon, Jan 27, 2014 at 8:29 PM, Brad Wyble <bwyble at gmail.com> wrote:
>> Thank you Mark, I hadn't seen this paper.  She includes this other point that should have been in my list:
>> 
>> "From a practical point of view, as noted the time required to build and analyze a computational model is quite substantial and validation may require teams. To delay model presentation until validation has occurred retards the development of the scientific field."  ----Carley (1999)
>>  
>> 
>> And here is a citation for this paper.
>> Carley, Kathleen M., 1999. Validating Computational Models. CASOS Working Paper, CMU
>> 
>> -Brad
>> 
>> 
>> 
>> 
>> On Mon, Jan 27, 2014 at 9:48 PM, Mark Orr <mo2259 at columbia.edu> wrote:
>> Brad, 
>> Kathleen Carley, at CMU, has a paper on this idea (from the 1990s), suggesting the same practice. See http://www2.econ.iastate.edu/tesfatsi/EmpValid.Carley.pdf
>> 
>> Mark
>> 
>> On Jan 27, 2014, at 9:39 PM, Brad Wyble wrote:
>> 
>>> Dear connectionists, 
>>> 
>>> I wanted to get some feedback regarding some recent ideas concerning the publication of models, because I think that our current practices are slowing down the progress of theory.  At present, at least in many psychology journals, it is often expected that a computational modelling paper include experimental evidence in favor of a small handful of its own predictions.  While I am certainly in favor of model testing, I have come to suspect that the practice of including empirical validation in the same paper as the initial model is problematic for several reasons:
>>> 
>>> It encourages the creation only of predictions that are easy to test with the techniques available to the modeller.
>>> 
>>> It strongly encourages a practice of running an experiment, designing a model to fit those results, and then claiming this as a bona fide prediction.  
>>> 
>>> It encourages a practice of running a battery of experiments and reporting only those that match the model's output. 
>>> 
>>> It encourages the creation of predictions which cannot fail, and which are therefore less informative.
>>> 
>>> It encourages a mindset that a model is a failure unless all of its predictions are validated, when in fact we often learn more from a failed prediction than from a successful one.
>>> 
>>> It makes it easier for experimentalists to ignore models, since such modelling papers are "self contained". 
>>> 
>>> I was thinking that, instead of the current practice, it should be permissible and even encouraged that a modelling paper should not include empirical validation, but instead include a broader array of predictions.  Thus instead of 3 successfully tested predictions from the PI's own lab, a model might include 10 untested predictions for a variety of different experimental techniques. This practice will, I suspect, lead to the development of bolder theories, stronger tests, and most importantly, tighter ties between empiricists and theoreticians.    
>>> 
>>> I am certainly not advocating that modellers shouldn't test their own models, but rather that it should be permissible to publish a model without testing it first. The testing paper could come later.  
>>> 
>>> I also realize that this shift in publication expectations  wouldn't prevent the problems described above, but it would at least not reward them.  
>>> 
>>> I also think that modellers should make a concerted effort to target empirical journals to increase the visibility of models.  This effort should coincide with a shift in writing style to make such models more accessible to non modellers.
>>> 
>>> What do people think of this? If there is broad agreement, what would be the best way to communicate this desire to journal editors?
>>> 
>>> Any advice welcome!
>>> 
>>> -Brad
>>> 
>>> 
>>> 
>>> -- 
>>> Brad Wyble
>>> Assistant Professor
>>> Psychology Department
>>> Penn State University
>>> 
>>> http://wyblelab.com
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 

 

 

Dr. James M. Bower Ph.D.

Professor of Computational Neurobiology

Barshop Institute for Longevity and Aging Studies.

15355 Lambda Drive

University of Texas Health Science Center 

San Antonio, Texas  78245

 

Phone:  210 382 0553

Email: bower at uthscsa.edu

Web: http://www.bower-lab.org

twitter: superid101

linkedin: Jim Bower

 


 

