Connectionists: Brain-like computing fanfare and big data fanfare

james bower bower at uthscsa.edu
Mon Jan 27 12:55:25 EST 2014


Good points, Carson.

However, I think one problem with this conversation is that it is (ironically, given the number of engineers here) focused on outcomes rather than technology.

25 years ago, we introduced a modeling system (GENESIS) specifically with the intent of starting to build an infrastructure as a community (it was one of the first open-source modeling platforms out there), so that people could share models, leading to community models, and start to use the models (rather than storytelling) as a way to communicate and collaborate.  (Remember the Rochester Connectionist Simulator - similar idea.)

The paper I linked to in my first posting describes the process that led to the establishment of what I would claim is, at this point in time, the only community single-cell model in computational neuroscience.

Several years ago we wrote several proposals to NIH and NSF about making explicit what was always implicit: building a new form of journal around GENESIS - in particular GENESIS version 3 - that would base publications on models rather than on stories with pictures.

Looking back to Newton's time, the invention of the scientific journal, and before that the invention of the printing press, drove huge change not only in science but also in society as a whole.

However, as you say, physics is the study of simple things, where, in principle (and certainly in those days), you could publish equations at a sufficient level of detail in that form of journal so that others knew what you were talking about and could contribute.

As I said earlier in this thread, closed-form solutions, no matter how attractive, aren't likely to be able to represent the complexities of the nervous system, which instead will depend (and has depended) on numerical simulations.

It is obviously absurd to take a digital entity (such as a model) and convert it into a journal article built around pictures in order to publish it in an e-journal.

Anyway, much more could be said about that - but the point is that those GENESIS grants were rejected, at least in part, because the study sections were dominated by those who believe that we shouldn't be modeling things at this level of complexity.

This point of view is now fully represented in the versions of the summer courses derived from the one I started with Christof Koch many years ago, in the CNS meeting I started, and in the Journal of Computational Neuroscience I started as well (intended as a vehicle to actually start publishing models, had the publishers and the editorial board understood why this was important).

Until one week ago (and for the last 3 years), I was the co-coordinator of the Neuroscience Group of NIH’s interdisciplinary multi-scale modeling initiative (IMAG).  Sitting on the review committee for that program, I once again found myself defending biologically based modeling - to no avail.

So, in fact, I believe the solution to this problem has to do with developing the right technology for collaboration and communication - which we now have with the Internet - combined with a commitment by funding agencies to support its development and use.  Complex compartmental models have no competitive chance in a world where a theory can be boiled down and published in some form of an energy functional that can be understood by everyone on this list.
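
To be concrete about what I mean by a theory "everyone on this list can understand": the canonical example of such a compact formulation - written here in its standard textbook form, not taken from anyone's particular paper - is the Hopfield energy function,

    E(s) = -\frac{1}{2} \sum_{i \neq j} w_{ij} s_i s_j - \sum_i \theta_i s_i

One line that anyone here can hold in their head, and whose minima are the whole story the theory tells about the network's behavior.  Nothing remotely that compressible exists, or in my view can exist, for a realistic compartmental model of a Purkinje cell.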

We have a communication problem: the right technology is necessarily complex, and because that technology is not yet widely used, it is hard to get larger numbers of people to understand the value of the approach.

Instead of all this philosophy, I would MUCH RATHER have someone on this list actually read the paper I linked on 40 years of realistically modeling the cerebellar Purkinje cell - and then not have to use analogies from well-understood physics (and engineering) to discuss a question rooted in Biology.  Let's use the Purkinje cell as a for-instance and talk about science rather than philosophy.  The model is even there for you to pick apart and understand (one reason it became the first cellular community model is that we actually provided full access to it on the Internet).  So how about it, guys?
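
And for anyone to whom "compartmental model" is itself an abstraction, here is a deliberately minimal sketch (in Python, with made-up parameter values) of the kind of object we are talking about - two passive compartments, forward-Euler integration, and none of the voltage-dependent channels that the real Purkinje cell model is actually about:

    # Two passive compartments ("soma" and "dendrite"), forward Euler.
    # Illustrative only: invented parameters, no voltage-dependent channels.
    import numpy as np

    C_m  = 1.0                          # membrane capacitance per compartment
    g_L  = 0.1                          # leak conductance
    E_L  = -65.0                        # leak reversal potential (mV)
    g_ax = 0.5                          # axial coupling between the compartments
    dt   = 0.01                         # integration time step (ms)

    V     = np.array([-65.0, -65.0])    # [soma, dendrite] membrane potential (mV)
    I_inj = np.array([0.5, 0.0])        # current injected into the soma only

    for _ in range(int(100.0 / dt)):    # simulate 100 ms
        I_leak  = g_L * (E_L - V)           # leak current in each compartment
        I_axial = g_ax * (V[::-1] - V)      # current flowing in from the neighbor
        V = V + dt * (I_leak + I_axial + I_inj) / C_m

    print("final membrane potentials (mV):", V)

The real model replaces that single leak term with Hodgkin-Huxley-style voltage-dependent conductances in every compartment, and uses on the order of a thousand compartments taken from the reconstructed anatomy - which is exactly the detail the simplified treatments throw away.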

Anyway, returning to neuroscience: the whole field is dominated by storytellers spinning yarns and myths, who have no real interest in having anyone else have the tools to check their stories (i.e., by putting them in the common language of mathematics).  Sadly, they continue to be able to convince the government to fund storytelling and the construction of the kinds of tools that facilitate storytelling (the BRAIN project and neuroimaging).

I also spent 10 years as part of the original Human Brain Project at NIH - which was supposed to foster collaboration technology and didn't, for the same reason.


Foolishly, someone last year asked me to speak to a bunch of graduate students and postdocs about how to have a successful career in science.  Foolish, of course, because I am not sure that mine applies.  Doubly foolish because, as I told them, they probably don't want graduate and postdoctoral students taking advice from me anyway.

They persisted, so I did:

I told them it was easy: pick a story, any story, and stick to it - and best if the story can be well understood in 1 1/2 pages or less.

One of the students asked what you should do if your data or someone else's data doesn't support your story.  I told them: bury or "smooth" your own data, and do your best to make sure that the other guy's data isn't published and their grant isn't renewed.  Also, be sure to exclude that person as a reviewer on your next papers - and NEVER reference their work in your papers.

One of the students then said, "That doesn't sound much like science."

I said: I wasn't asked to tell you how to do science; I was asked to tell you how to have a successful scientific career.

:-)

Jim



On Jan 27, 2014, at 11:14 AM, Carson Chow <carsonc at mail.nih.gov> wrote:

> I am greatly enjoying this discussion. 
> 
> The original complaint in this thread seems to be that the main problem of (computational) neuroscience is that people do not build upon the work of others enough. In physics, no one reads the original works of Newton or Einstein, etc., anymore. There is a set canon of knowledge that everyone learns from classes and textbooks. Often if you actually read the original works you'll find that what the early greats actually did and believed differs from the current understanding.  I think it's safe to say that computational neuroscience has not reached that level of maturity.  Unfortunately, it greatly impedes progress if everyone tries to redo and reinvent what has come before.  
> 
> The big question is why this is the case. This is a search problem. It could be true that one of the proposed approaches in this thread or some other existing idea is optimal, but the opportunity cost to follow it is great. How do we know it is the right one?  It is safer to just follow the path we already know. We simply don't all believe enough in any one idea for all of us to pursue it right now. It takes a massive commitment to learn any one thing, much less everything on John Weng's list. I don't know too many people who could be fully conversant in math, AI, cognitive science, neurobiology, and molecular biology.  There are only so many John von Neumanns, Norbert Wieners or Terry Taos out there. The problem actually gets worse with more interest and funding, because there will be even more people and ideas to choose from. This is a classic market failure where too many choices destroy liquidity and accurate pricing. My prediction is that we will continue to argue over these points until one or a small set of ideas finally wins out.  But who is to say that thirty years is a long time? There were almost two millennia between Ptolemy and Kepler. However, once the correct idea took hold, it was practically a blink of an eye to get from Kepler to Maxwell.  Then again, physics is so much simpler than neuroscience. In fact, my definition of physics is the field of easily model-able things. Whether or not a similar revolution will ever take place in neuroscience remains to be seen. 
> 
> ----------------------
> Carson C Chow
> LBM, NIDDK, NIH
> 
> On Jan 26, 2014, at 22:05, james bower <bower at uthscsa.edu> wrote:
> 
>> Thanks, Danny.  Funny about coincidences.
>> 
>> I almost posted earlier to the list a review I was asked to write of exactly the book you reference:
>> 
>>   23 Problems in Systems Neuroscience, edited by L. Van Hemmen and T. Sejnowski.
>> 
>> It is appended to this email - 
>> 
>> Needless to say, I couldn't agree with you more on the importance of asking the right questions - but many of the chapters in this book make clear, I believe, the fundamental underlying problem posed by having no common theoretical basis for neuroscience research.
>> 
>> 
>> Jim Bower
>> 
>> 
>> 
>> Published in The American Scientist
>> 
>> Are We Ready for Hilbert?
>> 
>> James M. Bower
>> 
>>  
>> 
>> 23 Problems in Systems Neuroscience. Edited by J. Leo van Hemmen and Terrence J. Sejnowski. xvi + 514 pp. Oxford University Press, 2006. $79.95.
>> 
>>  
>> 
>> 23 Problems in Systems Neuroscience grew out of a symposium held in Dresden in 2000 inspired by an address given by the great geometrist David Hilbert 100 years earlier. In his speech, Hilbert commemorated the start of the 20th century by delivering what is now regarded as one of the most influential mathematical expositions ever made. He outlined 23 essential problems that not only organized subsequent research in the field, but also clearly reflected Hilbert’s axiomatic approach to the further development of mathematics. Anticipating his own success, he began, “Who of us would not be glad to lift the veil behind which the future lies hidden; to cast a glance at the next advances of our science and at the secrets of its development during future centuries?”
>> 
>> I take seriously the premise represented in this new volume’s title and preface that it is intended to “serve as a source of inspirations for future explorers of the brain.” Unfortunately, if the contributors sought to exert a “Hilbertian” influence on the field by highlighting 23 of the most important problems in systems neuroscience, they have, in my opinion, failed. In failing, however, this book clearly illustrates fundamental differences between neuroscience (and biology in general) today and mathematics (and physics) in 1900.
>> 
>> Implicit in Hilbert’s approach is the necessity for some type of formal structure underlying the problems at hand, allowing other investigators to understand their natures and then collaboratively explore a general path to their solutions. Yet there is little consistency in the form of problems presented in this book. Instead, many (perhaps most) of the chapters are organized, at best, around vague questions such as, “How does the cerebral cortex work?” At worst, the authors simply recount what is, in effect, a story promoting their own point of view.
>> 
>> The very first chapter, by Gilles Laurent, is a good example of the latter. After starting with a well-worn plea for considering the results of the nonmammalian, nonvisual systems he works on, Laurent summarizes a series of experiments (many of them his own) supporting his now-well-known position regarding the importance of synchrony in neuronal coding. This chapter could have presented a balanced discussion of the important questions surrounding the nature of the neural code (as attempted in one chapter by David McAlpine and Alan R. Palmer and another by C. van Vreeswijk), or even referenced and discussed some of the recently published papers questioning his interpretations. Instead, the author chose to attempt to convince us of his own particular solution.
>> 
>> I don't mean to pick on Laurent, as his chapter takes a standard form in symposia volumes; rather, his approach illustrates the general point that much of "systems neuroscience" (and neuroscience in general) revolves around this kind of storytelling. The chapter by Bruno A. Olshausen and David J. Field makes this point explicitly, suggesting that our current "story-based" view of the function of the well-studied visual cortex depends on (1) a biased sampling of neurons, (2) a bias in the kind of stimuli we present, and (3) a bias in the kinds of theories we like to construct.
>> 
>> In fairness, several chapters do attempt to address real  problems in a concise and unbiased way. The chapter by L. F. Abbott, for example, positing, I think correctly, that the control of the flow of information in neural systems is a central (and unsolved) problem, is characteristically clear, circumscribed and open-minded. Refreshingly, Abbott’s introduction states, “In the spirit of this volume, the point of this contribution is to raise a question, not to answer it. . . . I have my prejudices, which will become obvious, but I do not want to rule out any of these as candidates, nor do I want to leave the impression that the list is complete or that the problem is in any sense solved.” Given his physics background, Abbott may actually understand enough about Hilbert’s contribution to have sought its spirit. Most chapters, however, require considerable detective work, and probably also a near-professional understanding of the field, to find anything approaching Hilbert’s enumeration of fundamental research problems.
>> 
>> In some sense I don’t think the authors are completely to blame. Although many are prominent in the field, this lack of focus on more general and well defined problems is, I believe, endemic in biology as a whole. While this may slowly be changing, the question of how and even if biology can move from a fundamentally descriptive, story-based science to one from which Hilbertian-style problems can be extracted may be THE problem in systems neuroscience. A few chapters do briefly raise this issue. For example, in their enjoyable article on synesthesia, V. S. Ramachandran and Edward M. Hubbard identify their approach as not fashionable in psychology partly because of “the lingering pernicious effect of behaviorism” and partly because “psychologists like to ape mature quantitative physics—even if the time isn’t ripe.”
>> 
>> Laurenz Wiskott, in his chapter on possible mechanisms for size and shift invariance in visual (and perhaps other) cortices, raises what may be the more fundamental question as to whether biology is even amenable to the form of quantification and explanation that has been so successful in physics:
>> 
>>  
>> 
>> “Either the brain solves all invariance problems in a similar way based on a few basic principles or it solves each invariance problem in a specific way that is different from all others. In the former case [asking] the more general question would be appropriate. . . . In the latter case, that is, if all invariance problems have their specific solution, the more general question would indeed be a set of questions and as such not appropriate to be raised and discussed here.”
>> 
>>  
>> 
>> He then moderates the dichotomy by stating diplomatically, "There is, of course, a third and most likely alternative, and that is that the truth lies somewhere between these two extremes." Thus, Wiskott leaves unanswered the fundamental question about the generality of brain mechanisms or computational algorithms. As in mathematics 100 years ago, answering basic questions in systems neuroscience is tied up in assumptions regarding appropriate methodology. For Hilbert's colleagues, this was obvious and constituted much of the debate following his address; this fundamental issue, however, is only rarely discussed in biology.
>> 
>> Indeed, I want to be careful not to give the impression that these kinds of big-picture issues are given prominence in this volume—they are not. Rather, as is typical for books generated by these kinds of symposia, many of the chapters are simply filled with the particular details of a particular subject, although several authors should be commended for at least discussing their favorite systems in several species. However, given the lack of overall coordination, one wonders what impact this volume will have.
>> 
>> One way to gauge the answer is to look for evidence that the meeting presentations influenced the other participants. As an exercise, I summarized the major points and concerns each author raised in their chapters and then checked that list against the assumptions and assertions made by the other authors writing on similar subjects. The resulting tally, I would assert, provides very little evidence that these authors attended the same meeting—or perhaps even that they are part of the same field!
>> 
>> For example, the article titled “What Is Fed Back” by Jean Bullier identifies, I think correctly, what will become a major shift in thinking about how brains are organized. As Bullier notes, there is growing evidence that the internal state of the brain has a much more profound effect on the way the brain processes sensory information than previously suspected. Yet this fundamental issue is scarcely mentioned in the other chapters, quite a few of which are firmly based on the old feed-forward “behaviorist” model of brain function. Similarly, the chapter by Olshausen and Field is followed immediately by a paper by Steven W. Zucker on visual processing that depends on many of the assumptions that Olshausen and Field call into question.
>> 
>> One hundred years ago, Hilbert’s 23 questions organized a field. The chapters in this book make pretty clear that we are still very far away from having a modern-day Hilbert or even a committee of “experts” come up with a list of 23 fundamental questions that are accepted, or perhaps even understood, by the field of neuroscience as a whole.
>> 
>> 
>> 
>>> 
>>> Asking good questions that come with well developed requirements is the starting point to good science.  At least that is what we tell our graduate students. 
>>> 
>>> .. Danny
>>> 
>>> =======================
>>> Daniel L. Silver, Ph.D.       danny.silver at acadiau.ca
>>> Professor,  Jodrey School of Computer Science,   Acadia University
>>> Office 314, Carnegie Hall,     Wolfville, NS  Canada  B4P 2R6
>>> p:902-585-1413              f:902-585-1067
>>> 
>>> 
>>> From: Geoffrey Hinton <geoffrey.hinton at gmail.com>
>>> Date: Sunday, 26 January, 2014 3:43 PM
>>> To: Brad Wyble <bwyble at gmail.com>
>>> Cc: Connectionists list <connectionists at cs.cmu.edu>
>>> Subject: Re: Connectionists: Brain-like computing fanfare and big data fanfare
>>> 
>>> I can no longer resist making one point. 
>>> 
>>> A lot of the discussion is about telling other people what they should NOT be doing. I think people should just get on and do whatever they think might work.  Obviously they will focus on approaches that make use of their particular skills. We won't know until afterwards which approaches led to major progress and which were dead ends. Maybe a fruitful approach is to model every connection in a piece of retina in order to distinguish between detailed theories of how cells get to be direction selective. Maybe it's building huge and very artificial neural nets that are much better than other approaches at some difficult task.  Probably it's both of these and many others too. The way to really slow down the expected rate of progress in understanding how the brain works is to insist that there is one right approach and that nearly all the money should go to that approach.  
>>> 
>>> Geoff
>>> 
>>> 
>>> 
>>> On Sat, Jan 25, 2014 at 3:00 PM, Brad Wyble <bwyble at gmail.com> wrote:
>>>> I am extremely pleased to see such vibrant discussion here and my thanks to Juyang for getting the ball rolling.
>>>> 
>>>> Jim, I appreciate your comments and I agree in large measure, but I have always disagreed with you as regards the necessity of simulating everything down to a lowest common denominator.  Like you, I enjoy drawing lessons from the history of other disciplines, but unlike you, I don't think the analogy between neuroscience and physics is all that clear-cut.  The two fields deal with vastly different levels of complexity, and therefore I don't think it should be expected that they will (or should) follow the same trajectory.  
>>>> 
>>>> To take your Purkinje cell example, I imagine that there are those who view any such model that lacks an explicit simulation of the RNA as being incomplete.  To such a person, your models would also be unfit for the literature. So would we then change the standards such that no model can be published unless it includes an explicit simulation of the RNA?  And why stop there?  Where does it end?  In my opinion, we can't make effective progress in this field if everyone is bound to the molecular level.  
>>>> 
>>>> I really think that neuroscience presents a fundamental challenge that is not present in physics, which is that progress can only occur when theory is developed at different levels of abstraction that overlap with one another.  The challenge is not how to force everyone to operate at the same level of formal specificity, but how to allow effective communication between researchers operating at different levels.  
>>>> 
>>>> In aid of meeting this challenge, I think that our field should take more inspiration from engineering, a  model-based discipline that already has to work simultaneously at many different scales of complexity and abstraction. 
>>>> 
>>>> 
>>>> Best, 
>>>> Brad Wyble
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Sat, Jan 25, 2014 at 9:59 AM, james bower <bower at uthscsa.edu> wrote:
>>>>> Thanks for your comments Thomas, and good luck with your effort.
>>>>> 
>>>>> I can't refrain from making the probably culturally biased remark that this seems a very practical approach.
>>>>> 
>>>>> I have for many years suggested that those interested in advancing biology in general, and neuroscience in particular, to a 'paradigmatic' as distinct from a descriptive / folkloric science would benefit from understanding this transition as physics went through it in the 16th and 17th centuries.  In many ways, I think that is where we are today, although with perhaps the decided disadvantage that we have a lot of physicists around who, again in my view, don't really understand the origins of their own science.  By that I mean that they don't understand how much of their current scientific structure, for example the relatively clean separation between 'theorists' and 'experimentalists', is dependent on the foundation built by those (like Newton) who were both in an earlier time.  Once you have a solid underlying computational foundation for a science, then you have the luxury of this kind of specialization - as there is a framework that ties it all together.  The Higgs effort is a very visible recent example.
>>>>> 
>>>>> Neuroscience has nothing of the sort.  As I point out in the article I linked to in my first posting, it was first proposed 40 years ago (by Rodolfo Llinas) that the cerebellar Purkinje cell had active dendrites (i.e., that there were voltage-dependent ion channels in the dendrite, not directly associated with synapses, that governed its behavior), and 40 years of anatomically and physiologically realistic modeling have been necessary to start to understand what those channels do - yet many cerebellar modeling efforts today simply ignore them.  While that again, to many on this list, may seem too far buried in the details, these voltage-dependent channels make the Purkinje cell the computational device that it is.  
>>>>> 
>>>>> Recently, I was asked to review a cerebellar modeling paper in which the authors actually acknowledged that their model lacked these channels because they would have been too computationally expensive to include.  Sadly for those authors, I was asked to review the paper for the usual reason - that several of our papers were referenced accordingly.  They likely won't make that mistake again - as, after of course complimenting them on the fact that they were honest (and knowledgeable) enough to have remarked that their Purkinje cells weren't really Purkinje cells, I had to reject the paper for the same reason.
>>>>> 
>>>>> As I said, they likely won’t make that mistake again - and will very likely get away with it.
>>>>> 
>>>>> Imagine a comparable situation in a field (like physics) which has established a structural base for its enterprise: "We found it computationally expedient to ignore the second law of thermodynamics in our computations - sorry."  BTW, I know that details are ignored all the time in physics as one deals with descriptions at different levels of scale - although even there, the field clearly would like to have a way to link across different levels of scale.  I would claim, however, that that is precisely the 'trick' that biology uses to 'beat' the second law - linking all levels of scale together - another reason why you can't ignore the details in biological models if you really want to understand how biology works.  (Too cryptic a comment, perhaps.)
>>>>> 
>>>>> Anyway, my advice would be to consider how physics made this transition many years ago, and ask the question how neuroscience (and biology) can now.  Key points I think are:
>>>>> - you need to produce students who are REALLY both experimental and theoretical (like Newton).  (and that doesn’t mean programs that “import” physicists and give them enough biology to believe they know what they  are doing, or programs that link experimentalists to physicists to solve their computational problems)
>>>>> - you need to base the efforts on models (and therefore mathematics) of sufficient complexity to capture the physical reality of the system being studied (as Kepler was forced to do to make the sun-centric model of the solar system even close to as accurate as the previous earth-centered system)
>>>>> - you need to build a new form of collaboration and communication that can support the complexity of those models.  Fundamentally, we continue to use the publication system (short papers in a journal) that was invented as part of the transformation of physics way back then.  Our laboratories are also largely isolated and non-cooperative, more appropriate for studying simpler things (like those in physics).  Fortunately for us, we have a new communication tool (the Internet), although, as can be expected, we are mostly using it to reimplement old-style communication systems (e-journals) with a few twists (supplemental materials).
>>>>> - funding agencies need to insist that anyone doing theory needs to be linked to the experimental side REALLY, and vice versa.  I proposed a number of years ago to NIH that they would make it into the history books if they simply required, the following Monday, that any submitted experimental grant include a REAL theoretical and computational component.  Sadly, they interpreted that as meaning that P.I.s should state "an hypothesis" - which itself is remarkable, because most of the 'hypotheses' I see stated in Federal grants are actually statements of what the P.I. believes to be true.  Don't get me started on human imaging studies.  arggg
>>>>> - As long as we are talking about what funding agencies can do, how about the following structure for grants: all grants need to be submitted collaboratively by two laboratories who have different theories (better models) about how a particular part of the brain works.  The grant should support a set of experiments that both parties agree distinguish between their two points of view.  All results need to be published with joint authorship.  In effect, that is how physics works - given its underlying structure.
>>>>> - You need to get rid, as quickly as possible, of the pressure to "translate" neuroscience research explicitly into clinical significance - we are not even close to being able to do that intentionally - and the pressure (which is essentially a giveaway to the pharma and bio-tech industries anyway) is forcing neurobiologists to link to what is arguably the least scientific form of research there is - clinical research.  It just has to be the case that society needs to understand that an investment in basic research will eventually result in all the wonderful outcomes for humans we would all like, but this distortion now is killing real neuroscience just at a critical time, when we may finally have the tools to make the transition to a paradigmatic science.  
>>>>> As some of you know, I have been all about trying to do these things for many years - with the GENESIS project, with the original CNS graduate program at Caltech, with the CNS meetings, (even originally with NIPS) and with the first 'Methods in Computational Neuroscience' course at the Marine Biological Laboratory, whose latest incarnation in Brazil (LASCON) is actually wrapping up next week, and of course with my own research and students.  Of course, I have not been alone in this, but it is remarkable how little impact all that has had on neuroscience or neuro-engineering.  I have to say, honestly, that the strong tendency seems to be for these efforts to snap back to the non-realistic, non-biologically based modeling and theoretical efforts.
>>>>> 
>>>>> Perhaps Canada, in its usual practical and reasonable way (sorry) can figure out how to do this right.
>>>>> 
>>>>> I hope so.
>>>>> 
>>>>> Jim
>>>>> 
>>>>> p.s. I have also been proposing recently that we scuttle the 'intro neuroscience' survey courses in our graduate programs (religious instruction) and instead organize an introductory course built around the history of the discovery of the origin of the action potential, which culminated in the first (and last) Nobel Prize awarded for work in computational neuroscience, for the Hodgkin-Huxley model.  The 50th anniversary of that prize was celebrated last year, and the year before I helped to organize a meeting celebrating the 60th anniversary of the publication of the original papers (which I care much more about anyway).  That meeting was, I believe, the first meeting in neuroscience ever organized around a single (mathematical) model or theory - and in organizing it, I required all the speakers to show the HH model on their first slide, indicating which term or feature of the model their work was related to.  Again, a first - but possible, as this is about the only 'community model' we have.
>>>>> 
>>>>> Most neuroscience textbooks today don't include that equation (a second-order differential equation) and present the HH model primarily as a description of the action potential.  Most theorists regard the HH model as a prime example of how progress can be made by ignoring the biological details.  Both views and interpretations are historically and practically incorrect.  In my opinion, if you can't handle the math in the HH model, you shouldn't be a neurobiologist, and if you don't understand the profound impact of HH's knowledge and experimental study of the squid giant axon on the model, you shouldn't be a neuro-theorist either.  Just saying.  :-)
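>>>>>
>>>>> For reference (this is just the standard textbook form of the model, so nobody has to go look it up), the space-clamped HH equations are:
>>>>>
>>>>>     C_m \frac{dV}{dt} = I_{ext} - \bar{g}_{Na} m^3 h (V - E_{Na}) - \bar{g}_{K} n^4 (V - E_K) - g_L (V - E_L)
>>>>>
>>>>>     \frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V) x,   for x = m, h, n
>>>>>
>>>>> (The propagating-axon form adds the second-order cable term (a / 2 R_i) \partial^2 V / \partial x^2 to the current balance.)  Four coupled nonlinear differential equations, with every rate function and exponent fit by Hodgkin and Huxley to their own voltage-clamp data from the squid giant axon - which is exactly why you cannot separate the model from the experiments.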
>>>>> 
>>>>> 
>>>>> On Jan 25, 2014, at 6:58 AM, Thomas Trappenberg <tt at cs.dal.ca> wrote:
>>>>> 
>>>>>> James, enjoyed your writing.
>>>>>> So, what to do? We are trying to get organized in Canada and are thinking how we fit in with your (US) and the European approaches and big money. My thought is that our advantage might be flexibility by not having a single theme but rather a general supporting structure for theory and theory-experimental interactions. I believe the ultimate place where we want to be is to take theoretical proposals more seriously and try to make specific experiments for them; like the Higgs project. (Any other suggestions? Canadians, see http://www.neuroinfocomp.ca  if you are not already on there.)
>>>>>> Also, with regards to big data, I believe that one very fascinating thing about the brain is that it can function with 'small data'.
>>>>>> Cheers, Thomas
>>>>>> 
>>>>>> On 2014-01-25 12:09 AM, "james bower" <bower at uthscsa.edu> wrote:
>>>>>>> Ivan thanks for the response,
>>>>>>> 
>>>>>>> Actually, the talks at the recent Neuroscience Meeting about the Brain Project either excluded modeling altogether - or declared we in the US could leave it to the Europeans.  I am not in the least bit nationalistic - but collecting data without having models (rather than imaginings) to indicate what to collect is simply foolish, with many examples from history to demonstrate the foolishness.  In fact, one of the primary proponents (and likely beneficiaries) of this Brain Project, who gave the big talk at Neuroscience on the project (showing lots of pretty pictures), started his talk by asking: "What have we really learned since Cajal, except that there are also inhibitory neurons?"  Shocking, not least because Cajal did in fact suggest that there might be inhibitory neurons.  To quote: "Stupid is as stupid does."
>>>>>>> 
>>>>>>> Forbes magazine estimated that finding the Higgs Boson cost over $13BB, conservatively.  The Higgs experiment was absolutely the opposite of a Big Data experiment - in fact, can you imagine the amount of money and time that would have been required if one had simply decided to collect all data at all possible energy levels?  The Higgs experiment is all the more remarkable because it had the nearly unified support of the high energy physics community; not that there weren't and aren't skeptics, but still, it is remarkable that the large majority could agree on the undertaking and effort.  The reason is, of course, that there was a theory - one that dealt with the particulars and the details, not generalities.  In contrast, there is a GREAT DEAL of skepticism (me included) about the Brain Project - its politics and its effects (or lack thereof) - within neuroscience.  (Of course, many people are burying their concerns in favor of tin cups - hoping.)  Neuroscience has had genome envy forever - the connectome is its response - but who says it's all in the connections? (sorry, 'connectionists')  Where is the theory?  Hebb?  You should read Hebb if you haven't - a rather remarkable treatise.  But very far from a theory.
>>>>>>> 
>>>>>>> If you want an honest answer to your question - I have not seen any good evidence so far that the approach works, and I deeply suspect that the nervous system is very much NOT like any machine we have built or designed to date.  I don't believe that Newton would have accomplished what he did had he not, first, been a remarkable experimentalist, tinkering with real things.  I feel the same way about neuroscience.  Having spent almost 30 years building realistic models of its cells and networks (and also doing experiments, as described in the article I linked to), we have made some small progress - but only by avoiding abstractions and paying attention to the details.  Of course, most experimentalists and even most modelers have paid little or no attention.  We have a sociological and structural problem that, in my opinion, only the right kind of models can fix, coupled with a real commitment to the biology - in all its complexity.  And, as the model I linked to tries to make clear, we also have to all agree to start working on common 'community models'.  But like bighorn sheep, it is much safer to stand on your own peak and make a lot of noise.  
>>>>>>> 
>>>>>>> You can predict with great accuracy the movement of the planets in the sky using circles linked to other circles - nice and easy math, and a very adaptable model (just add more circles when you need more accuracy, and invent entities like equant points, etc.).  The problem is, without getting into the nasty math and reality of ellipses, you can't possibly know anything about gravity, or the origins of the solar system, or its various and eventual perturbations.  
>>>>>>> 
>>>>>>> As I have been saying for 30 years:  Beware Ptolemy and curve fitting.
>>>>>>> 
>>>>>>> The details of reality matter.
>>>>>>> 
>>>>>>> Jim
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On Jan 24, 2014, at 7:02 PM, Ivan Raikov <ivan.g.raikov at gmail.com> wrote:
>>>>>>> 
>>>>>>>> 
>>>>>>>> I think perhaps the objection to the Big Data approach is that it is applied to the exclusion of all other modelling approaches. While it is true that complete and detailed understanding of  neurophysiology and anatomy is at the heart of neuroscience, a lot can be learned about signal propagation in excitable branching structures using statistical physics, and a lot can be learned about information representation and transmission in the brain using mathematical theories about distributed communicating processes. As these modelling approaches have been successfully used in various areas of science, wouldn't you agree that they can also be used to understand at least some of the fundamental properties of brain structures and processes? 
>>>>>>>> 
>>>>>>>>   -Ivan Raikov
>>>>>>>> 
>>>>>>>> On Sat, Jan 25, 2014 at 8:31 AM, james bower <bower at uthscsa.edu> wrote:
>>>>>>>>> [snip] 
>>>>>>>>> An enormous amount of engineering and neuroscience continues to think that the feedforward pathway is from the sensors to the inside - rather than seeing this as the actual feedback loop.  Might to some sound like a semantic quibble,  but I assure you it is not.
>>>>>>>>> 
>>>>>>>>> If you believe as I do, that the brain solves very hard problems, in very sophisticated ways, that involve, in some sense the construction of complex models about the world and how it operates in the world, and that those models are manifest in the complex architecture of the brain - then simplified solutions are missing the point.
>>>>>>>>> 
>>>>>>>>> What that means inevitably, in my view, is that the only way we will ever understand what brain-like is, is to pay tremendous attention experimentally and in our models to the actual detailed anatomy and physiology of the brain's circuits and cells.
>>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> Brad Wyble
>>>> Assistant Professor
>>>> Psychology Department
>>>> Penn State University
>>>> 
>>>> http://wyblelab.com
>>> 
>> 
>> 

 

 

Dr. James M. Bower, Ph.D.
Professor of Computational Neurobiology
Barshop Institute for Longevity and Aging Studies
15355 Lambda Drive
University of Texas Health Science Center
San Antonio, Texas 78245

Phone: 210 382 0553
Email: bower at uthscsa.edu
Web: http://www.bower-lab.org
twitter: superid101
linkedin: Jim Bower


