<div dir="ltr">Hi Thomas, thanks for your feedback.<div><br></div><div>I agree with you that we will have to choose a deliberately incorrect model. I see that as following from the fact that you can't actually compute the Kolmogorov complexity; you can only approximate it. This means you will have to use heuristics and make sacrifices in your model: you will overcompress here and undercompress there, overfitting and underfitting in different parts of the space.</div>
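To make the approximation point concrete, any off-the-shelf compressor yields a computable upper bound on Kolmogorov complexity. A minimal sketch in Python (the zlib bound is crude, and the toy byte strings stand in for real data):

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Compressed length in bytes: a computable upper bound on K(data),
    since the decompressor plus this string is a program that prints data."""
    return len(zlib.compress(data, level=9))

structured = b"ab" * 500       # highly regular: compresses to a few dozen bytes
random_ish = os.urandom(1000)  # incompressible in expectation

print(complexity_upper_bound(structured))  # far below 1000
print(complexity_upper_bound(random_ish))  # near (or slightly above) 1000
```

The gap between the bound and the true Kolmogorov complexity is exactly where the heuristics and sacrifices live.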
<div><br></div><div>It seems, though, that this problem can be ameliorated by Big Data. The more data we collect, the more useful constraints we can apply to the problem. In the limit of Big Data, our model is not underspecified at all, but instead falls squarely within the normal range of human variation.</div>
<div><br></div><div>On the way to this utopia, the choice of a deliberately incorrect model is going to be a very hard problem. For example, given the simplest possible model, it may be impossible to decide which of the possible bits of complexity we should add at any given step when following the Ockham gradient during our hill climb. This means that, right from the start, we are already stuck on some local maximum.</div>
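To illustrate how a greedy Ockham-style climb can stall, here is a minimal hill-climbing sketch (the landscape and its scores are invented purely for illustration):

```python
def hill_climb(score, x0, neighbors, max_steps=1000):
    """Greedy ascent: move to the best-scoring neighbor, stop when none improves."""
    x = x0
    for _ in range(max_steps):
        candidates = neighbors(x)
        if not candidates:
            return x
        best = max(candidates, key=score)
        if score(best) <= score(x):
            return x  # no neighbor improves: a local maximum
        x = best
    return x

# Toy landscape: a local peak at x=2 (score 3) and a global peak at x=8
# (score 10), separated by a valley the greedy climber will not cross.
landscape = {0: 0, 1: 2, 2: 3, 3: 1, 4: 0, 5: 1, 6: 4, 7: 7, 8: 10, 9: 6}
neighbors = lambda x: [n for n in (x - 1, x + 1) if n in landscape]

print(hill_climb(landscape.get, 0, neighbors))  # stops at 2, not 8
```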
<div><br></div><div>Still, there are so many useful constraints, and so much relevant data, that I don't see why we can't create a digital human that falls within the normal range without modeling a human at the Planck scale. And given that there is a normal range, it also seems as though we have some leeway with regard to the model - i.e., we don't have to get it exactly right. We can substantially compress the model and it will still be a normal human, despite the fact that there are many different ways to compress it. </div>
<div><br></div><div>So, ultimately, my position is Big Data all the way :) The more constraints the merrier - we don't actually have to satisfy them all, but the more we have available, the easier it will be to hit the target.</div>
<div><br></div><div><div style="color:rgb(80,0,80);font-family:arial,sans-serif;font-size:13px"><p class="MsoNormal">Brian Mingus<u></u><u></u></p></div><div style="color:rgb(80,0,80);font-family:arial,sans-serif;font-size:13px">
<p class="MsoNormal"><u></u> <u></u></p></div><div style="color:rgb(80,0,80);font-family:arial,sans-serif;font-size:13px"><p class="MsoNormal">Graduate student<u></u><u></u></p></div><div style="color:rgb(80,0,80);font-family:arial,sans-serif;font-size:13px">
<p class="MsoNormal">Department of Psychology and Neuroscience<u></u><u></u></p></div><div style="color:rgb(80,0,80);font-family:arial,sans-serif;font-size:13px"><p class="MsoNormal">University of Colorado at Boulder<u></u><u></u></p>
</div><div style="color:rgb(80,0,80);font-family:arial,sans-serif;font-size:13px"><p class="MsoNormal"><a href="http://grey.colorado.edu/mingus" target="_blank">http://grey.colorado.edu/mingus</a></p></div></div></div><div class="gmail_extra">
<br><br><div class="gmail_quote">On Sun, Jan 26, 2014 at 1:11 PM, Thomas G. Dietterich <span dir="ltr"><<a href="mailto:tgd@eecs.oregonstate.edu" target="_blank">tgd@eecs.oregonstate.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div lang="EN-US" link="blue" vlink="purple"><div><p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">Dear Brian,<u></u><u></u></span></p><p class="MsoNormal">
<span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">Please keep in mind that MDL, Ockham’s razor, PCA, and similar regularization approaches focus on the problem of *<b>prediction</b>* (or, equivalently, compression). Given a fixed amount of data and a flexible class of models, these principles tell us how to modulate the expressiveness of the model to maximize predictive accuracy. I would characterize it as follows: “Which deliberately incorrect model should we adopt in order to optimize predictive accuracy?”<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">One stance toward creating an AI system is to pursue this purely functional approach and model a person as an input-output mapping (with latent state variables, as appropriate). Such an approach might be very useful both for engineering and for science. From a scientific perspective, it would tell us that if we build a system with certain properties, it can exhibit this input-output behavior.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">But it would not be a satisfactory theory of neuroscience for two reasons. First, it only provides sufficient conditions but does not show they are necessary. There might be other ways of producing the behavior, and the brain might implement one of those instead. Second, even if it could be made into a necessary and sufficient condition (e.g., by proving that all systems lacking certain properties would NOT exhibit the desired behavior), it would still not explain how the chemistry and biology of the brain produces the required properties. To fall back on the old bird vs. airplane analogy, the accomplishments of the Wright brothers (and the field of aerodynamics) provided a theory of how flight could be achieved. But we are still learning at the biological level how birds actually do it.<u></u><u></u></span></p>
<div class="im"><p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">-- <u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">Thomas G. Dietterich, Distinguished Professor Voice: <a href="tel:541-737-5559" value="+15417375559" target="_blank">541-737-5559</a><u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">School of Electrical Engineering FAX: <a href="tel:541-737-1300" value="+15417371300" target="_blank">541-737-1300</a><u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d"> and Computer Science URL: <a href="http://eecs.oregonstate.edu/~tgd" target="_blank">eecs.oregonstate.edu/~tgd</a><u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">US Mail: 1148 Kelley Engineering Center <u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">Office: 2067 Kelley Engineering Center<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d">Oregon State Univ., Corvallis, OR 97331-5501<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New","serif";color:#1f497d"><u></u> <u></u></span></p></div><p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span></b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> Connectionists [mailto:<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>] <b>On Behalf Of </b>Brian J Mingus<br>
<b>Sent:</b> Saturday, January 25, 2014 8:23 PM<br><b>To:</b> Brad Wyble<br><b>Cc:</b> <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a></span></p><div class="im">
<br><b>Subject:</b> Re: Connectionists: Brain-like computing fanfare and big data fanfare<u></u><u></u></div><p></p><p class="MsoNormal"><u></u> <u></u></p><div><div><p class="MsoNormal">Hi Brad et al. - thanks very much for this fun and entertaining philosophical discussion :)<u></u><u></u></p>
</div><div><div class="h5"><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">With regard to turtles all the way down, and also with regard to choosing the appropriate level of analysis for modeling, I'd like to reiterate a position I stated earlier but didn't flesh out in enough detail.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">There exists a formalization of Ockham's razor in the field of Algorithmic Information Theory: the Minimum Description Length (MDL) principle. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">This perspective essentially says that we are searching for the optimal compression of all of the data relating to the brain. This means that we don't want to compress away relevant distinctions, but we also don't want to leave redundancies uncompressed. This optimal compression can be represented as a computer program that outputs all of the brain data (aka a model), and the length of the shortest such program is known as the Kolmogorov complexity. <u></u><u></u></p>
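A two-part MDL score can be written down in a few lines. This is a minimal sketch using a toy Bernoulli model class (invented for illustration); it also exhibits the undercompression problem: the alternating string below is perfectly regular, but this model class cannot exploit that regularity.

```python
import math

def mdl_bits(seq):
    """Two-part description length of a binary sequence under a Bernoulli model:
    roughly (1/2) log2 n bits to state the parameter, plus n * H(p_hat) bits
    to encode the data given the parameter."""
    n = len(seq)
    p = sum(seq) / n
    if p in (0.0, 1.0):
        h = 0.0  # degenerate model: data cost is zero
    else:
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 0.5 * math.log2(n) + n * h

print(mdl_bits([1] * 64))     # 3.0 bits: maximally compressible
print(mdl_bits([0, 1] * 32))  # 67.0 bits: looks 'random' to this model class
```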
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Now there is something weird about what I have just described, which is that the resulting model will produce not just the data for a single brain, but the data for <i>every</i> brain - a kind of meta-brain. And this is not quite what we are looking for. And due to the turtles problem it is probably ill-posed, in that the length of the description may be infinite as we zoom in to finer levels of detail.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">So we need to provide some relevant constraints on the problem to make it tractable. Based on what I just described, the MDL model of your brain <i>is</i> your brain. This is essentially because we haven't defined a utility function, and we haven't done that because we aren't quite sure what exactly it is we are doing, or what we are looking for, when modeling the brain.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">To begin fixing this problem, we can rotate this perspective into a tool that we are all probably familiar with - factor analysis or, more precisely, PCA. What we are essentially looking for, first and foremost, is a model that explains the first principal component of just one person's comprehensive brain dataset (which includes behavioral data). Then we want to study this component (which is tantamount to a model of the brain) and see what it can do.<u></u><u></u></p>
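As a concrete sketch of that first step, here is PCA via the SVD on a synthetic dataset (the "brain dataset" below is invented: 200 samples of 10 measurements driven largely by a single latent factor):

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))    # one hidden driver
loadings = rng.normal(size=(1, 10))   # how it shows up in 10 channels
X = latent @ loadings + 0.1 * rng.normal(size=(200, 10))

Xc = X - X.mean(axis=0)               # center each channel
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)       # variance explained per component

pc1 = Vt[0]        # direction of the first principal component
scores = Xc @ pc1  # each sample's coordinate along it

print(f"variance explained by PC1: {explained[0]:.1%}")  # nearly all of it
```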
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">What will this first principal component look like? Now we need to define what exactly it is that we are after. I would argue that our model should be composed of neuron-like elements connected in networks, and that when we look at the statistical properties of these networks, they should be quite similar to what we see in humans. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Most importantly, however, I would argue that this model, when raised as a human, should exhibit some distinctly human traits. It should pass not merely a trivial Turing test, but a deep Turing test. After having been raised as and with human beings, but not exposed to any substantial philosophy, this model should independently invent consciousness philosophy.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">As you might imagine, our abstract, high-level model brain, which captures the first principal component of the brain data, might not be able to do this. Thus, we will start adding in more components that explain more of the variance, iteratively increasing our description length. This is a distinctly top-down approach, in which we only add relevant detail as it becomes obvious that the current model just isn't quite human. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">This approach follows a scientific gradient advocated by Ockham's razor, in that we start with the simplest description (brain model) that explains the most variance, and gradually increase the size of the description until the model finally reinvents consciousness philosophy and can live among humans.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">In my admittedly biased experience, the first appropriate level of analysis is approximately point-neuron deep neural network architectures. However, this might actually be too low a level - we might want to start with even more abstract, modern-day NIPS-level models, and confirm that, although they can behave like humans, they can't reinvent consciousness philosophy and are thus more akin to zombie-like automata. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Of course, with sufficient computing power our modeling approach can be somewhat more sloppy - we can begin experimenting with the synthesis of different levels of analysis right away.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">However, before we do any of this "for real" we probably want to comprehensively discuss the ethics of raising beings that are ultimately similar to humans, but are not quite human, and further, the ethics of raising digital humans. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Lastly, to touch back on the original topic - Big Data - I think it's clear that the more data we have, the merrier. However, it also makes sense to follow the Ockham gradient. Ultimately, we are really just not as close to creating a human being as it may seem, and so it is probably safe, for the time being, to collect data from all levels of analysis willy-nilly. However, when it comes time to actually build the human, we should be more careful, for the sake of the being we create. Indeed, perhaps we should be <i>sure</i> that it will reinvent consciousness philosophy before we ever turn it on in the first place.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">If anyone has an idea of how to do that, I would be extremely interested to hear about it.<u></u><u></u></p></div><div><p class="MsoNormal">
<u></u> <u></u></p></div><div><p class="MsoNormal">Brian Mingus<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Graduate student<u></u><u></u></p></div><div><p class="MsoNormal">
Department of Psychology and Neuroscience<u></u><u></u></p></div><div><p class="MsoNormal">University of Colorado at Boulder<u></u><u></u></p></div><div><p class="MsoNormal"><a href="http://grey.colorado.edu/mingus" target="_blank">http://grey.colorado.edu/mingus</a><u></u><u></u></p>
</div></div></div></div><div><div class="h5"><div><p class="MsoNormal" style="margin-bottom:12.0pt"><u></u> <u></u></p><div><p class="MsoNormal">On Sat, Jan 25, 2014 at 7:52 PM, Brad Wyble <<a href="mailto:bwyble@gmail.com" target="_blank">bwyble@gmail.com</a>> wrote:<u></u><u></u></p>
<div><div><p class="MsoNormal">Jim, <u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Great debate! There are several good points here..<u></u><u></u></p></div><div><p class="MsoNormal">
<u></u> <u></u></p></div><div><p class="MsoNormal">First, I agree with you that models with tidy, analytical solutions are probably not the ultimate answer, as biology is unlikely to exhibit behavior that coincides with mathematical formalisms that are easy to represent in equations. In fact, I think that seeking such solutions can get in the way of progress in some cases. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">I also agree with you that community models are a good idea, and I am not advocating that everyone should build their own model. But I think that we need a hierarchy of such community models at multiple levels of abstraction, with clear ways of translating ideas and constraints from each level to the next. The goal of computational neuroscience is not to build the ultimate model, but to build a shared understanding in the minds of the entire body of neuroscientists with a minimum of communication failures. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Next, I think that you're espousing a purely bottom-up approach to modelling the brain (i.e., that if we just build it, understanding will follow from the emergent dynamics). I very much admire your strong position, but I really can't agree with it. I return to the question of how we will even know what the bottom floor is in such an approach. You seem to imply in previous emails that it's a channel/cable model, but someone else might argue that we'd have to represent interactions at the atomic level to truly capture the dynamics of the circuit. So if that's the only place to start, how will we ever make serious progress? The computational capacity to simulate even a single neuron at the atomic level on a supercluster is probably a decade away. And once we'd accomplished that, someone might point out a case in which subatomic interactions play a functional role in the neuron, and then we've got to wait another ten years to be able to model a single neuron again? <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">To me, it really looks like turtles all the way down which means that we have to choose our levels of abstraction with an understanding that there are important dynamics at lower levels that will be missed. However if we build in constraints from the behavior of the system, such abstract models can nevertheless provide a foothold for climbing a bit higher in our understanding. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Is there some reason that you think channels are a sufficient level of detail? (or maybe I've mischaracterized your position)<u></u><u></u></p>
</div><div><p class="MsoNormal"><span style="color:#888888"><u></u> <u></u></span></p></div><div><p class="MsoNormal"><span style="color:#888888">-Brad<u></u><u></u></span></p></div><div><p class="MsoNormal"><span style="color:#888888"><u></u> <u></u></span></p>
</div><div><p class="MsoNormal"><span style="color:#888888"><u></u> <u></u></span></p></div><div><p class="MsoNormal"><span style="color:#888888"><u></u> <u></u></span></p></div></div><div><div><div><p class="MsoNormal" style="margin-bottom:12.0pt">
<u></u> <u></u></p><div><p class="MsoNormal">On Sat, Jan 25, 2014 at 7:09 PM, james bower <<a href="mailto:bower@uthscsa.edu" target="_blank">bower@uthscsa.edu</a>> wrote:<u></u><u></u></p><div><p class="MsoNormal">
About to sign off here, as I have probably already taken too much bandwidth (although it has been a long time).<u></u><u></u></p><div><p class="MsoNormal"><u></u> <u></u></p><div><p class="MsoNormal">But just for final clarity on the point about physics: I am not claiming that the actual tools, etc., developed by physics will apply, since they were developed mostly to study non-biological and ‘simpler’ systems (for example, systems where the elements, unlike neurons, aren’t ‘individualized’ and can therefore be subjected to a certain amount of averaging, as in thermodynamics).<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">But I am suggesting (albeit in an oversimplified way) that the transition from a largely folkloric, philosophically (religiously) driven style of physics to the physics of today was accomplished in the 16th and 17th centuries by the rejection of the curve-fitting, ‘simplified’ and self-reflective Ptolemaic model of the solar system (not, it turns out, actually for that reason, but because the Ptolemaic model had become too complex and impure - the famous equant point). Instead, Kepler, Newton, and others developed a model that actually valued the physical structure of that system, independent of the philosophical, self-reflecting previous set of assumptions. I know, I know that this is an oversimplified description of what happened, but it is very likely that Newton's early (age 19) discovery of what approximated the inverse square law in the ‘realistic model’ he had constructed of the earth-moon system (where it was no problem and pretty clearly evident that the moon orbited the earth in a regular way) led in later years to his development of mechanics - which clearly provided an important "community model” of the sort we completely lack in neuroscience and, it seems to me, continue to try to avoid. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">I have offered for years to buy the beer at the CNS meeting if all the laboratories describing yet another model of the hippocampus or the visual cortex would get together to agree on a single model they would all work on. No takers yet. The paper I linked to in my first post describes how that has happened for the Cerebellar Purkinje cell, because of GENESIS and because we didn’t block others from using the model, even to criticize us. However, when I recently sent that paper to a computational neuroscientist who I heard was getting into Purkinje cell modeling, he wrote back to say he was developing his own model, thank you very much.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">The proposal that we all be free to build our own models - and everyone is welcome, is EXACTLY the wrong direction. <u></u><u></u></p></div>
<div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">We need more than calculus - and although I understand their attractiveness, believe me, models that admit closed-form solutions are not likely to be particularly useful in biology, where the averaging won’t work in the same way. The relationship between scales is different, lots of things are different - which means that a lot of the tools will have to be different too. And I even agree that some of the tools developed by engineering, where one is actually trying to make things that work, might end up being useful, or perhaps even more useful. However, the transition to paradigmatic science will, I believe, critically depend on the acceptance of community models (they are the ‘paradigm’), and the models with the most persuasive force, as well as the greatest likelihood of revealing unexpected functional relationships, are ones that FIRST account for the structure of the brain, and SECOND are used to explore function (rather than, as is usually the case, the other way around).<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">As described in the paper I posted, that is exactly what has happened through long hard work (since 1989) using the Purkinje cell model.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">In the end, unless you are a dualist (which I suspect many actually are, in effect), brain computation involves nothing beyond the nervous system and its physical and physiological structure. Therefore, that structure will be the ultimate reference for how things really work, no matter what level of scale you seek to describe.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">From 30 years of effort, I believe even more firmly now than I did back then, that, like Newton and his friends, this is where we should start - figuring out the principles and behavior from the physics of the elements themselves.<u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">You can claim it is impossible - you can claim that models at other levels of abstraction can help, however, in the end ‘the truth’ lies in the circuitry in all its complexity. But you can’t just jump into the complexity, without a synergistic link to models that actually provide insights at the detailed level of the data you seek to collect. <u></u><u></u></p>
</div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">IMHO.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Jim<u></u><u></u></p></div><div>
<p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">(no ps)<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">
<u></u> <u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><div><div><p class="MsoNormal"><u></u> <u></u></p>
<div><div><p class="MsoNormal">On Jan 25, 2014, at 4:44 PM, Dan Goodman <<a href="mailto:dg.connectionists@thesamovar.net" target="_blank">dg.connectionists@thesamovar.net</a>> wrote:<u></u><u></u></p></div><p class="MsoNormal">
<br><br><u></u><u></u></p><p class="MsoNormal">The comparison with physics is an interesting one, but we have to remember that neuroscience isn't physics. For a start, neuroscience is clearly much harder than physics in many ways. Linear and separable phenomena are much harder to find in neuroscience, and so both analysing and modelling data is much more difficult. Experimentally, it is much more difficult to control for independent variables in addition to the difficulty of working with living animals.<br>
<br>So although we might be able to learn things from the history of physics - and I tend to agree with Axel Hutt that one of those lessons is to use the simplest possible model rather than trying to include all the biophysical details we know to exist - while neuroscience is in its pre-paradigmatic phase (agreed with Jim Bower on this) I would say we need to try a diverse set of methodological approaches and see what wins. In terms of funding agencies, I think the best thing they could do would be to not insist on any one methodological approach to the exclusion of others.<br>
<br>I also share doubts about the idea that if we collect enough data then interesting results will just pop out. On the other hand, there are some valid hypotheses about brain function that require the collection of large amounts of data. Personally, I think that we need to understand the coordinated behaviour of many neurons to understand how information is encoded and processed in the brain. At present, it's hard to look at enough neurons simultaneously to be very sure of finding this sort of coordinated activity, and this is one of the things that the HBP and BRAIN initiative are aiming at.<br>
<br>Dan<u></u><u></u></p></div><p class="MsoNormal"><u></u> <u></u></p></div></div><div><div><p class="MsoNormal"> <u></u><u></u></p><p class="MsoNormal"> <u></u><u></u></p><p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Verdana","sans-serif";color:#950004">Dr. James M. Bower Ph.D.</span><u></u><u></u></p>
<div><p class="MsoNormal"><span style="font-size:11.0pt">Professor of Computational Neurobiology</span><u></u><u></u></p><p class="MsoNormal"><span style="color:#4200a8">Barshop Institute for Longevity and Aging Studies.</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt">15355 Lambda Drive</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11.0pt">University of Texas Health Science Center </span><u></u><u></u></p><p class="MsoNormal">
<span style="font-size:11.0pt">San Antonio, Texas 78245</span><u></u><u></u></p><p class="MsoNormal"> <u></u><u></u></p><p class="MsoNormal"><b><span style="font-size:13.0pt">Phone: <a href="tel:210%20382%200553" target="_blank">210 382 0553</a></span></b><u></u><u></u></p>
<p class="MsoNormal">Email: <a href="mailto:bower@uthscsa.edu" target="_blank">bower@uthscsa.edu</a><u></u><u></u></p><p class="MsoNormal">Web: <a href="http://www.bower-lab.org" target="_blank">http://www.bower-lab.org</a><u></u><u></u></p>
<p class="MsoNormal">twitter: superid101<u></u><u></u></p><p class="MsoNormal">linkedin: Jim Bower<u></u><u></u></p><p class="MsoNormal"> <u></u><u></u></p>
<p class="MsoNormal"> <u></u><u></u></p></div></div></div><p class="MsoNormal"><u></u> <u></u></p></div></div></div></div><p class="MsoNormal"><br><br clear="all"><u></u><u></u></p><div><p class="MsoNormal"><u></u> <u></u></p>
</div></div></div><div><p class="MsoNormal">-- <u></u><u></u></p><div><p class="MsoNormal">Brad Wyble<br>Assistant Professor<br>Psychology Department<br>Penn State University<u></u><u></u></p><div><p class="MsoNormal"><u></u> <u></u></p>
</div><div><p class="MsoNormal"><a href="http://wyblelab.com" target="_blank">http://wyblelab.com</a><u></u><u></u></p></div></div></div></div></div><p class="MsoNormal"><u></u> <u></u></p></div></div></div></div></div></blockquote>
</div><br></div>