Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory

james bower bower at uthscsa.edu
Tue Feb 11 10:27:15 EST 2014


With respect to big data, attention and vision.

Of course we collect a lot of data - however, it is precisely my point that we ‘point’ our receptors toward the data we want, based on what we already think is out there.  “Attention,” as I generally hear it discussed, doesn’t carry enough of the sense that we are seeking data we expect.

Of course, in the laboratory, our monkeys are often given tasks in some random presentation order, so that they can’t predict what is coming, which makes the data presentation better controlled and the results probably easier to interpret.  In the real world, the neural reaction to unexpected stimuli doesn’t seem to me to involve very high-level processing at all - duck and cover, then run.

The mogul runs in the freestyle last night were very precisely laid out - and it is remarkable how intently, in mid-flight, the skiers stare at the ground.  They aren’t calculating on the fly (literally, in this case); they are collecting data from what they already know is there.  Training in that case isn’t learning in the way I sense most people think about it - it’s more a fine-tuning of the expectation system.  At least that’s how I think of it.

It was telling that in the downhill, the US skier who had been killing the course in practice attributed his failure in the final run to the fact that the light had changed - he couldn’t get the data in the form he expected.

Jim



On Feb 11, 2014, at 4:34 AM, Gary Cottrell <gary at eng.ucsd.edu> wrote:

> Oh, and I forgot to mention, this is just visual information, obviously. Compare this to the 5-8 syllables per second we get in speech (the figure depends on the language, but the information rate seems to be about the same across languages when measured relative to Vietnamese; Pellegrino et al. 2011). So speech comes in at roughly double the rate of fixations per second, but we aren't always listening to speech. For those who listen to rap, Eminem comes in at about 10 syllables per second, but he is topped by Outsider, at 21 syllables per second. 
> 
> g.
> 
> 
> François Pellegrino, Christophe Coupé, Egidio Marsico (2011). A Cross-Language Perspective on Speech Information Rate. Language, 87(3):539-558. DOI: 10.1353/lan.2011.0057
> 
> On Feb 11, 2014, at 11:22 AM, Gary Cottrell <gary at eng.ucsd.edu> wrote:
> 
>> interesting points, jim!
>> 
>> I wonder, though, why you worry so much about "big data"?
>> 
>> I think it is more like "appropriate-sized data." We have never before been able to give our models anything like the kind of data we get in our first years of life. Let's do a little back-of-the-envelope on this. We saccade about 3 times a second, which, if you are awake 16 hours a day (make that 20 for Terry Sejnowski), comes out to about 172,800 fixations per day, or high-dimensional samples of the world, if you like. One year of that, not counting drunken blackouts, etc., is about 63 million samples. After 10 years that's 630 million samples. This dwarfs ImageNet, or at least the 1.2 million images used by Krizhevsky et al. Of course, there is a lot of redundancy here (spatial and temporal), which I believe the brain uses to construct its models (e.g., by some sort of learning rule like Földiák's), so maybe 1.2 million isn't so bad. 
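>> 
>> For the record, here is that arithmetic as a few lines of Python (just a toy restatement of the numbers above - 3 saccades per second, 16 waking hours, 365 days):
>> 
>> saccades_per_sec = 3
>> waking_hours = 16
>> fixations_per_day = saccades_per_sec * waking_hours * 3600    # 172,800
>> fixations_per_year = fixations_per_day * 365                  # ~63 million
>> fixations_per_decade = fixations_per_year * 10                # ~630 million
>> print(fixations_per_day, fixations_per_year, fixations_per_decade)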
>> 
>> On the other hand, you may argue, ImageNet is nothing like the real world - it is, after all, made of pictures taken by humans, so objects tend to be centered. This leads to a comment on your worry about filtering data to avoid the big data "problem." Well, I would suggest that there is a lot of work on attention (some of it completely compatible with connectionist models, e.g., Itti et al., 1998; Zhang et al., 2008) that would cause a system to focus on objects, just as photographers do. So it isn't that we haven't worried about this as you have; it's just that we've done something about it! ;-)
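>> 
>> To make that concrete, here is a toy center-surround saliency sketch in Python/NumPy - my own illustration of the general idea, not the Itti et al. model or the SUN code, and the function name and sigma values are made up for the example:
>> 
>> import numpy as np
>> from scipy.ndimage import gaussian_filter
>> 
>> def saliency_map(intensity, center_sigma=2.0, surround_sigma=8.0):
>>     """Crude bottom-up saliency: local contrast via a center-surround difference."""
>>     center = gaussian_filter(intensity, center_sigma)
>>     surround = gaussian_filter(intensity, surround_sigma)
>>     sal = np.abs(center - surround)          # regions that stand out from their surround
>>     return sal / (sal.max() + 1e-12)         # normalize to [0, 1]
>> 
>> img = np.random.rand(128, 128)               # stand-in for a grayscale image
>> sal = saliency_map(img)
>> fixation = np.unravel_index(np.argmax(sal), sal.shape)   # "look" at the most salient spot
>> 
>> Crude, but it shows why an attention front end tends to hand the rest of the system roughly the object-centered crops that photographers produce by hand.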
>> 
>> Anyway, I like your ideas about the cerebellum - sounds like there are a bunch of Ph.D. theses in there…
>> 
>> cheers,
>> gary
>> 
>> 
>> Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254–1259.
>> 
>> Zhang, Lingyun, Tong, Matthew H., Marks, Tim K., Shan, Honghao, and Cottrell, Garrison W. (2008). SUN: A Bayesian Framework for Saliency Using Natural Statistics. Journal of Vision 8(7):32, 1-20.
>> The code for SUN is here
>> On Feb 10, 2014, at 10:04 PM, james bower <bower at uthscsa.edu> wrote:
>> 
>>> One other point that some of you might find interesting.
>>> 
>>> While most neurobiologists and textbooks describe the cerebellum as involved in motor control, I suspect that it is actually not a motor control device in the usual sense at all.  We proposed 20+ years ago that the cerebellum is actually a sensory control device, using the motor system (although not only the motor system) to precisely position sensory surfaces to collect the data the nervous system actually needs and expects.  In the context of the current discussion about big data, such a mechanism would also contribute to the nervous system’s working around a potential data problem.  
>>> 
>>> Leaping and jumping forward, as an extension of this idea, we have proposed that autism may actually be an adaptive response to cerebellar dysfunction - and therefore a response to uncontrolled big data flux (in your terms).
>>> 
>>> So, if correct, the brain adapts to being confronted with badly controlled data acquisition by shutting it off.
>>> 
>>> Just to think about.  Again, papers available for anyone interested.  
>>> 
>>> Given how much we do know about cerebellar circuitry, this could actually be an interesting opportunity for some cross-disciplinary thinking about how one would use an active sensory data acquisition controller to select the sensory data that is ideal given an internal model of the world.  Almost all of the NN-type cerebellar models to date have been built around either the idea that the cerebellum is a motor timing device, or the idea that it is involved in learning (yadda yadda).  
>>> 
>>> Perhaps most on this list interested in brain networks don’t know that far and away, the pathway with the largest total number of axons is the pathway from the (entire) cerebral cortex to the cerebellum.  We have predicted that this pathway is the mechanism by which the cerebral cortex “loads” the cerebellum with knowledge about what it expects and needs.
>>> 
>>> 
>>> 
>>> Jim
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Feb 10, 2014, at 2:24 PM, james bower <bower at uthscsa.edu> wrote:
>>> 
>>>> Excellent points - reminds me again of the conversation years ago about whether a general structure like a Hopfield Network would, by itself, solve a large number of problems.  All evidence from the nervous system points in the direction of strong influence of the nature of the problem on the solution, which also seems consistent with what has happened in real world applications of NN to engineering over the last 25 years.
>>>> 
>>>> For biology, however, the interesting (even fundamental) question becomes, what the following actually are:
>>>> 
>>>>> endowed us with custom tools for learning in different domains
>>>> 
>>>>> the contribution from evolution to neural wetware might be
>>>> 
>>>> I have mentioned previously that my guess (and surprise), based on our own work over the last 30 years in olfaction, is that ‘learning’ may altogether be overemphasized (we do love free will).  Yes, in our laboratories we place animals in situations where they have to “learn”, but my suspicion is that in the real world where brains actually operate and evolve, most of what we do is actually ‘recognition’ that involves matching external stimuli to internal ‘models’ of what we expect to be there.  I think it is quite likely that this ‘deep knowledge’ is how evolution has most patterned neural wetware.  Seems to me a way to avoid NP problems and the pitfalls of dealing with “big data”, which, as I have said, I suspect the nervous system avoids at all costs.
>>>> 
>>>> I have mentioned that we have reason to believe (to my surprise) that, starting with the olfactory receptors, the olfactory system already “knows” about the metabolic structure of the real world.  Accordingly, we are predicting that its receptors aren’t organized to collect a lot of data about the chemical structure of the stimulus (the way a chemist would), but instead look for chemical signatures of metabolic processes.  For example, it may be that 1/3 or more of mouse olfactory receptors detect one of the three molecules that are produced by many different kinds of fruit when ripe.  “Learning” in olfaction might be some small additional mechanism you put on top to change the ‘hedonic’ value of the stimulus - i.e., you can ‘learn’ to like fermented fish paste.  But it is very likely that recognizing the (usually deleterious) environmental signature of fermentation is “hard wired”, requiring ‘learning’ to change the natural category.   
>>>> 
>>>> I know that many cognitive types (and philosophers as well) have developed much more nuanced discussions of these questions - however, I have always been struck by how much of the effort in NNs is focused on ‘learning’, as if it were the primary attribute of the nervous system we are trying to figure out.  It seems to me that figuring out “what the nose already knows” is much more important.
>>>> 
>>>> 
>>>> Jim
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> On Feb 10, 2014, at 10:38 AM, Gary Marcus <gary.marcus at nyu.edu> wrote:
>>>> 
>>>>> Juergen and others,
>>>>> 
>>>>> I am with John on his two basic concerns, and think that your appeal to computational universality is a red herring; I cc the entire group because I think that these issues lie at the center of why many of the hardest problems in AI and neuroscience continue to lie out of reach, despite in-principle proofs about computational universality. 
>>>>> 
>>>>> John’s basic points, which I have also made before (e.g., in my books The Algebraic Mind and The Birth of the Mind and in my periodic New Yorker posts), are two:
>>>>> 
>>>>> a. It is unrealistic to expect that hierarchies of pattern recognizers will suffice for the full range of cognitive problems that humans (and strong AI systems) face. Deep learning, to take one example, excels at classification, but has thus far had relatively little to contribute to inference or natural language understanding.  Socher et al.’s impressive CVG work, for instance, is parasitic on a traditional (symbolic) parser, not a soup-to-nuts neural net induced from input. 
>>>>> 
>>>>> b. It is unrealistic to expect that all the relevant information can be extracted by any general purpose learning device.
>>>>> 
>>>>> Yes, you can reliably map any arbitrary input-output relation onto a multilayer perceptron or recurrent net, but only if you know the complete input-output mapping in advance. Alas, you can’t be guaranteed to do that in general given arbitrary subsets of the complete space; in the real world, learners see subsets of possible data and have to make guesses about what the rest will be like. Wolpert’s No Free Lunch work is instructive here (and also in line with how cognitive scientists like Chomsky, Pinker, and myself have thought about the problem). For any problem, I presume that there exists an appropriately-configured net, but there is no guarantee that in the real world you are going to be able to correctly induce the right system via a general-purpose learning algorithm given a finite amount of data, with a finite amount of training. Empirically, neural nets of roughly the form you are discussing have worked fine for some problems (e.g. backgammon), been no match for their symbolic competitors in other domains (chess), and worked only as an adjunct rather than a central ingredient in still others (parsing, question-answering a la Watson, etc.); in other domains, like planning and common-sense reasoning, there has been essentially no serious work at all.
>>>>> 
>>>>> My own take, informed by evolutionary and developmental biology, is that no single general purpose architecture will ever be a match for the endproduct of a billion years of evolution, which includes, I suspect, a significant amount of customized architecture that need not be induced anew in each generation.  We learn as well as we do precisely because evolution has preceded us, and endowed us with custom tools for learning in different domains. Until the field of neural nets more seriously engages in understanding what the contribution from evolution to neural wetware might be, I will remain pessimistic about the field’s prospects.
>>>>> 
>>>>> Best,
>>>>> Gary Marcus
>>>>> 
>>>>> Professor of Psychology
>>>>> New York University
>>>>> Visiting Cognitive Scientist
>>>>> Allen Institute for Brain Science
>>>>> Allen Institute for Artificial Intelligence
>>>>> co-edited book coming late 2014:
>>>>> The Future of the Brain: Essays By The World’s Leading Neuroscientists
>>>>> http://garymarcus.com/
>>>>> 
>>>>> On Feb 10, 2014, at 10:26 AM, Juergen Schmidhuber <juergen at idsia.ch> wrote:
>>>>> 
>>>>>> John,
>>>>>> 
>>>>>> perhaps your view is a bit too pessimistic. Note that a single RNN already is a general computer. In principle, dynamic RNNs can map arbitrary observation sequences to arbitrary computable sequences of motoric actions and internal attention-directing operations, e.g., to process cluttered scenes, or to implement development (the examples you mentioned). From my point of view, the main question is how to exploit this universal potential through learning. A stack of dynamic RNN can sometimes facilitate this. What it learns can later be collapsed into a single RNN [3].
>>>>>> 
>>>>>> Juergen
>>>>>> 
>>>>>> http://www.idsia.ch/~juergen/whatsnew.html
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On Feb 7, 2014, at 12:54 AM, Juyang Weng <weng at cse.msu.edu> wrote:
>>>>>> 
>>>>>>> Juergen:
>>>>>>> 
>>>>>>> You wrote: a stack of recurrent NN.  But that is the wrong architecture as far as the brain is concerned.
>>>>>>> 
>>>>>>> Although my joint work with Narendra Ahuja and Thomas S. Huang at UIUC was probably the first
>>>>>>> learning network that used the Deep Learning idea for learning from cluttered scenes (Cresceptron, ICCV 1992 and IJCV 1997),
>>>>>>> I gave up this static deep learning idea later, after we considered Principle 1: Development.
>>>>>>> 
>>>>>>> The deep learning architecture is wrong for the brain.  It is too restricted, static in architecture, and cannot learn directly from cluttered scenes required by Principle 1.  The brain is not a cascade of recurrent NN.
>>>>>>> 
>>>>>>> I quote from Antonio Damasio’s "Descartes' Error", p. 93: "But intermediate communications occurs also via large subcortical nuclei such as those in the thalamus and basal ganglia, and via small nuclei such as those in the brain stem."
>>>>>>> 
>>>>>>> Of course, the cerebral pathways themselves are not a stack of recurrent NN either.
>>>>>>> 
>>>>>>> There are many fundamental reasons for that.  I give only one here, based on our DN brain model:  Looking at a human, the brain must dynamically attend to the tip of the nose, the entire nose, the face, or the entire human body on the fly.  For example, when the network attends to the nose, the entire human body becomes the background!  Without a brain network that has both shallow and deep connections (unlike your stack of recurrent NN), your network is only for recognizing a set of static patterns against a clean background.  This is still an overworked pattern recognition problem, not a vision problem.
>>>>>>> 
>>>>>>> -John
>>>>>>> 
>>>>>>> On 2/6/14 7:24 AM, Schmidhuber Juergen wrote:
>>>>>>>> Deep Learning in Artificial Neural Networks (NN) is about credit assignment across many subsequent computational stages, in deep or recurrent NN.
>>>>>>>> 
>>>>>>>> A popular Deep Learning NN is the Deep Belief Network (2006) [1,2].  A stack of feedforward NN (FNN) is pre-trained in unsupervised fashion. This can facilitate subsequent supervised learning.
>>>>>>>> 
>>>>>>>> Let me re-advertise a much older, very similar, but more general, working Deep Learner of 1991. It can deal with temporal sequences: the Neural Hierarchical Temporal Memory or Neural History Compressor [3]. A stack of recurrent NN (RNN) is pre-trained in unsupervised fashion. This can greatly facilitate subsequent supervised learning.
>>>>>>>> 
>>>>>>>> The RNN stack is more general in the sense that it uses sequence-processing RNN instead of FNN with unchanging inputs. In the early 1990s, the system was able to learn many previously unlearnable Deep Learning tasks, one of them requiring credit assignment across 1200 successive computational stages [4].
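>>>>>>>> 
>>>>>>>> As a toy, non-neural sketch of the history compression principle in Python (only an illustration - the 1991 system uses RNN predictors trained by gradient descent, not the table-based predictor below): a lower level tries to predict the next symbol and forwards only the symbols it fails to predict, so the level above sees a much shorter sequence.
>>>>>>>> 
>>>>>>>> from collections import defaultdict
>>>>>>>> 
>>>>>>>> def compress(sequence):
>>>>>>>>     pred = defaultdict(lambda: None)   # last-seen successor of each symbol
>>>>>>>>     unexpected = []
>>>>>>>>     prev = None
>>>>>>>>     for s in sequence:
>>>>>>>>         if prev is None or pred[prev] != s:
>>>>>>>>             unexpected.append(s)       # prediction failed -> forward upward
>>>>>>>>         if prev is not None:
>>>>>>>>             pred[prev] = s             # online update of the predictor
>>>>>>>>         prev = s
>>>>>>>>     return unexpected
>>>>>>>> 
>>>>>>>> print(compress(list("abcabcabcabxabc")))   # the level above sees only the surprises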
>>>>>>>> 
>>>>>>>> Related developments: In the 1990s there was a trend from partially unsupervised [3] to fully supervised recurrent Deep Learners [5]. In recent years, there has been a similar trend from partially unsupervised to fully supervised systems. For example, several recent competition-winning and benchmark record-setting systems use supervised LSTM RNN stacks [6-9].
>>>>>>>> 
>>>>>>>> 
>>>>>>>> References:
>>>>>>>> 
>>>>>>>> [1] G. E. Hinton, R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, Vol. 313. no. 5786, pp. 504 - 507, 2006. http://www.cs.toronto.edu/~hinton/science.pdf
>>>>>>>> 
>>>>>>>> [2] G. W. Cottrell. New Life for Neural Networks. Science, Vol. 313. no. 5786, pp. 454-455, 2006. http://www.academia.edu/155897/Cottrell_Garrison_W._2006_New_life_for_neural_networks
>>>>>>>> 
>>>>>>>> [3] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression, Neural Computation, 4(2):234-242, 1992. (Based on TR FKI-148-91, 1991.)  ftp://ftp.idsia.ch/pub/juergen/chunker.pdf  Overview: http://www.idsia.ch/~juergen/firstdeeplearner.html
>>>>>>>> 
>>>>>>>> [4] J. Schmidhuber. Habilitation thesis, TUM, 1993. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf . Includes an experiment with credit assignment across 1200 subsequent computational stages for a Neural Hierarchical Temporal Memory or History Compressor or RNN stack with unsupervised pre-training [2] (try Google Translate in your mother tongue): http://www.idsia.ch/~juergen/habilitation/node114.html
>>>>>>>> 
>>>>>>>> [5] S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 1997. Based on TR FKI-207-95, 1995.  ftp://ftp.idsia.ch/pub/juergen/lstm.pdf . Lots of follow-up work on LSTM under http://www.idsia.ch/~juergen/rnn.html
>>>>>>>> 
>>>>>>>> [6] S. Fernandez, A. Graves, J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proc. IJCAI'07, p. 774-779, Hyderabad, India, 2007.  ftp://ftp.idsia.ch/pub/juergen/IJCAI07sequence.pdf
>>>>>>>> 
>>>>>>>> [7] A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. NIPS'22, p 545-552, Vancouver, MIT Press, 2009.  http://www.idsia.ch/~juergen/nips2009.pdf
>>>>>>>> 
>>>>>>>> [8] 2009: First very deep (and recurrent) learner to win international competitions with secret test sets: deep LSTM RNN (1995-) won three connected handwriting contests at ICDAR 2009 (French, Arabic, Farsi), performing simultaneous segmentation and recognition.  http://www.idsia.ch/~juergen/handwriting.html
>>>>>>>> 
>>>>>>>> [9] A. Graves, A. Mohamed, G. E. Hinton. Speech Recognition with Deep Recurrent Neural Networks. ICASSP 2013, Vancouver, 2013.   http://www.cs.toronto.edu/~hinton/absps/RNN13.pdf
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Juergen Schmidhuber
>>>>>>>> http://www.idsia.ch/~juergen/whatsnew.html
>>>>>>> 
>>>>>>> -- 
>>>>>>> --
>>>>>>> Juyang (John) Weng, Professor
>>>>>>> Department of Computer Science and Engineering
>>>>>>> MSU Cognitive Science Program and MSU Neuroscience Program
>>>>>>> 428 S Shaw Ln Rm 3115
>>>>>>> Michigan State University
>>>>>>> East Lansing, MI 48824 USA
>>>>>>> Tel: 517-353-4388
>>>>>>> Fax: 517-432-1061
>>>>>>> Email: weng at cse.msu.edu
>>>>>>> URL: http://www.cse.msu.edu/~weng/
>>>>>>> ----------------------------------------------
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
> 
> [I am in Dijon, France on sabbatical this year. To call me, Skype works best (gwcottrell), or dial +33 788319271]
> 
> Gary Cottrell 858-534-6640 FAX: 858-534-7029
> 
> My schedule is here: http://tinyurl.com/b7gxpwo
> 
> Computer Science and Engineering 0404
> IF USING FED EX INCLUDE THE FOLLOWING LINE:      
> CSE Building, Room 4130
> University of California San Diego
> 9500 Gilman Drive # 0404
> La Jolla, Ca. 92093-0404
> 
> Things may come to those who wait, but only the things left by those who hustle. -- Abraham Lincoln
> 
> "I'll have a café mocha vodka valium latte to go, please" -Anonymous
> 
> "Of course, none of this will be easy. If it was, we would already know everything there was about how the brain works, and presumably my life would be simpler here. It could explain all kinds of things that go on in Washington." -Barack Obama
> 
> "Probably once or twice a week we are sitting at dinner and Richard says, 'The cortex is hopeless,' and I say, 'That's why I work on the worm.'" Dr. Bargmann said.
> 
> "A grapefruit is a lemon that saw an opportunity and took advantage of it." - note written on a door in Amsterdam on Lijnbaansgracht.
> 
> "Physical reality is great, but it has a lousy search function." -Matt Tong
> 
> "Only connect!" -E.M. Forster
> 
> "You always have to believe that tomorrow you might write the matlab program that solves everything - otherwise you never will." -Geoff Hinton
> 
> "There is nothing objective about objective functions" - Jay McClelland
> 
> "I am awaiting the day when people remember the fact that discovery does not work by deciding what you want and then discovering it."
> -David Mermin
> 
> Email: gary at ucsd.edu
> Home page: http://www-cse.ucsd.edu/~gary/
> 

 

 

Dr. James M. Bower Ph.D.

Professor of Computational Neurobiology

Barshop Institute for Longevity and Aging Studies.

15355 Lambda Drive

University of Texas Health Science Center 

San Antonio, Texas  78245

 

Phone:  210 382 0553

Email: bower at uthscsa.edu

Web: http://www.bower-lab.org

twitter: superid101

linkedin: Jim Bower

 


 

