analogies

Jim Bower jbower at smaug.cns.caltech.edu
Thu Jan 3 14:49:30 EST 1991


I would like to know when I said that one had to be an experimental
neurobiologist to do computational neurobiology.  It is really unbelievable 
how defensive the reaction has been to my comments.  Just
for the record, Christof Koch, Carver Mead, Nancy Kopell, Bard
Ermentrout, Wilfrid Rall, Jack Cowan, Idan Segev, and many other
NON-experimentalists have made important contributions to computational 
neurobiology.  They have also invested a tremendous amount
of time educating themselves about the detailed structure of the
nervous system on their own and through interactions with experimental 
neurobiologists.  And I don't mean listening to invited talks
at neural net conferences.  It is absolutely bizarre that claims of
scientific relevance and biological inspiration are made and accepted
by a field largely composed of people who know very little about
the nervous system, who think it can be ignored or regard it as simply
one implementation alternative, and who generally appear to have little
real interest in the subject.
 	Two other remarks.  First, the computer-and-oscilloscope analogy is
a terrible one.  The point that Josh correctly made at the beginning
of his comment is that there is a rather poor mapping between neural
network algorithms and digital computer architecture.  I think
that it can also be argued that a lot of the most interesting work in
neural networks is on the side of implementation, i.e. how one constructs 
hardware that reflects the algorithms of interest.  The brain
almost certainly has taken this association to an extreme, which is
probably closely related to its spectacular efficiency and power.
The form reflects the function.  An electrode in a computer yields a
mess because the computer is a relatively low-level computing device
that trades efficiency for generality.  The brain is a different
situation altogether.  For example, we increasingly suspect, based on
detailed computational modeling of brain circuits, that principal
computational features of a circuit are reflected at all its organizational 
levels.  That is, if a network oscillates at 40 Hz, that periodicity 
is seen at the network, single cell, and subcellular levels as
well as in the structure of the input and output.  That means that
sticking an electrode anywhere will reveal some aspect of what is
probably an important functional property of the network.  
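As a purely illustrative aside (not part of the original argument, and not a model of any real circuit), the toy Python sketch below makes the same point numerically: three made-up signals stand in for subcellular, single-cell, and network-level recordings, all sharing a 40 Hz rhythm, and a simple spectral peak-pick recovers that rhythm no matter which "electrode site" is analyzed.

# Illustrative toy example only: three synthetic signals standing in for
# subcellular, single-cell, and network-level recordings, each carrying the
# same 40 Hz periodicity with its own waveform and noise.
import numpy as np

fs = 1000.0                       # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)   # two seconds of "recording"
f0 = 40.0                         # shared network rhythm (Hz)
rng = np.random.default_rng(0)

signals = {
    "subcellular": 0.3 * np.sin(2 * np.pi * f0 * t + 0.5)
                   + 0.2 * rng.standard_normal(t.size),
    "single cell": (np.sin(2 * np.pi * f0 * t) > 0.9).astype(float)  # spike-like pulses
                   + 0.1 * rng.standard_normal(t.size),
    "network":     np.sin(2 * np.pi * f0 * t)
                   + 0.5 * rng.standard_normal(t.size),
}

# "Sticking the electrode anywhere": the dominant frequency is recovered
# from every level of description.
for name, x in signals.items():
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    print(f"{name:12s} peak at {freqs[spectrum.argmax()]:.1f} Hz")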
 	Second, the standard particle-in-a-box analogy mentioned by
Kastor is even worse.  Neurons cannot be considered particles in a
box.  This even goes against fundamental assumptions underlying
connectionism.  This is one of the most difficult problems, if not the
most difficult one, associated with changing levels of abstraction when
modeling the brain.  It also means that the best examples of success in
theoretical physics may not apply directly to understanding the nervous
system.  We will see.
 	Finally, with respect to the original AI / NN connectionist debate:
I just received an advertisement from the periodical "AI Expert"
that offers, with a subscription, "five disk-based versions of some
of the most important AI programs".  You guessed it, "Vector classifier, 
Adaline/Perceptron, Backpropagation, Outstar Network, and
Hopfield Network",  presented as "real world examples of working
AI systems".  I think that settles the issue, NNs etc has now become
part of AI marketing.  How much closer can you get.
 	
Jim Bower

