Details, who needs them??

Jim Bower jbower at cns.caltech.edu
Wed Nov 25 21:59:54 EST 1992


>I grow tired of defending the validity of models to biologists
> who do not seem satisfied with any model that does not capture 
>every last nuance of complexity or that does not explain every last 
>experimental finding. 


In response to this and several similar statements, I have to say
that in my now many years of building biologically motivated neural
models, and attending neural modeling meetings of all sorts, I have
never yet met such a neurobiologist.  I have, of course, met many
who object to the kind of "brain-hype" that originally prompted my
remarks.

However, there can be no question that the level of detail
necessary to account for brain function, or to do something really
interesting with neural networks, is a subject of active debate.
With respect to neural networks, I would point out that this
question has been around from the beginning of neural network
research.  Further, not that long ago, many believed, and argued
loudly, that simple networks could do everything.  Several of us
said that if that were true, the brain would be simple; because it
is not, it is likely that artificial networks will have to get more
complex to do anything real, or even very interesting.
As we head to the NIPS meeting, it is fairly clear that the simple 
neural networks have not done very well evolutionarily.  
Further, the derivatives are clear.

With respect to the detail necessary to understand the brain, this
is also an area of active debate in the young field of computational
neuroscience.  However, from our own work and that of others, it may
be time to state that the details appear to matter a great deal.
For example, through the interaction of realistic models and
experimental work, we have recently stumbled across a regulatory
mechanism in the olfactory cerebral cortex that may be involved in
switching the network from a learning to a recall state.  If
correct, this switching mechanism serves to reduce the possibility
of new memories being corrupted by old memories.

While it would be inappropriate to describe the 
results in this forum in detail, it turns out that the mechanism 
bears a resemblance to an approach used by Kohonen to avoid the 
same problem.  Further, when the more elaborate details of the 
biologically derived mechanism are placed in a Kohonen associative 
memory, the performance of the original Kohonen net is improved.  In 
this case, however, the connection to Kohonen's work was made only 
after we performed the biological experiments.  This is not because 
we did not know Kohonen's work, but because his mechanism 
was so unbiological that it would have made little sense to 
look for it specifically in the network.  The biological modeling now 
done, we can see that Kohonen's approach appears as a minimal 
implementation of a much more sophisticated, complicated, and
apparently more effective memory regulation mechanism.  
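Bower gives no equations here, so the following is purely an
illustrative sketch of the general idea of a learning/recall switch,
written around a toy correlation-matrix (Hopfield/Kohonen-style)
associative memory.  The specific choices below - gating recurrent
feedback off during learning, an outer-product Hebbian rule, binary
patterns - are my assumptions for illustration, not the biological
mechanism described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of units (arbitrary for this sketch)

def learn(W, x, lr=1.0):
    # "Learning state": recurrent feedback is gated off, so the
    # Hebbian update stores the afferent pattern x itself rather
    # than a blend of x and previously stored memories.
    W += lr * np.outer(x, x)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, cue, steps=5):
    # "Recall state": plasticity is gated off and recurrent
    # feedback is on; iterate to complete a noisy or partial cue.
    y = cue.copy()
    for _ in range(steps):
        y = np.sign(W @ y)
        y[y == 0] = 1.0
    return y

# Store three random +/-1 patterns while in the learning state.
patterns = rng.choice([-1.0, 1.0], size=(3, N))
W = np.zeros((N, N))
for p in patterns:
    W = learn(W, p)

# Switch to the recall state and cue with 10 of 64 bits flipped.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1.0
out = recall(W, cue)
overlap = float(out @ patterns[0]) / N  # 1.0 means perfect recall
print(overlap)
```

The point of the sketch is the mode switch itself: if the Hebbian
update were left on during recall, or the recurrent feedback left on
during storage, previously stored patterns would bleed into new ones -
the corruption problem the switching mechanism is said to avoid.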

While it is not the common practice on this network or in this field 
to point out one's own shortcomings, it turns out that we did not 
know, prior to doing the realistic modeling, which biological details 
might matter the most.  Once they were discovered, it was fairly 
trivial to modify an abstract model to include them.  The point here 
is that only through paying close attention to the biological details 
was this mechanism discovered.  From this and a few other examples 
in the new and growing field of computational neuroscience, it may 
very well be that we will actually have to pay very close attention 
to the structure of the nervous system if we are going to learn 
anything new about how machines like the brain work.  I 
acknowledge that this may or may not eventually be relevant to 
neural networks and connectionism, as I have yet to be convinced that 
these are particularly good models for whatever type of 
computational object the brain is.  However, if there is some 
connection, it might be necessary to have those interested in 
advancing the state of artificial networks seek more information 
about neurobiology than they can obtain at their favorite annual 
neural network meeting, from a basic neurobiology textbook, or from 
some "overview" published by some certified leader of the field.  
Who knows, it might even be necessary to learn how to use an 
electrode.  

Jim Bower



For those interested in a very general overview of the work 
described above, I have placed the following review article in 
neuroprose:

"The Modulation of Learning State in a Biological Associative 
Memory:  An in vitro, in vivo, and in computo  Study of Object 
Recognition in Mammalian Olfactory Cortex."  James M. Bower

To retrieve from neuroprose:

unix> ftp cheops.cis.ohio-state.edu
Name (cheops.cis.ohio-state.edu:becker): anonymous
Password: (use your email address)
ftp> cd pub/neuroprose
ftp> get bower.ACH.asci.Z 
200 PORT command successful.
150 Opening BINARY mode data connection for bower.ACH.asci.Z 
226 Transfer complete.
###### bytes received in ## seconds (## Kbytes/s)
ftp> quit



