Connectionist Learning - Some New Ideas

Kevin Cherkauer cherkaue at cs.wisc.edu
Thu May 16 15:26:37 EDT 1996


In a recent thought-provoking posting to the connectionist list, Asim Roy
<ATAXR at asuvm.inre.asu.edu> said:

>We have recently published a set of principles for learning in neural
>networks/connectionist models that is different from classical
>connectionist learning (Neural Networks, Vol. 8, No. 2; IEEE
>Transactions on Neural Networks, to appear; 

...

>E.      Generalization in Learning: The method must be able to
>generalize reasonably well so that only a small amount of network
>resources is used. That is, it must try to design the smallest possible
>net, although it might not be able to do so every time. This must be
>an explicit part of the algorithm. This property is based on the
>notion that the brain could not be wasteful of its limited resources,
>so it must be trying to design the smallest possible net for every
>task.


I disagree with this point. According to Hertz, Krogh, and Palmer (1991, p. 2),
the human brain contains about 10^11 neurons. (They also state on p. 3 that
"the axon of a typical neuron makes a few thousand synapses with other
neurons," so we're looking at on the order of 10^14 "connections" in the
brain.) Note that a period of 100 years contains only about 3x10^9 seconds.
Thus, if you lived 100 years and learned continuously at a constant rate every
second of your life, your brain would be at liberty to "use up" the capacity of
about 30 neurons (and 30,000 connections) per second. I would guess this is a
very conservative bound, because most of us probably spend quite a bit of time
where we aren't learning at such a furious rate. But even using this
conservative bound, I calculate that I'm allowed to use up about 2.7x10^6
neurons (and 2.7x10^9 connections) today.

I'll try not to spend them all in one place. :-)
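
As a quick sanity check of the arithmetic above, here is a small Python
sketch (the neuron and synapse counts are just the rough figures quoted from
Hertz et al., not precise measurements):

  # Back-of-envelope check of the neuron "budget" computed above.
  # Rough figures from Hertz, Krogh & Palmer (1991).
  NEURONS = 1e11                          # ~10^11 neurons in the brain
  CONNECTIONS = 1e14                      # ~10^3 synapses/neuron -> ~10^14
  SECONDS = 100 * 365.25 * 24 * 3600      # seconds in 100 years, ~3x10^9
  DAYS = 100 * 365.25                     # days in 100 years

  print("neurons/second:     %6.0f" % (NEURONS / SECONDS))      # ~30
  print("connections/second: %6.0f" % (CONNECTIONS / SECONDS))  # ~30,000
  print("neurons/day:        %.1e" % (NEURONS / DAYS))          # ~2.7e6
  print("connections/day:    %.1e" % (CONNECTIONS / DAYS))      # ~2.7e9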

Dr. Roy's suggestion that the brain must try "to design the smallest possible
net for every task" because "the brain could not be wasteful of its limited
resources" seems unlikely to me. The brain appears to have a rather abundant
supply of neurons. On the other hand, finding optimal solutions to many
interesting "real-world" problems is often computationally very hard. I am not
a complexity theorist, but I will venture to suggest that constraining neural
systems to be optimal or near-optimal in their space usage is probably both
impossible to realize and, in fact, unnecessary.

Wild speculation: the brain may have so many neurons precisely so that it can
afford to be suboptimal in its use of storage in exchange for avoiding
intractable computation times.


References

  Hertz, J., Krogh, A., & Palmer, R. G. 1991. Introduction to the Theory of
    Neural Computation. Redwood City, CA: Addison-Wesley.

  Roy, A., Govil, S., & Miranda, R. 1995. A Neural Network Learning Theory
    and a Polynomial Time RBF Algorithm. IEEE Transactions on Neural
    Networks, to appear.

  Roy, A., Govil, S., & Miranda, R. 1995. An Algorithm to Generate Radial
    Basis Function (RBF)-like Nets for Classification Problems. Neural
    Networks, Vol. 8, No. 2, pp. 179-202.


===============================================================================

Kevin Cherkauer
cherkauer at cs.wisc.edu

