Connectionist Learning - Some New Ideas

Jonathan_Stein@comverse.com Jonathan_Stein at comverse.com
Fri May 17 17:42:44 EDT 1996


>
>I agree with this general idea, although I'm not sure that "computational
>time intractability" is necessarily the principal reason. There are a lot
>of good reasons for redundancy, overlap, and space "suboptimality", not the
>least of which is the marvellous ability at recovery that the brain
>manifests after both small injuries and larger ones that give pause even to
>experienced neurologists.
>

One needn't draw upon injuries to prove the point. One loses about 100,000
cortical neurons a day (about one percent of the original number every three
years) under normal conditions, yet this loss apparently does not impair
brain function. This is often cited as the strongest argument for
distributed processing in the brain. Compare this robustness with
conventional computers, where disconnecting a single conductor causes
total system failure with high probability.
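As a back-of-envelope check of those figures (the ~10^10 cortical neuron
count is an assumed round number, not from the post):

```python
# Sanity check: 100,000 neurons/day against ~1e10 cortical neurons
# (the total count is an assumed order of magnitude for illustration).
loss_per_day = 100_000
cortical_neurons = 1e10                      # assumed round figure
three_year_loss = loss_per_day * 365 * 3     # ~1.1e8 neurons
pct = 100 * three_year_loss / cortical_neurons
print(f"{three_year_loss:.2e} neurons lost in 3 years, {pct:.1f}% of total")
```

With those assumptions the loss works out to roughly 1.1 x 10^8 neurons,
i.e. about one percent in three years, consistent with the figure quoted.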

Although this robustness was certainly acknowledged by the pioneers of
artificial neural network techniques, very few networks designed and
trained by present techniques come anywhere near it. Studies carried out
on the Hopfield model of associative memory DO show graceful degradation
of memory capacity with synapse dilution under certain conditions (see
e.g. D.J. Amit's book "Attractor Neural Networks"). Synapse pruning has
been applied to trained feedforward networks (e.g. LeCun's "Optimal
Brain Damage"), but it requires retraining of the network.
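The graceful degradation under synapse dilution can be demonstrated with
a small numerical sketch (not from the original post; network size,
pattern count, and dilution levels are arbitrary illustrative choices):

```python
# Minimal Hopfield associative-memory sketch: store random binary
# patterns with a Hebbian rule, then randomly delete ("dilute") a
# fraction of the synapses and measure how well recall survives.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                                # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix, zero self-connections
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(weights, pattern, steps=20):
    """Iterate synchronous sign updates from a stored pattern."""
    s = pattern.astype(float).copy()
    for _ in range(steps):
        s = np.sign(weights @ s)
        s[s == 0] = 1.0
    return s

for dilution in (0.0, 0.25, 0.5, 0.75):
    keep = rng.random((N, N)) >= dilution     # delete each synapse w.p. dilution
    Wd = W * keep
    overlaps = [abs(recall(Wd, p) @ p) / N for p in patterns]
    print(f"dilution {dilution:.2f}: mean recall overlap {np.mean(overlaps):.2f}")
```

At this low memory load the mean overlap between the recalled state and
the stored pattern stays close to 1 even with a sizeable fraction of
synapses removed, falling off gradually rather than catastrophically --
the qualitative behaviour the dilution studies report.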

JS

