Mathematical Tractability of Neural Nets

Patrick Thomas thomasp at lan.informatik.tu-muenchen.dbp.de
Fri Feb 23 14:52:00 EST 1990


Date: Fri, 23 Feb 90 19:52:31 -0100



 All neural nets which prove to be mathematically tractable (convergence, etc.)
seem to be either too trivial or too biologically remote to account for
cerebral phenomena. It may be nice to prove the approximation capabilities of
backprop or some kind of convergent behaviour exhibited by the ART networks.
But (except perhaps for the ART-3 architecture?) they don't really deal with
the complex interactions at the synaptic level already found by neurophysiologists,
and the ART networks in particular rely on a similarity measure which may be
fundamentally inappropriate.
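
 For concreteness, the basic similarity measure in the ART family is the
ART-1 vigilance test: a stored category is accepted only if its template
covers a sufficient fraction of the (binary) input. A minimal sketch in
Python (the function name and the 0.8 default are illustrative, not taken
from any particular ART implementation):

    import numpy as np

    def art1_match(input_pattern, template, vigilance=0.8):
        # input_pattern, template: binary arrays of equal length.
        # Match score: fraction of input features covered by the template.
        overlap = np.logical_and(input_pattern, template).sum()
        match = overlap / max(input_pattern.sum(), 1)
        # The category is accepted only if the match clears the vigilance level.
        return match >= vigilance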

 So what's the alternative? Define and refine some rules concerning synaptic
interactions (of local AND global kind), think about some rules governing
signal integration by neuronal units, and then let it run. What do you get
from this type of self-organizing net? One thing for sure: mathematical
intractability. This is the Edelman way (among others).
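
 To make that recipe concrete, here is a toy sketch of "define local and
global rules and let it run" (the network size, the tanh integration, the
Hebbian term and the global modulation are all illustrative choices, not
Edelman's actual model):

    import numpy as np

    rng = np.random.default_rng(0)
    n_units = 50
    W = rng.normal(scale=0.1, size=(n_units, n_units))   # synaptic weights
    np.fill_diagonal(W, 0.0)                              # no self-connections

    def step(activity, W, lr=0.01, decay=0.001):
        # Signal integration by each unit (simple saturating nonlinearity).
        new_activity = np.tanh(W @ activity)
        # Local rule: correlation of pre- and post-synaptic activity.
        local = np.outer(new_activity, activity)
        # Global rule: a scalar signal broadcast to all synapses.
        global_mod = new_activity.mean()
        W += lr * global_mod * local - decay * W
        np.fill_diagonal(W, 0.0)
        return new_activity, W

    activity = rng.random(n_units)
    for _ in range(1000):        # "let it run"
        activity, W = step(activity, W)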

 Which way should someone follow who is interested in brain phenomena, rather
than in neural nets from an engineering point of view? Is it true that
all mathematically tractable neural net approaches are inadequate and that
an empirical/experimental stance should be taken?

I would be grateful for comments on this.

Patrick

P.S.: The Bonhoeffer et al. results showing the non-locality of synaptic
      amplification, at least on the pre-synaptic side, seem to fit
      nicely with Edelman's "synapses-as-populations" approach.
