What have neural networks achieved?

Daniel Crespin (UCV) dcrespin at euler.ciens.ucv.ve
Sat Aug 29 07:25:25 EDT 1998


	About "What have neural networks achieved?", here is a
condensed personal viewpoint, particularly about forward pass
perceptron neural networks. As you will see below, I expect this
e-mail to motivate not only thoughts but also certain concrete action.

	In order to attain perspective, ask the similar question:
"What have computers achieved?" and compare the answers with those
given to the previous question.

	First came the birth of the perceptron. An elegant model of
nervous systems, it attracted a great deal of attention. Just after
Hitler, the Holocaust and Hiroshima, the possibility of an in-depth
understanding of the human brain and behaviour could not pass
unnoticed.

	But a persuasive book *against* perceptrons was written, and
for some time they were left outside mainstream science.

	Then, backpropagation was created. A learning algorithm, a
paradigm, a source of projects. The field of neural networks was
(re)born. In the last analysis, backpropagation is just a special
mixture of the gradient method (GM) and the chain rule, inheriting all
the difficulties and shortcomings of the former.
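	A minimal sketch of that mixture (my own toy illustration, not
NEUROGON or any particular package): on a one-hidden-unit perceptron,
backpropagation is exactly the chain rule producing the gradient,
followed by a gradient-method step downhill. All names and starting
values here are invented for the example.

```python
import math

# Toy network: x -> h = sigmoid(w1*x + b1) -> y_hat = w2*h + b2

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(params, data):
    """Total squared error E = sum (y_hat - y)^2 / 2 over the data."""
    w1, b1, w2, b2 = params
    return sum((w2 * sigmoid(w1 * x + b1) + b2 - y) ** 2
               for x, y in data) / 2.0

def train(data, params, lr=0.5, epochs=2000):
    w1, b1, w2, b2 = params
    for _ in range(epochs):
        for x, y in data:
            # Forward pass.
            h = sigmoid(w1 * x + b1)
            y_hat = w2 * h + b2
            # Backward pass: chain rule on E = (y_hat - y)^2 / 2.
            err = y_hat - y                  # dE/dy_hat
            dw2, db2 = err * h, err
            dh = err * w2
            dz = dh * h * (1.0 - h)          # sigmoid'(z) = h*(1-h)
            dw1, db1 = dz * x, dz
            # Gradient method: one step downhill.
            w1 -= lr * dw1; b1 -= lr * db1
            w2 -= lr * dw2; b2 -= lr * db2
    return (w1, b1, w2, b2)

data = [(0.0, 0.0), (1.0, 1.0)]      # toy target: y = x at two points
start = (0.5, 0.0, 1.0, 0.0)         # fixed (not random) starting weights
fitted = train(data, start)
```

Note that the sketch inherits exactly the shortcomings described
above: the architecture, the starting weights and the learning rate
all had to be fixed in advance, by guesswork.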

	The classical picture of GM is this: high-dimensional
landscapes with hills, saddles and valleys; starting at a random point
and moving downhill towards a local minimum that one does not know to
be a global one. Or perhaps wandering away towards infinity. Or
inadvertently jumping over the sought-after well. And then, to apply
backpropagation, the network architecture has to be defined in
advance, a task for which no efficient method has been
available. Hence the random starting weights, and the random topology,
or the "educated guess", or just the "guess". This means that many
gaps are left to be filled, which may be good or bad, depending on
projects and levels.
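	All three failure modes show up already in one dimension. A
hypothetical illustration (the landscape f(x) = x^4 - 3x^2 + x is my
own invention, chosen because it has one deep well and one shallow
one):

```python
# Double-well landscape: a shallow local minimum near x = 1.13 and a
# deeper global minimum near x = -1.30.
def f(x):
    return x**4 - 3 * x**2 + x

def df(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=1000):
    for _ in range(steps):
        x = x - lr * df(x)
        if abs(x) > 1e6:          # wandering away towards infinity
            return None
    return x

local = gradient_descent(1.0)     # slides into the nearer, shallow well
best = gradient_descent(-1.0)     # a luckier start finds the deeper well
diverged = gradient_descent(2.0, lr=0.5)  # step too large: overshoots

# local    is about  1.13, with f(local) about -1.07  (local minimum)
# best     is about -1.30, with f(best)  about -3.51  (global minimum)
# diverged is None
```

Which answer you get depends entirely on the random starting point and
the step size, which is precisely the complaint against GM made above.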

	Number crunching power has been a popular remedy, but the task
is rather complex and results are still not satisfactory. This is,
with considerable simplification, a possible sketch of the neuroscape
to date.

	The rather limited (as compared with computers in general)
lists of successes previously forwarded as answers to the Subject of
this e-mail debate give a rather good picture of what NNs have
achieved.

	Imagine now a new, powerful insight into (forward pass
perceptron) neural networks. A whole new way to interpret and look at
the popular diagrams of dots, arrows and weights, one that gives you a
useful and substantial picture of what a neural network is, what it
does, and what you can expect from it. As soon as data are gathered,
your program creates a network and there you go. No more architecture
or weight guessing. No more tedious backpropagation. No more thousands
of "presentations".

	But wait. Why waste your time with hype? Not only the theory
but also the software itself is readily available. Go to one of the
following URLs:

		http://euler.ciens.ucv.ve/~dcrespin/Pub
or
		http://150.185.69.150/~dcrespin/Pub

	Go there and download NEUROGON software. This is the action I
expect to motivate. It is free for academic purposes and for any other
non-profit use.

	The available version of NEUROGON can be greatly improved, but
even this rather limited version, once it is tested and used by
workers in the field, could give rise to a much larger list of success
stories about neural networks.

	Regards

	Daniel Crespin
