papers available
Daniel Crespin (UCV)
dcrespin at euler.ciens.ucv.ve
Tue Jun 6 06:52:25 EDT 2006
The preprints abstracted below may be of interest. To obtain the
preprints, use a WWW browser and go to
http://euler.ciens.ucv.ve/Professors/dcrespin/Pub/
[1] Neural Network Formalism: Neural networks are defined using only
elementary concepts from set theory, without the usual connectionist
graphs. The typical neural diagrams are derived from these definitions.
This approach provides mathematical techniques and insight for developing
the theory and applications of neural networks.
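As a rough illustration of the set-theoretic flavor (the preprint's
actual definitions are not reproduced here, and these names are my own),
a network can be described with nothing but maps, cartesian products and
composition, for example:

    from typing import Callable, List

    Vector = List[float]
    Unit = Callable[[Vector], float]    # a processing unit is just a map

    def make_layer(units: List[Unit]) -> Callable[[Vector], Vector]:
        # a layer is the cartesian product of its units: each unit reads
        # the layer input and contributes one output coordinate
        return lambda x: [u(x) for u in units]

    def make_network(layers) -> Callable[[Vector], Vector]:
        # a network is the composition of its layers
        def run(x: Vector) -> Vector:
            for f in layers:
                x = f(x)
            return x
        return run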
[2] Generalized Backpropagation: Global backpropagation formulas for
differentiable neural networks are considered from the viewpoint of
minimizing the quadratic error by the gradient method. The gradient of
(the quadratic error function of) a processing unit is expressed in
terms of the output error and the transposed derivative of the unit with
respect to the weight. The gradient of a layer is the product of the
gradients of its processing units, and the gradient of the network is
the product of the gradients of the layers. Backpropagation provides the
desired outputs, or targets, for the layers. The standard formulas for
semilinear networks are deduced as a special case.
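As a concrete illustration of this bookkeeping (the notation and the
logistic activation are my choices, not the preprint's), here is a
minimal numpy sketch for the semilinear special case y = sigma(W x) with
quadratic error E = (1/2)||out - target||^2. The transposed derivative
with respect to the weight appears as an outer product, and the
transposed derivative with respect to the input, W.T @ delta, hands the
output error back to the previous layer as its target error:

    import numpy as np

    def sigma(z):                # logistic activation (illustrative choice)
        return 1.0 / (1.0 + np.exp(-z))

    def sigma_prime(z):
        s = sigma(z)
        return s * (1.0 - s)

    def forward(weights, x):
        # record pre-activations and activations of every layer
        zs, acts = [], [x]
        for W in weights:
            zs.append(W @ acts[-1])
            acts.append(sigma(zs[-1]))
        return zs, acts

    def gradients(weights, x, target):
        # weight gradients of E = 0.5*||out - target||^2 by backpropagation
        zs, acts = forward(weights, x)
        delta = acts[-1] - target                # output error
        grads = [None] * len(weights)
        for k in reversed(range(len(weights))):
            delta = delta * sigma_prime(zs[k])   # through the activation
            grads[k] = np.outer(delta, acts[k])  # transposed deriv. w.r.t. W
            delta = weights[k].T @ delta         # error passed to layer k-1
        return grads

A gradient-descent step is then W_k <- W_k - eta * grads[k] for every
layer k.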
[3] Geometry of Perceptrons: It is proved that perceptron networks are
products of characteristic maps of polyhedra. This gives insight into the
geometric structure of these networks. The result also holds for more
general (algebraic, etc.) perceptron networks, and suggests a new technique
to solve pattern recognition problems.
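To make the statement concrete: a single threshold unit computes the
characteristic function of a half-space, and a layer of such units
followed by an AND unit computes the characteristic function of the
polyhedron obtained by intersecting them. A small sketch (the unit
square is my example, not the preprint's):

    import numpy as np

    def halfspace(w, b):
        # characteristic function of {x : w.x >= b}, computed by a
        # single threshold perceptron unit
        return lambda x: float(np.dot(w, x) >= b)

    def polyhedron(halfspaces):
        # intersection of half-spaces: a layer of threshold units
        # followed by one AND unit (threshold = number of units)
        n = len(halfspaces)
        return lambda x: float(sum(h(x) for h in halfspaces) >= n)

    # the unit square [0,1]^2 as an intersection of four half-planes
    square = polyhedron([
        halfspace(np.array([ 1.0,  0.0]),  0.0),   # x >= 0
        halfspace(np.array([-1.0,  0.0]), -1.0),   # x <= 1
        halfspace(np.array([ 0.0,  1.0]),  0.0),   # y >= 0
        halfspace(np.array([ 0.0, -1.0]), -1.0),   # y <= 1
    ])
    print(square(np.array([0.5, 0.5])))   # 1.0: inside
    print(square(np.array([2.0, 0.5])))   # 0.0: outside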
[4] Neural Polyhedra: Gives explicit formulas that realize any
polyhedron as a three-layer perceptron network. These make it possible
to compute directly, without any training, the architecture and weights
of a network that executes a given pattern recognition task.
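The preprint's explicit formulas are not reproduced here, but the
standard half-spaces -> ANDs -> OR construction gives the flavor: layer
one tests each bounding half-space, layer two ANDs the faces of each
convex piece, and layer three ORs the pieces, so arbitrary (possibly
non-convex) polyhedra are realized with three layers and no training.
A hedged sketch:

    import numpy as np

    def three_layer_polyhedron(pieces):
        # pieces: one list of (w, b) half-spaces, meaning w.x >= b,
        # per convex piece of the polyhedron
        def net(x):
            ands = []
            for piece in pieces:
                # layer 1: one threshold unit per bounding half-space
                h = [float(np.dot(w, x) >= b) for (w, b) in piece]
                # layer 2: an AND unit per piece (threshold = face count)
                ands.append(float(sum(h) >= len(piece)))
            # layer 3: a single OR unit over the convex pieces
            return float(sum(ands) >= 1.0)
        return net

    # an L-shaped (non-convex) region as the union of two rectangles
    L_shape = three_layer_polyhedron([
        [(np.array([ 1.0, 0.0]),  0.0), (np.array([-1.0,  0.0]), -1.0),
         (np.array([ 0.0, 1.0]),  0.0), (np.array([ 0.0, -1.0]), -2.0)],
        [(np.array([ 1.0, 0.0]),  0.0), (np.array([-1.0,  0.0]), -2.0),
         (np.array([ 0.0, 1.0]),  0.0), (np.array([ 0.0, -1.0]), -1.0)],
    ])
    print(L_shape(np.array([0.5, 1.5])))   # 1.0: in the tall rectangle
    print(L_shape(np.array([1.5, 1.5])))   # 0.0: in neither rectangle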
[5] Pattern Recognition with Untrained Perceptrons: Gives algorithms to
construct polyhedra directly from given pattern recognition data. The
perceptron network associated with these polyhedra (see preprint [4]
above) solves the proposed recognition problem. No network training is
necessary.
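The abstract does not state the algorithms, so the following is only a
toy stand-in for the idea of going straight from labeled data to a
polyhedron: enclose each positive example in an axis-aligned box (the
margin is an arbitrary choice of mine) and hand the resulting convex
pieces to the three_layer_polyhedron sketch above. The resulting network
recognizes the positives with no training loop:

    import numpy as np

    def boxes_from_positives(positives, margin=0.5):
        # toy rule, not the preprint's algorithm: one axis-aligned box
        # (2 * dim half-spaces) around each positive example
        pieces = []
        for p in positives:
            piece = []
            for i in range(len(p)):
                e = np.zeros(len(p)); e[i] = 1.0
                piece.append((e, p[i] - margin))       # x_i >= p_i - margin
                piece.append((-e, -(p[i] + margin)))   # x_i <= p_i + margin
            pieces.append(piece)
        return pieces

    positives = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
    recognize = three_layer_polyhedron(boxes_from_positives(positives))
    print(recognize(np.array([0.2, -0.3])))   # 1.0: inside the first box
    print(recognize(np.array([1.5, 1.5])))    # 0.0: outside both boxes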
Daniel Crespin