AVAILABLE IN NEUROPROSE: "Uniqueness of weights for neural networks"

Eduardo Sontag sontag at control.rutgers.edu
Tue Mar 9 15:21:36 EST 1993


TITLE:   "Uniqueness of weights for neural networks"
AUTHORS:  Francesca Albertini, Eduardo D. Sontag, and Vincent Maillot

FILE: sontag.uniqueness.ps.Z

				ABSTRACT

This short paper surveys various results dealing with the weight-uniqueness
question for neural nets.  In essence, these results show that, under various
technical assumptions, neuron exchanges and sign flips are the only
transformations that (generically) leave the input/output behavior invariant.
An alternative proof is given of Sussmann's theorem (Neural Networks, 1992)
for single-hidden-layer nets, and his result (for the standard logistic, or
equivalently tanh(x)) is generalized to a wide class of activations. 
Also, several theorems for recurrent nets are discussed.
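
As a concrete illustration of these two symmetries, the following small
numerical sketch (ours, not taken from the paper) checks that exchanging two
hidden neurons, or flipping the signs of one neuron's incoming weights, bias,
and outgoing weight, leaves a single-hidden-layer tanh net's output unchanged;
the variable names are just for the example:

  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.normal(size=(3, 2))   # input-to-hidden weights (3 hidden units)
  b = rng.normal(size=3)        # hidden biases
  c = rng.normal(size=3)        # hidden-to-output weights

  def net(x, A, b, c):
      # y(x) = sum_i c_i * tanh(a_i . x + b_i)
      return np.tanh(A @ x + b) @ c

  x = rng.normal(size=2)

  perm = [1, 0, 2]                               # exchange neurons 0 and 1
  A2, b2, c2 = A.copy(), b.copy(), c.copy()
  A2[2], b2[2], c2[2] = -A2[2], -b2[2], -c2[2]   # sign-flip neuron 2

  print(net(x, A, b, c))
  print(net(x, A[perm], b[perm], c[perm]))   # same value (neuron exchange)
  print(net(x, A2, b2, c2))                  # same value (tanh is odd)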

(NOTE: The uniqueness theorem extends, with a simple proof, to
single-hidden-layer nets which employ the Elliott/Georgiou/Koutsougeras/...
activation:

               u
     s(u) = -------
            1 + |u|

This is not discussed in, and is not an immediate consequence of, the results
in the paper, but is an easy exercise.)
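
A quick way to see why the sign-flip part carries over is that this
activation, like tanh, is odd: s(-u) = -u/(1+|-u|) = -s(u).  A tiny check
(again ours, not from the paper):

  import numpy as np

  def s(u):
      return u / (1.0 + np.abs(u))

  u = np.linspace(-5.0, 5.0, 101)
  # Prints True: s is odd, so simultaneously flipping the signs of a neuron's
  # input weights, bias, and output weight leaves the net's output unchanged.
  print(np.allclose(s(-u), -s(u)))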

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name : anonymous
Password: <your id>
ftp> cd pub/neuroprose
ftp> binary
ftp> get sontag.uniqueness.ps.Z
ftp> quit
unix> uncompress sontag.uniqueness.ps.Z
unix> lpr -Pps sontag.uniqueness.ps (or however you print PostScript)

(With many thanks to Jordan Pollack for providing this valuable service!)

Eduardo D. Sontag
Department of Mathematics
Rutgers Center for Systems and Control (SYCON)
Rutgers University
New Brunswick, NJ 08903, USA


