Saratchandran paper

Thomas H. Hildebrandt thildebr at aragorn.csee.lehigh.edu
Tue Jun 23 18:25:16 EDT 1992


A correspondence paper by P. Saratchandran, published last July in the
IEEE Transactions on Neural Networks, claims to provide an algorithm
for training a multilayer feedforward neural network by dynamic
programming, in which the weights of each layer are adjusted only
once, starting with the output layer and proceeding to the layer
nearest the inputs.[1]  This claim is not only counterintuitive, it is
false.
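
For concreteness, here is the backward sweep as I understand it, in a
schematic Python sketch of my own.  The activation function, the
per-layer criterion $I_k$, and its one-shot minimizer minimize_Ik are
placeholders of mine, not the paper's definitions:

    import numpy as np

    def claimed_single_pass(weights, x, target, minimize_Ik):
        # One backward sweep: weights[k] is adjusted exactly once,
        # using the already-fixed "ideal" weights of every succeeding
        # layer.
        n = len(weights)
        for k in reversed(range(n)):
            # Layer k's input is computed with the *unadjusted*
            # upstream weights -- the dependence that the paper's
            # error definition ignores.
            y = x
            for W in weights[:k]:
                y = np.tanh(W @ y)
            # hypothetical one-shot minimization of the criterion I_k
            weights[k] = minimize_Ik(k, y, weights[k],
                                     weights[k+1:], target)
        return weights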

The author hides this fact from himself and from the reader by
defining the error to be minimized in any layer of the network as
being independent of the weights in the preceding stages of the
network.  For example, the error $I_k$ at the output of the $k$th
layer is given as a function of the input $y(k-1)$ to that layer, the
weights $w(k-1)$ of that layer, and the ideal weights $w^*(k)$,
$w^*(k+1)$, ..., $w^*(n-1)$ of the succeeding layers.  This makes the
error to be minimized independent of the weights $w(1)$, $w(2)$, ...,
$w(k-2)$ on the first $k-1$ layers!
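
To make the dependence explicit (in my own notation, following the
paper's indexing), write the layer-$k$ map as

    $y(k) = f_k(y(k-1); w(k-1))$, with $y(1) = x$,

so that the true error at the output of the network is the nested
composition

    $E = L(f_n(f_{n-1}( ... f_2(x; w(1)) ... ; w(n-2)); w(n-1)), t)$,

which depends on every weight matrix in the network.  In particular,
$y(k-1)$ is not a free variable but a function of $w(1)$, ...,
$w(k-2)$; any criterion that treats it as fixed has simply defined
the upstream dependence away.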

I trust that the fallacy is by now apparent.  The absence of
experimental data from the paper strengthens my conviction that the
algorithm will not work in practice, save where the function of all
but the last layer is trivial, i.e., where the output of the network
is {\bf truly independent} of the weights on the $n-2$ hidden layers.
(The input layer has no weights.)
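
For the skeptical reader, a toy numerical check (my own construction,
not the paper's algorithm) makes the interdependence concrete: choose
the output weights of a two-layer network optimally for the initial
hidden weights, then adjust the hidden weights once, and the
previously "ideal" output weights are no longer optimal.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((4, 200))       # inputs: 4 features, 200 samples
    T = rng.standard_normal((2, 200))       # targets
    W1 = 0.5 * rng.standard_normal((3, 4))  # hidden layer, 3 units

    def err(W1, W2):
        return np.mean((W2 @ np.tanh(W1 @ X) - T) ** 2)

    # Step 1 (output layer, adjusted once): least-squares optimal W2
    # for the hidden activations produced by the *initial* W1.
    H = np.tanh(W1 @ X)
    W2 = T @ np.linalg.pinv(H)

    # Step 2 (hidden layer, adjusted once): gradient steps on W1
    # alone, holding W2 fixed.
    for _ in range(200):
        H = np.tanh(W1 @ X)
        R = W2 @ H - T                      # output residual
        G = ((W2.T @ R) * (1.0 - H ** 2)) @ X.T / X.shape[1]
        W1 -= 0.05 * G

    # W2 was optimal for the old W1.  Re-solving it for the new W1
    # lowers the error further, so the layers cannot be decoupled.
    W2_new = T @ np.linalg.pinv(np.tanh(W1 @ X))
    print(err(W1, W2), err(W1, W2_new))

Re-solving the output weights after the hidden layer has moved cannot
raise the least-squares error and generically lowers it, which is
exactly the interdependence the single backward sweep ignores.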

				Thomas H. Hildebrandt
				Visiting Researcher
				EE & CS Department
				Lehigh University


[1] Saratchandran, P., "Dynamic Programming Approach for Optimal
Weight Selection in Multilayer Neural Networks," IEEE Transactions on
Neural Networks, vol. 2, no. 4, pp. 465-467, July 1991.


