Input/Output Data Conversion in BackProp.

Ali Minai aminai at thor.ece.uc.edu
Tue Mar 29 12:12:49 EST 1994


If you are getting good results without rescaling the input, you could
use linear output neurons to give you a corresponding dynamic range on
the output side. However, a more interesting issue might be to explain
the difference (if any) in prediction quality between the rescaled and
unscaled cases. Is it because the data has a strange distribution? For
example, if very small differences in the real data can lead to
significantly different consequences, rescaling might be losing important
information. Or you might just need to use a larger learning rate to make
up for the smaller gradient magnitudes in the rescaled case.
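For concreteness, here is a minimal sketch (the function names are my own, not
anything standard) of min-max rescaling to [0,1], together with the inverse
map that a linear output neuron's activation would be passed through to
recover the original dynamic range:

```python
import numpy as np

# Hypothetical min-max rescaling; lo/hi are taken from the training data
# and reused to map linear network outputs back to the original range.
def fit_scaler(x):
    return x.min(), x.max()

def rescale(x, lo, hi):
    return (x - lo) / (hi - lo)

def unrescale(y, lo, hi):
    return y * (hi - lo) + lo

data = np.array([3.0, 7.5, 12.0, 4.2])
lo, hi = fit_scaler(data)
scaled = rescale(data, lo, hi)
# round-tripping recovers the unscaled data
assert np.allclose(unrescale(scaled, lo, hi), data)
```

Note that rescaling shrinks the differences between nearby data points by the
factor (hi - lo), which is where information about very small but significant
differences can get lost in finite-precision training.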

If your 1-step predictions are good, you can use them to bootstrap
up to longer-term ones. The simplest way is to feed the predicted
output back into the network input, but better results can probably be
obtained as follows: train a 1-step predictor and a 2-step predictor;
configure the 1-step predictor to produce 2-step predictions by applying
it twice; then combine the two 2-step predictors to produce an
averaged/weighted 2-step prediction; iterate on this 2-step predictor to
produce longer-term predictions. The method can be repeated for 4-step
prediction, using a direct 4-step predictor together with the
twice-iterated configuration of the 2-step predictor described above.
I'm sure many people must have used
similar methods (I have), but I refer you to an excellent paper by Tim
Sauer:

T. Sauer, "Time Series Prediction by Using Delay Coordinate Embedding",
in TIME SERIES PREDICTION, A.S. Weigend & N.A. Gershenfeld (eds.),
Addison-Wesley, 1994.

He mentions the method in the context of non-neural time series prediction,
but the applicability to neural net predictors is obvious.
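The scheme above can be sketched as follows. This is only an illustration:
predict1 and predict2 stand in for a trained 1-step and a trained direct
2-step network (here they are toy linear maps on a delay window), and the
window length and averaging weight are arbitrary choices.

```python
WINDOW = 3  # delay-coordinate window length (assumed)

def predict1(w):   # stand-in for a trained 1-step predictor
    return 0.5 * w[-1] + 0.3 * w[-2] + 0.2 * w[-3]

def predict2(w):   # stand-in for a trained direct 2-step predictor
    return 0.45 * w[-1] + 0.35 * w[-2] + 0.2 * w[-3]

def iterated_2step(w):
    """2-step prediction by applying the 1-step predictor twice,
    feeding its own output back into the delay window."""
    y1 = predict1(w)
    return predict1(w[1:] + [y1])

def combined_2step(w, alpha=0.5):
    """Weighted average of the direct and iterated 2-step predictions."""
    return alpha * predict2(w) + (1 - alpha) * iterated_2step(w)

def long_term(series, horizon):
    """Iterate the combined 2-step predictor for longer-term forecasts
    (horizon counted in 2-step increments)."""
    w = list(series)
    preds = []
    for _ in range(horizon):
        y = combined_2step(w[-WINDOW:])
        preds.append(y)
        # advance the window two steps; the intermediate sample is
        # filled in with the 1-step prediction (one simple choice)
        mid = predict1(w[-WINDOW:])
        w += [mid, y]
    return preds

print(long_term([1.0, 1.2, 0.9], horizon=3))
```

With trained networks in place of the toy predictors, the same structure
extends to 4-step prediction by combining a direct 4-step net with the
twice-iterated 2-step predictor.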

Ali Minai



More information about the Connectionists mailing list