Paper available: Function Approximation with Neural Networks and Local Methods

Steve Lawrence lawrence at s4.elec.uq.edu.au
Wed Jan 31 23:45:34 EST 1996


The following paper presents an overview of global MLP approximation
and local approximation. MLPs are known to respond poorly to isolated
data points, and we demonstrate that examining histograms of k-NN
density estimates of the data can help determine the best method a
priori.
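For readers unfamiliar with the estimator, the classical $k$-NN
density estimate at a point $x$ is (the exact form used in the paper
may differ; this is the standard textbook definition)

    $\hat{p}(x) = \frac{k}{n \, V_d \, r_k(x)^d}$

where $n$ is the number of samples, $d$ the input dimension,
$r_k(x)$ the distance from $x$ to its $k$-th nearest neighbour, and
$V_d$ the volume of the unit ball in $\mathcal{R}^d$. A histogram of
$\hat{p}$ over the training points summarises how unevenly the data
fill the input space.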


http://www.elec.uq.edu.au/~lawrence		- Australia
http://www.neci.nj.nec.com/homepages/lawrence	- USA

We welcome your comments.


  Function Approximation with Neural Networks and Local Methods: 
	        Bias, Variance and Smoothness

	 Steve Lawrence, Ah Chung Tsoi, Andrew Back

	     Electrical and Computer Engineering
     University of Queensland, St. Lucia 4072, Australia

 		         ABSTRACT

We review the use of global and local methods for estimating a
function mapping $\mathcal{R}^m \rightarrow \mathcal{R}^n$ from
samples of the function containing
noise. The relationship between the methods is examined and an
empirical comparison is performed using the multi-layer perceptron
(MLP) global neural network model, the single nearest-neighbour model,
a linear local approximation (LA) model, and the following commonly
used datasets: the Mackey-Glass chaotic time series, the Sunspot time
series, British English Vowel data, TIMIT speech phonemes, building
energy prediction data, and the sonar dataset. We find that the simple
local approximation models often outperform the MLP. No simple
criterion, such as classification versus prediction, training set
size, or input dimensionality, predicts whether the MLP or the local
approximation method will be superior. However, we find
that if we consider histograms of the $k$-NN density estimates for the
training datasets, then we can choose the best-performing method {\em a
priori} by selecting local approximation when the spread of the
density histogram is large and choosing the MLP otherwise. This result
correlates with the hypothesis that the global MLP model is less
appropriate when the characteristics of the function to be
approximated vary throughout the input space. We discuss the
results, the smoothness assumption often made in function
approximation, and the bias/variance dilemma.
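As a concrete illustration, the sketch below implements the selection
heuristic described in the abstract: estimate the density at each
training point with a k-NN estimate, histogram the estimates, and
prefer local approximation when the spread is large. The function
names, the use of the coefficient of variation as the spread measure,
and the threshold are illustrative assumptions, not the exact
quantities used in the paper.

import numpy as np
from math import gamma, pi

def knn_density_estimates(X, k=5):
    # Classical k-NN density estimate at each training point:
    #   p_hat(x) = k / (n * V_d * r_k(x)^d)
    # with r_k(x) the distance from x to its k-th nearest other point.
    n, d = X.shape
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)          # exclude the point itself
    r_k = np.sort(dists, axis=1)[:, k - 1]   # k-th nearest neighbour
    v_d = pi ** (d / 2) / gamma(d / 2 + 1)   # unit-ball volume in R^d
    return k / (n * v_d * r_k ** d)

def choose_method(X, k=5, spread_threshold=1.0):
    # Coefficient of variation of the density estimates stands in for
    # "spread of the density histogram"; the threshold is a placeholder.
    p = knn_density_estimates(X, k)
    spread = p.std() / p.mean()
    return "local approximation" if spread > spread_threshold else "MLP"

For example, choose_method(training_inputs) returns the model the
heuristic favours for a given training set.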

