Four Papers Available
nelsonde%avlab.dnet@aaunix.aa.wpafb.af.mil
Mon Nov 2 15:06:16 EST 1992
I N T E R O F F I C E M E M O R A N D U M
Date: 02-Nov-1992 02:56pm EST
From: DALE E. NELSON
NELSONDE
Dept: AAAT-1
Tel No: 57646
TO: Remote Addressee ( _AAUNIX::"CONNECTIONISTS@CS.CMU.EDU" )
Subject: Four Papers Available
********* DO NOT POST TO OTHER NETS *************
Prediction of Chaotic Time Series Using Cascade Correlation:
Effects of Number of Inputs and Training Set Size
Dale E. Nelson
D. David Ensley
Maj Steven K. Rogers, PhD
ABSTRACT
Most neural networks have been used for problems of
classification. We have undertaken a study using neural networks
to predict continuous-valued functions that are aperiodic or
chaotic. In addition, we are considering a relatively new class
of neural networks, ontogenic neural networks. Ontogenic neural
networks are networks which generate their own topology during
training. Cascade Correlation [2] is one such network. In this
study we used the Cascade Correlation neural network to answer
two questions regarding prediction. First, how does the number of
inputs affect prediction accuracy? Second, how does the number of
training exemplars affect prediction accuracy? For these
experiments, the Mackey-Glass equation was used with a Tau value
of 17, which yields a correlation dimension of 2.1. Takens'
theorem [7] states that for this data set the number of inputs
needed to obtain a smooth mapping should be 3 to 5, which we were
able to verify experimentally. Experiments were run varying the number of
training exemplars from 50 to 450. The results showed that there
is an overall trend toward lower predictive RMS error with a
greater number of exemplars. However, good results were also
obtained with only 50 exemplars, which we are unable to explain at
this time. In addition to these results, we discovered that the
way in which predictive accuracy is generally represented, a
graph of Mackey-Glass with the network output superimposed, can
lead to erroneous conclusions!
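As a rough illustration of the setup described above, the Python sketch
below generates a Mackey-Glass series with Tau = 17 and builds
delay-embedded training exemplars. The Euler integration and the
constants a = 0.2, b = 0.1 with a tenth-power nonlinearity are standard
literature values assumed here, not details taken from the paper.

    import numpy as np

    def mackey_glass(n_points, tau=17, a=0.2, b=0.1, dt=1.0, x0=1.2):
        # Integrate dx/dt = a*x(t-tau) / (1 + x(t-tau)**10) - b*x(t) with Euler steps.
        delay = int(tau / dt)
        x = np.full(n_points + delay, x0)
        for t in range(delay, n_points + delay - 1):
            x_tau = x[t - delay]
            x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau ** 10) - b * x[t])
        return x[delay:]

    def make_exemplars(series, n_inputs, horizon=1):
        # Pair each window of n_inputs past values with the value `horizon` steps ahead.
        X, y = [], []
        for i in range(len(series) - n_inputs - horizon + 1):
            X.append(series[i:i + n_inputs])
            y.append(series[i + n_inputs + horizon - 1])
        return np.array(X), np.array(y)

    series = mackey_glass(1000)
    X, y = make_exemplars(series, n_inputs=4)   # 3 to 5 inputs per Takens' theorem
    X_train, y_train = X[:450], y[:450]         # e.g. 50 to 450 training exemplars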
This paper is NOT available from Neuroprose. For paper copies
send E-Mail with your mailing address to:
nelsonde%avlab.dnet%aa.wpafb.af.mil
DO NOT REPLY TO ENTIRE NETWORK...DO NOT USE REPLY MODE!
********* DO NOT POST TO OTHER NETS *************
A Taxonomy of Neural Network Optimality
Dale E. Nelson
Maj Steven K. Rogers, PhD
ABSTRACT
One of the long-standing problems with neural networks is how to
decide on the correct topology for a given application. For many
years the accepted approach was to use heuristics to "get close",
then experiment to find the best topology. In recent years
methodologies like the Abductory Inference Mechanism (AIM) from
AbTech Corporation and Cascade Correlation from Carnegie Mellon
University have emerged. These ontogenic (topology synthesizing)
neural networks develop their topology by deciding when and what
kind of nodes to add to the network during the training phase.
Other methodologies examine the weights and try to "improve" the
network by pruning some of the weights. This paper discusses the criteria
which can be used to decide when one network topology is better
than another. The taxonomy presented in this paper can be used
to decide on methods for comparison of different neural network
paradigms. Since the criteria for determining what is an optimum
network are highly application-specific, no attempt is made to
propose a single correct criterion. This taxonomy is a necessary
step toward achieving robust ontogenic neural networks.
This paper is NOT available from Neuroprose. For paper copies
send E-Mail with your mailing address to:
nelsonde%avlab.dnet%aa.wpafb.af.mil
DO NOT REPLY TO ENTIRE NETWORK...DO NOT USE REPLY MODE!
********* DO NOT POST TO OTHER NETS *************
APPLYING CASCADE CORRELATION TO THE EXTRAPOLATION OF CHAOTIC TIME
SERIES
David Ensley
Dale E. Nelson
ABSTRACT
Attempting to find near-optimal architectures, ontogenic neural
networks develop their own architectures as they train. As part
of a project entitled "Ontogenic Neural Networks for the
Prediction of Chaotic Time Series," this paper presents findings
of a ten-week research period on using the Cascade Correlation
ontogenic neural network to extrapolate (predict) a chaotic time
series generated from the Mackey-Glass equation. Measures of
extrapolation accuracy that are truer and more informative than the
currently popular ones are presented. The effects of some network
parameters on extrapolation accuracy were investigated.
Sinusoidal activation functions turned out to be best for our
data set. The best range for sigmoidal activation functions was
[-1, +1]. One experiment demonstrates that extrapolation
accuracy can be maximized by selecting the proper number of
training exemplars. Though surprisingly good extrapolations have
been obtained, there remain pitfalls. These pitfalls are
discussed along with possible methods for avoiding them.
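To make the distinction behind these accuracy measures concrete, the
Python sketch below contrasts single-step prediction, where the model
always sees the true history, with iterated extrapolation, where the
model is fed back its own outputs, and scores a run with RMS error.
The predict callable is a placeholder for any trained one-step model,
not the paper's Cascade Correlation network.

    import numpy as np

    def iterated_extrapolation(predict, seed_window, n_steps):
        # Feed the model its own outputs for n_steps beyond the seed window.
        window = list(seed_window)
        outputs = []
        for _ in range(n_steps):
            y_hat = float(predict(np.array(window)))
            outputs.append(y_hat)
            window = window[1:] + [y_hat]   # slide the window over the prediction
        return np.array(outputs)

    def single_step_predictions(predict, series, n_inputs):
        # The model always sees the true history; only one step is extrapolated.
        return np.array([float(predict(series[i:i + n_inputs]))
                         for i in range(len(series) - n_inputs)])

    def rms_error(predicted, actual):
        return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(actual)) ** 2)))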
This paper is NOT available from Neuroprose. For paper copies
send E-Mail with your mailing address to:
nelsonde%avlab.dnet%aa.wpafb.af.mil
DO NOT REPLY TO ENTIRE NETWORK...DO NOT USE REPLY MODE!
********* DO NOT POST TO OTHER NETS *************
APPLYING THE ABDUCTORY INDUCTION MECHANISM (AIM) TO THE
EXTRAPOLATION OF CHAOTIC TIME SERIES
Dennis S. Buck
Dale E. Nelson
ABSTRACT
This paper presents research done as part of a large effort to
develop ontogenic (topology synthesizing) neural networks. One
commercially available product, considered an ontogenic neural
network, is the Abductory Induction Mechanism (AIM) program from
AbTech Corporation of Charlottesville, Virginia. AIM creates a
polynomial neural network of the third order during training.
The methodology discards any inputs it finds to have low
relevance to predicting the training output. The depth and
complexity of the network are controlled by a user-set Complexity
Penalty Multiplier (CPM). This paper presents results of using
AIM to predict the output of the Mackey-Glass equation.
Comparisons are made based on the RMS error for an iterated
prediction of 100 time steps beyond the training set. The data
set was developed using a Tau value of 17, which yields a
correlation dimension (an approximation of the fractal dimension)
of 2.1. We explored the effect of different CPM values and found
that a CPM value of 4.8 gives the best predictive results with
the least computational complexity. We also conducted
experiments using 2 to 10 inputs and 1 to 3 outputs. We found
that AIM chose to use only 2 or 3 inputs, due to its ability to
eliminate unnecessary inputs. This leads to the conclusion that
Takens' theorem cannot be experimentally verified by this
methodology! Our experiments showed that using 2 or 3 outputs,
thus forcing the network to learn the first and second derivatives
of the equation, produced the best predictive results. We also
discovered that the final network produced a predictive RMS error
lower than that of the Cascade Correlation method, with far less
computation time.
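One way to read the multi-output experiments above is that each extra
output supplies a finite-difference estimate of a derivative. The Python
sketch below pairs each input window with three targets, the next value,
its first difference, and its second difference, as one plausible
encoding; the paper's exact output encoding is not given here, so this
is an assumption.

    import numpy as np

    def multi_output_exemplars(series, n_inputs=3):
        # Targets: next value, ~first derivative, ~second derivative
        # (estimated by finite differences of the series).
        X, Y = [], []
        for i in range(len(series) - n_inputs):
            window = series[i:i + n_inputs]
            x0 = series[i + n_inputs]        # next value after the window
            x1 = series[i + n_inputs - 1]
            x2 = series[i + n_inputs - 2]
            X.append(window)
            Y.append([x0, x0 - x1, x0 - 2.0 * x1 + x2])
        return np.array(X), np.array(Y)

Iterating the first output 100 time steps past the end of the training
set and scoring the run with RMS error matches the comparison protocol
described above.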
This paper is NOT available from Neuroprose. For paper copies
send E-Mail with your mailing address to:
nelsonde%avlab.dnet%aa.wpafb.af.mil
DO NOT REPLY TO ENTIRE NETWORK...DO NOT USE REPLY MODE!
********* DO NOT POST TO OTHER NETS *************