Modelling of Nonlinear Systems

Kenji Doya doya at crayfish.UCSD.EDU
Thu Mar 18 19:45:28 EST 1993


As Dr. Chen says, the fact that there exists a recurrent network that
models any given dynamical system [1] does not mean that such a model
can readily be found by learning, e.g. by output-error gradient descent.

This may sound similar to the case of learning parity in feed-forward
networks, but there are additional problems that arise from the
nonlinear dynamics of the network, which I tried to discuss in
another paper I posted in Neuroprose (Bifurcations of ...).

Takens' result shows that the dynamics on an n-dimensional attractor can be
reconstructed from its scalar output sequence x(t) as (for example)
	x(t) = F(x(t-1), ..., x(t-m))
for m > 2n.

Therefore, a conservative connectionist approach to modeling nonlinear
dynamics is to prepare a long enough tapped delay line in the input
layer and then to train a feed-forward network to approximate the
function F. But this may not be the best approach, because the same
system can look very simple or very complex depending on how we choose
the state vectors. Whether a recurrent network can find an efficient
representation of the state space by learning is still an open problem.
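(A purely illustrative sketch of this tapped-delay-line approach, not taken
from any paper cited here: the scalar series (a logistic map), the embedding
length, the network size, and the learning rate below are all arbitrary
choices of mine.)

# Illustrative sketch only: delay-embed a scalar series and fit a small
# feed-forward net to the map x(t) = F(x(t-1), ..., x(t-m)).
import numpy as np

rng = np.random.default_rng(0)

# Scalar observation sequence: logistic map x <- 4 x (1 - x).
T = 2000
x = np.empty(T)
x[0] = 0.3
for t in range(1, T):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])

# Tapped delay line: input row i is (x(t-1), ..., x(t-m)), target is x(t).
m = 3
X = np.stack([x[m - 1 - k : T - 1 - k] for k in range(m)], axis=1)
y = x[m:]

# One-hidden-layer network trained by output-error gradient descent.
n_hidden = 20
W1 = rng.normal(0.0, 0.5, (m, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    pred = (h @ W2 + b2).ravel()    # predicted x(t)
    err = pred - y
    g_out = err[:, None] / len(y)   # gradient of mean squared error / 2
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * (h.T @ g_out)
    b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h)
    b1 -= lr * g_h.sum(axis=0)

print("one-step mean squared error:", float(np.mean(err ** 2)))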

Another problem is the stability of the reconstructed trajectories.
In many cases, the training set consists of specific trajectories like
fixed points and limit cycles and no information is explicitly given
about how the nearby trajectories should behave [2]. It has been shown
empirically that fixed points and "simple" limit cycles
(e.g. sinusoids) tend to be stable, presumably by virtue of squashing
functions. However, that is not true for complex trajectories. Since
we know that the target trajectories are sampled from attractors
(otherwise we could not observe them), we should somehow impose this
constraint when training a network.
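(Again only my own illustration, not a proposal from this post or from [2]:
one crude way to bias the learned map toward making the target trajectory
attracting is to also train on slightly perturbed delay vectors paired with
the clean targets, so that small deviations are mapped back toward the data.
The function name and noise level are arbitrary; X and y are the embedded
inputs and targets from the sketch above.)

import numpy as np

rng = np.random.default_rng(1)

def augment_with_perturbations(X, y, noise_std=0.01, copies=4):
    """Pair noisy copies of each delay vector with the clean target x(t)."""
    X_aug = [X]
    y_aug = [y]
    for _ in range(copies):
        X_aug.append(X + rng.normal(0.0, noise_std, X.shape))
        y_aug.append(y)  # same clean target: pull perturbed states back
    return np.concatenate(X_aug), np.concatenate(y_aug)

# X_train, y_train = augment_with_perturbations(X, y)  # then train as above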

About on-line/off-line training:

What we want the network to do is to model a global, nonlinear vector
field. On-line learning is not attractive (to me) if the network
quickly learns a local, (almost) linear vector field and forgets about
the rest of the state space.

[1] Dr. Sontag has sent me a paper:
H.T. Siegelmann and E.D. Sontag: Some recent results on computing with
"neural nets". IEEE Conf. on Decision and Control, Tucson, Dec. 1992.
It includes a more formal proof of the universality of recurrent networks.

[2] In a recent Neuroprose paper, Tsung and Cottrell explicitly
taught the network where the trajectories around a limit cycle should
go.

Kenji Doya     <doya at crayfish.ucsd.edu>
Department of Biology, University of California, San Diego
La Jolla, CA 92093-0322, USA
Phone: (619)534-3954/5548   Fax: (619)534-0301


