Computational capabilities of recurrent NARX neural networks

Lee Giles giles at research.nj.nec.com
Fri Mar 17 16:26:10 EST 1995




The following Technical Report is available via the University of Maryland 
Department of Computer Science and the NEC Research Institute archives:

_____________________________________________________________________________



      Computational capabilities of recurrent NARX neural networks

  UNIVERSITY OF MARYLAND TECHNICAL REPORT UMIACS-TR-95-12 AND CS-TR-3408

           H. T. Siegelmann[1], B. G. Horne[2], C. L. Giles[2,3]  
 [1] Dept. of Information Systems Engineering, Technion, Haifa 32000, Israel
    [2] NEC Research Institute, 4 Independence Way, Princeton, NJ  08540
         [3] UMIACS, University of Maryland, College Park, MD 20742
          
                        iehava at ie.technion.ac.il
                   {horne,giles}@research.nj.nec.com
                
Recently, fully connected recurrent neural networks have been proven
to be computationally rich --- at least as powerful as Turing
machines.  This work focuses on another class of networks, popular in
control applications, which has been found to be very effective at
learning a variety of problems.  These networks are based upon
Nonlinear AutoRegressive models with eXogenous Inputs (NARX models),
and are therefore called {\em NARX networks}.  Unlike other
recurrent networks, NARX networks have limited feedback, which comes
only from the output neuron rather than from hidden states.  They are
formalized by
\[
y(t) = \Psi \left( u(t-n_u), \ldots, u(t-1), u(t),
                   y(t-n_y), \ldots, y(t-1) \right),
\]
where $u(t)$ and $y(t)$ represent the input and output of the network
at time $t$, $n_u$ and $n_y$ are the input and output orders, and the
function $\Psi$ is the mapping performed by a Multilayer Perceptron.
We constructively prove that NARX networks with a finite number of
parameters are computationally as strong as fully connected recurrent
networks, and thus as Turing machines.  We conclude that, in theory,
one can use NARX models rather than conventional recurrent networks
without any computational loss, even though their feedback is limited.
Furthermore, these results raise the question of how much feedback or
recurrence is necessary for a network to be Turing equivalent, and
which restrictions on feedback limit computational power.
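
As an illustration only (not from the report), the recurrence above is
straightforward to simulate.  The Python sketch below assumes $\Psi$ is
a one-hidden-layer perceptron with tanh units; the random weights, the
orders n_u and n_y, and the input sequence are arbitrary placeholders.

    import numpy as np

    def narx_step(u_taps, y_taps, W1, b1, W2, b2):
        # One step of y(t) = Psi(u(t-n_u),...,u(t), y(t-n_y),...,y(t-1)):
        # the tapped delay lines are concatenated and fed through the MLP.
        x = np.concatenate([u_taps, y_taps])
        h = np.tanh(W1 @ x + b1)          # hidden layer
        return float(W2 @ h + b2)         # scalar output y(t)

    def run_narx(u, n_u=2, n_y=2, n_hidden=5, seed=0):
        rng = np.random.default_rng(seed)
        W1 = rng.normal(size=(n_hidden, n_u + 1 + n_y))  # input + output taps
        b1 = rng.normal(size=n_hidden)
        W2 = rng.normal(size=n_hidden)
        b2 = float(rng.normal())
        u_taps = np.zeros(n_u + 1)        # holds u(t-n_u), ..., u(t)
        y_taps = np.zeros(n_y)            # holds y(t-n_y), ..., y(t-1)
        outputs = []
        for u_t in u:
            u_taps = np.roll(u_taps, -1)
            u_taps[-1] = u_t
            y_t = narx_step(u_taps, y_taps, W1, b1, W2, b2)
            y_taps = np.roll(y_taps, -1)  # feedback comes only from the output
            y_taps[-1] = y_t
            outputs.append(y_t)
        return outputs

    print(run_narx([0.0, 1.0, 0.5, -0.3]))

Note how the only state carried between time steps is the output delay
line y_taps, which is the limited-feedback property discussed above.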

-------------------------------------------------------------------------

http://www.neci.nj.nec.com/homepages/giles.html
http://www.cs.umd.edu/TRs/TR-no-abs.html

or

ftp://ftp.nj.nec.com/pub/giles/papers/NARX.capabilities.ps.Z

--------------------------------------------------------------------------


--                                 
C. Lee Giles / NEC Research Institute / 4 Independence Way
Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482
URL  http://www.neci.nj.nec.com/homepages/giles.html
==




