Paper on Dynamical Systems, RNNs and Computation Available

Mike Casey mcasey at volen.brandeis.edu
Thu Mar 7 07:50:02 EST 1996



Dear connectionists,


The following paper concerns the connection between computational 
and dynamical descriptions of systems, with particular application 
to the analysis of recurrent neural networks.  It has been accepted 
for publication in Neural Computation and will appear in Vol. 8, 
No. 6 later this year.

Comments are welcome.

----------------------------------------------------------------------

The Dynamics of Discrete-Time Computation, With Application to 
Recurrent Neural Networks and Finite State Machine Extraction  [77 pages]

                            Mike Casey
              Volen Center for Complex Systems Studies
                        Brandeis University
                        Waltham, MA  02254


                To appear in Neural Computation 8:6. 

ABSTRACT:

Recurrent neural networks (RNNs) can learn to perform finite
state computations.  It is shown that an RNN performing a finite 
state computation must organize its state space to mimic the 
states of the minimal deterministic finite automaton (DFA) 
that can perform that computation, and a precise description of 
the attractor structure of such systems is given.  This 
knowledge effectively predicts activation space dynamics, 
which allows one to understand RNN computation dynamics in 
spite of the complexity of the activation dynamics.  As a 
corollary of our main theorem, we prove that the only formal 
languages which RNNs can robustly recognize are those 
recognizable by DFAs (i.e., the regular languages).  By 
elucidating the necessary and sufficient dynamical properties 
which an RNN must possess in order to perform a DFA computation, 
we provide a framework for discussing the relationship between 
symbolic (algorithmic, finite state) and subsymbolic (dynamic, 
continuous phase space) aspects of computation in physical systems.
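
[Editorial sketch, not from the paper:]  As a toy illustration of
the theorem's content, the following Python fragment hand-wires a
one-unit sigmoid RNN that recognizes the regular language "binary
strings containing an odd number of 1s".  The weights, the parity
example, and the 0.5 decision threshold are all invented for this
sketch.  The point is only that the reachable hidden states contract
toward two attracting regions, one per state of the minimal
two-state parity DFA:

import math

def step(h, x):
    # One recurrent update.  h is the hidden activation in (0, 1)
    # and x is the current input bit.  The expression h + x - 2*h*x
    # is a continuous XOR: input 1 flips the state, input 0
    # preserves it; the steep sigmoid pushes h back toward 0 or 1.
    z = 10.0 * (h + x - 2.0 * h * x - 0.5)
    return 1.0 / (1.0 + math.exp(-z))

def run(bits, h0=0.05):
    h = h0
    for x in bits:
        h = step(h, x)
    return h

# The final state lands near 0 (even parity) or near 1 (odd parity):
for s in ([1], [1, 1], [1, 0, 1], [0, 0], [1, 1, 1]):
    h = run(s)
    print(s, round(h, 3), "odd" if h > 0.5 else "even")

Because the two regions are attracting, small perturbations of the
continuous state do not change the computation, which is the
robustness property the corollary turns on.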

The theory also provides a framework for understanding finite state 
machine extraction techniques and can be used to improve training 
methods for RNNs performing DFA computations.  This is an example 
of a successful top-down approach to understanding a general class 
of complex systems that have not been explicitly designed, e.g. 
systems that have evolved or learned their internal structure.
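
[Editorial sketch, continued:]  A rough sketch of the extraction
idea, using a generic quantization scheme rather than necessarily
the paper's construction: partition the continuous state space into
discrete regions and tabulate the transitions the dynamics induce
between them.  Applied to the toy parity network above, this reads
the minimal two-state DFA off the continuous dynamics:

import math

def step(h, x):
    z = 10.0 * (h + x - 2.0 * h * x - 0.5)
    return 1.0 / (1.0 + math.exp(-z))

def region(h):
    # Quantize the continuous state: 0 below threshold, 1 above.
    return int(h >= 0.5)

# Explore the reachable states, recording the induced transitions.
# Gating the frontier on regions (one representative state per
# region) is what makes the exploration terminate.
transitions = {}
frontier = [0.05]                  # the initial hidden state
visited = set()
while frontier:
    h = frontier.pop()
    q = region(h)
    for x in (0, 1):
        h2 = step(h, x)
        transitions[(q, x)] = region(h2)
        if region(h2) not in visited:
            visited.add(region(h2))
            frontier.append(h2)

print(transitions)
# {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0} -- the transition
# table of the minimal two-state parity DFA.

In practice, extraction methods cluster observed hidden states in
many dimensions rather than thresholding a single unit, but the
quantize-and-tabulate principle is the same.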


----------------------------------------------------------------------

This paper is available via the WWW at
http://eliza.cc.brandeis.edu/people/mcasey/papers.html

or 

ftp://eliza.cc.brandeis.edu/pub/mcasey/mcasey_nc.ps
ftp://eliza.cc.brandeis.edu/pub/mcasey/mcasey_nc.ps.gz
ftp://eliza.cc.brandeis.edu/pub/mcasey/mcasey_nc.ps.Z


FTP INSTRUCTIONS
unix% ftp eliza.cc.brandeis.edu (or 129.64.55.200)
Name: anonymous
Password: (use your e-mail address)
ftp> cd /pub/mcasey/
ftp> bin
ftp> get mcasey_nc.ps.Z (or mcasey_nc.ps.gz or mcasey_nc.ps)
ftp> bye
unix% uncompress mcasey_nc.ps.Z (or gzip -d mcasey_nc.ps.gz)


Please send comments to 
Mike Casey
Volen Center for Complex Systems Studies
Brandeis University
Waltham, MA  02254
email:  mcasey at volen.brandeis.edu
http://eliza.cc.brandeis.edu/people/mcasey


