preprints available

Christian Omlin omlin at waterbug.cs.sun.ac.za
Mon Nov 16 08:29:55 EST 1998




The technical reports below are available from my homepage at


       http://www.cs.sun.ac.za/people/staff/omlin.html.


Comments are welcome. My apologies if you receive multiple copies of
this announcement.

Best regards,

Christian


Christian W. Omlin                 e-mail: omlin at cs.sun.ac.za
Department of Computer Science     phone (direct): +27-21-808-4932
University of Stellenbosch         phone (secretary): +27-21-808-4232
Private Bag X1                     fax: +27-21-808-4416
Stellenbosch 7602                  http://www.cs.sun.ac.za/people/staff/omlin.html
SOUTH AFRICA                       http://www.neci.nj.nec.com/homepages/omlin 



------------------------------------------------------------------------------------


          Recurrent Neural Networks Learn Deterministic 
          Representations of Fuzzy Finite-State Automata


                C.W. Omlin$^a$, C.L. Giles$^{b,c}$

               $^a~$Department of Computer Science
                   University of Stellenbosch
                7600 Stellenbosch, South Africa

                  $^b~$NEC Research Institute
                      4 Independence Way
                    Princeton, NJ 08540 USA

                        $^c~$UMIACS
                  University of Maryland
                College Park, MD 20742 USA


                         ABSTRACT


The paradigm of deterministic finite-state automata (DFAs) and their
corresponding regular languages has proven very useful for addressing
fundamental issues in recurrent neural networks. The issues that have
been addressed include knowledge representation, extraction, and
refinement, as well as the development of advanced learning
algorithms. Recurrent neural networks are also a very promising tool
for modeling discrete dynamical systems through learning,
particularly when partial prior knowledge is available. The drawback
of the DFA paradigm is that it is inappropriate for modeling vague or
uncertain dynamics; however, many real-world applications deal with
vague or uncertain information. One way to model vague information in
a dynamical system is to allow for vague state transitions, i.e. the
system may be in several states at the same time, each with a varying
degree of certainty; fuzzy finite-state automata (FFAs) are a formal
equivalent of such systems. It is therefore of interest to study how
uncertainty in the form of FFAs can be modeled by deterministic
recurrent neural networks. We have previously proven that
second-order recurrent neural networks are able to represent FFAs,
i.e. recurrent networks can be constructed that assign fuzzy
memberships to input strings with arbitrary accuracy. In such
networks, the classification performance is independent of the string
length. In this paper, we are concerned with recurrent neural
networks that have been trained to behave like FFAs. In particular,
we are interested in the internal representation of fuzzy states and
state transitions and in the extraction of knowledge in symbolic
form.
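
For readers unfamiliar with the second-order architecture mentioned
above, the sketch below shows how such a network can assign a
membership score to an input string. It is a minimal illustration,
not the construction from the paper: the network size, the random
weights, and the choice of neuron 0 as the membership output are all
assumptions made for the example.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SecondOrderRNN:
    # State update: S_i(t+1) = sigmoid(sum_{j,k} W[i,j,k]*S_j(t)*I_k(t) + b_i);
    # the weights gate products of the current state and the current input.
    def __init__(self, n_states, n_symbols, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.5, size=(n_states, n_states, n_symbols))
        self.b = np.zeros(n_states)
        self.S0 = np.eye(n_states)[0]      # start fully in the first state neuron

    def membership(self, symbols):
        # Run a string (a sequence of symbol indices) and read the activation
        # of neuron 0 as a fuzzy membership score in [0, 1].
        S = self.S0
        n_symbols = self.W.shape[2]
        for k in symbols:
            I = np.eye(n_symbols)[k]       # one-hot encoding of the input symbol
            S = sigmoid(np.einsum('ijk,j,k->i', self.W, S, I) + self.b)
        return S[0]

net = SecondOrderRNN(n_states=4, n_symbols=2)
print(net.membership([0, 1, 1, 0]))        # membership assigned to string "0110"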


------------------------------------------------------------------------------------


            Equivalence in Knowledge Representation:           
            Automata, Recurrent Neural Networks, and
                      Dynamical Fuzzy Systems   


     C.W. Omlin$^a$, C.L. Giles$^{b,c}$, K.K. Thornber $^b$
       
               $^a~$Department of Computer Science 
                   University of Stellenbosch
                7600 Stellenbosch, South Africa

                  $^b~$NEC Research Institute
                      4 Independence Way
                    Princeton, NJ 08540 USA

                        $^c~$UMIACS 
                  University of Maryland
                College Park, MD 20742 USA


                         ABSTRACT


Neuro-fuzzy systems - the combination of artificial neural networks
with fuzzy logic - are becoming increasingly popular. However,
neuro-fuzzy systems need to be extended for applications which
require context (e.g., speech, handwriting, control). Some of these
applications can be modeled in the form of finite-state automata.
Previously, it was proved that deterministic finite-state automata
(DFAs) can be stably synthesized or mapped into second-order
recurrent neural networks with sigmoidal discriminant functions and a
sparse interconnection topology by programming the networks' weights
to $+H$ or $-H$. Based on those results, this paper proposes a
synthesis method for mapping fuzzy finite-state automata (FFAs) into
recurrent neural networks which is suitable for implementation in
VLSI, i.e. the encoding of FFAs is a generalization of the encoding
of DFAs. The synthesis method requires FFAs to undergo a
transformation prior to being mapped into recurrent networks. The
networks' neurons have a slightly enriched functionality in order to
accommodate a fuzzy representation of FFA states, i.e. any state can
be occupied with a fuzzy membership that takes on values in the range
$[0, 1]$, and several fuzzy states can be occupied at any given time.
(This is in contrast to stochastic finite-state automata, where there
exists no ambiguity about an automaton's current state: the automaton
can only be in exactly one state at any given time, and the choice of
a successor state is determined by some probability distribution.)
The enriched neuron functionality allows fuzzy parameters of FFAs to
be directly represented as parameters of the neural network. In this
paper, we prove the stability of the fuzzy finite-state dynamics of
the constructed neural networks for finite values of the network
weight $H$ and give empirical validation of the proofs through
simulations.
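
The sketch below simulates the kind of fuzzy finite-state automaton
described above: transitions carry a membership weight in $[0, 1]$,
and several states can be occupied simultaneously with different
degrees. The max-min composition used here to propagate memberships
is a common convention and an assumption for this example, not
necessarily the exact formulation used in the paper.

from collections import defaultdict

class FFA:
    def __init__(self, n_states, start):
        self.n = n_states
        self.start = start
        self.delta = defaultdict(list)  # (state, symbol) -> [(next_state, theta)]

    def add(self, q, a, q_next, theta):
        # Transition from q on symbol a to q_next with membership weight theta.
        self.delta[(q, a)].append((q_next, theta))

    def run(self, string):
        mu = [0.0] * self.n
        mu[self.start] = 1.0            # start fully in the initial state
        for a in string:
            nxt = [0.0] * self.n
            for q in range(self.n):
                if mu[q] == 0.0:
                    continue
                for q2, theta in self.delta[(q, a)]:
                    # max-min composition of memberships along transitions
                    nxt[q2] = max(nxt[q2], min(mu[q], theta))
            mu = nxt
        return mu                       # fuzzy membership of each state

ffa = FFA(n_states=3, start=0)
ffa.add(0, 'a', 1, 0.8)
ffa.add(0, 'a', 2, 0.3)                 # vague transition: two successors on 'a'
ffa.add(1, 'b', 2, 0.6)
print(ffa.run('ab'))                    # state 2 ends up occupied with degree 0.6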


------------------------------------------------------------------------------------


                      Dynamic Adaptation of      
              Recurrent Neural Network Architectures
                   Guided By Symbolic Knowledge          


                C.W. Omlin$^a$, C.L. Giles$^{b,c}$

               $^a~$Department of Computer Science
                   University of Stellenbosch
                7600 Stellenbosch, South Africa

                  $^b~$NEC Research Institute
                      4 Independence Way
                    Princeton, NJ 08540 USA

                        $^c~$UMIACS
                  University of Maryland
                College Park, MD 20742 USA


                         ABSTRACT


The success and the time needed to train neural networks with a given
learning algorithm depend on the learning task, the initial
conditions, and the network architecture. The number of hidden units
in feedforward and recurrent neural networks, in particular, is an
important factor. We propose a novel method for dynamically adapting
the architecture of recurrent neural networks trained to behave like
deterministic finite-state automata (DFAs). It differs from other
constructive approaches in that our method relies on algorithms for
extracting and inserting symbolic knowledge in the form of DFAs. The
network architecture (number of neurons and weight configuration)
changes during training based on the symbolic information extracted
from undertrained networks. We successfully trained recurrent
networks to recognize strings of a regular language accepted by a
non-trivial, randomly generated deterministic finite-state automaton.
Our empirical results indicate that symbolically driven network
adaptation results in networks that generalize better than networks
trained without dynamic network adaptation or networks trained with
standard dynamic growing methods.
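
The learning task mentioned above, strings labeled by a randomly
generated DFA, can be reproduced in a few lines. The sketch below is
a plausible task generator for such experiments; the automaton size,
alphabet, and acceptance probability are assumptions for the example,
not the parameters used in the paper.

import random

def random_dfa(n_states, n_symbols, seed=0):
    # Random transition table: delta[q][a] is the successor of state q on
    # symbol a. State 0 is the start state; each state is accepting with
    # probability 1/2. These choices are illustrative assumptions.
    rng = random.Random(seed)
    delta = [[rng.randrange(n_states) for _ in range(n_symbols)]
             for _ in range(n_states)]
    accepting = {q for q in range(n_states) if rng.random() < 0.5}
    return delta, 0, accepting

def accepts(dfa, string):
    delta, q, accepting = dfa
    for a in string:                # follow the deterministic transitions
        q = delta[q][a]
    return q in accepting

dfa = random_dfa(n_states=8, n_symbols=2)
for s in ([0, 1, 1], [1, 0, 0, 1], [0, 0]):
    print(s, accepts(dfa, s))       # labeled examples for network training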





