Preprint Available - Knowledge Extraction

Christian Omlin omlinc at research.nj.nec.com
Tue Jun 13 14:47:26 EDT 1995


The following technical report is available from the archive
of the Computer Science Department, University of Maryland.

URL: http://www.cs.umd.edu:80/TR/UMCP-CSD:CS-TR-3465
FTP: ftp.cs.umd.edu:/pub/papers/papers/3465/3465.ps.Z

We welcome your comments.  

Christian



	     Extraction of Rules from Discrete-Time
                   Recurrent Neural Networks

    Revised Technical Report CS-TR-3465 and UMIACS-TR-95-54
        University of Maryland, College Park, MD 20742


                Christian W. Omlin and C. Lee Giles
                      NEC Research Institute
                        4 Independence Way
                     Princeton, N.J. 08540 USA
              E-mail: {omlinc,giles}@research.nj.nec.com



                            ABSTRACT


The extraction of symbolic knowledge from trained neural networks and
the direct encoding of (partial) knowledge into networks prior to
training are important issues: they allow the exchange of information
between symbolic and connectionist knowledge representations.
 
The focus of this paper is on the quality of the rules that are
extracted from recurrent neural networks. Discrete-time recurrent
neural networks can be trained to correctly classify strings of a
regular language. Rules defining the learned grammar can be extracted
from networks in the form of deterministic finite-state automata
(DFAs) by applying clustering algorithms in the output space of
recurrent state neurons. Our algorithm can extract, from the same
network, different finite-state automata that are all consistent with
the training set. We compare the generalization performance of these
different models with that of the trained network, and we introduce a
heuristic that lets us choose, among the consistent DFAs, the model
that best approximates the learned grammar.
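
As a rough illustration of the general idea (not the report's exact
algorithm), the sketch below quantizes recorded state-neuron
activations into q intervals per neuron, treats each distinct
quantized vector as a DFA state, and reads transitions off the
network's trajectories. All names, the data layout, and the
granularity q are illustrative assumptions.

    def quantize(state, q=3):
        # Map a vector of state-neuron activations in [0, 1] to a tuple
        # of interval indices; q controls the clustering granularity.
        return tuple(min(int(a * q), q - 1) for a in state)

    def extract_dfa(trajectories, q=3):
        # trajectories: list of (symbols, states, accepted) triples:
        #   symbols  -- the input string as a sequence of symbols,
        #   states   -- activation vectors of the recurrent state
        #               neurons, one for the start state plus one per
        #               input symbol,
        #   accepted -- the trained network's classification of the
        #               string.
        # Returns (start_state, transitions, accepting_states).
        transitions = {}          # (dfa_state, symbol) -> dfa_state
        accepting = set()
        start = None
        for symbols, states, accepted in trajectories:
            current = quantize(states[0], q)
            if start is None:
                start = current
            for symbol, state in zip(symbols, states[1:]):
                nxt = quantize(state, q)
                # Keep the first transition seen for each (state, symbol).
                transitions.setdefault((current, symbol), nxt)
                current = nxt
            if accepted:
                accepting.add(current)
        return start, transitions, accepting

Different values of q (or different clustering algorithms) generally
yield different automata, each of which may still be consistent with
the training set; selecting among these candidate models is the
question the heuristic in the report addresses.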

Keywords: Recurrent Neural Networks, Grammatical Inference,
	  Regular Languages, Deterministic Finite-State Automata,
          Rule Extraction, Generalization Performance, Model Selection,
          Occam's Razor.



