3 reports available

Charles Squires squires@cs.wisc.edu
Wed Oct 9 03:22:37 EDT 1991


           *** PLEASE DO NOT FORWARD TO OTHER LISTS ***

The following three working papers have been placed in the neuroprose
archive:

-Maclin, R. and Shavlik, J.W., Refining Algorithms with Knowledge-Based
 Neural Networks:  Improving the Chou-Fasman Algorithm for Protein Folding,
 Machine Learning Research Group Working Paper 91-2.

      Neuroprose file name:  maclin.fskbann.ps.Z

-Scott, G.M., Shavlik, J.W., and Ray, W.H., Refining PID Controllers
 using Neural Networks, Machine Learning Research Group Working Paper 91-3.

      Neuroprose file name:  scott.nnpid.ps.Z

-Towell, G.G. and Shavlik, J.W., The Extraction of Refined Rules from
 Knowledge-Based Neural Networks, Machine Learning Research Group Working
 Paper 91-4.

      Neuroprose file name:  towell.interpretation.ps.Z

The abstract of each paper and FTP instructions follow:

----------

    Refining Algorithms with Knowledge-Based Neural Networks:
     Improving the Chou-Fasman Algorithm for Protein Folding

                         Richard Maclin
                         Jude W. Shavlik

                     Computer Sciences Dept.
                University of Wisconsin - Madison
                    email: maclin@cs.wisc.edu


     We describe a method for using machine learning to refine
algorithms represented as generalized finite-state automata.  The
knowledge in an automaton is translated into a corresponding
artificial neural network, and then refined by applying
backpropagation to a set of examples.  Our technique for translating
an automaton into a network extends the KBANN algorithm, a system
that translates a set of propositional, non-recursive rules into a
corresponding neural network.  The topology and weights of the neural
network are set by KBANN so that the network represents the knowledge
in the rules.  We present the extended system, FSKBANN, which
augments the KBANN algorithm to handle finite-state automata.  We
employ FSKBANN to refine the Chou-Fasman algorithm, a method for
predicting how globular proteins fold.  The Chou-Fasman algorithm
cannot be elegantly formalized using non-recursive rules, but can be
concisely described as a finite-state automaton.  Empirical evidence
shows that the refined algorithm FSKBANN produces is statistically
significantly more accurate than both the original Chou-Fasman
algorithm and a neural network trained using the standard approach.
We also provide extensive statistics on the type of errors each of
the three approaches makes and discuss the need for better
definitions of solution quality for the protein-folding problem.
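For readers unfamiliar with the underlying KBANN idea, the rule-to-network
mapping the abstract refers to can be sketched as follows.  The function
names and the particular weight/bias scheme are illustrative assumptions,
not code from the paper:

```python
import math

def rule_to_unit(pos, neg, w=4.0):
    """Sketch of a KBANN-style translation: one propositional rule
    becomes one sigmoid unit.  Positive antecedents get weight +w,
    negated antecedents -w, and the bias is set so the unit fires
    only when every antecedent is satisfied."""
    weights = {a: w for a in pos}
    weights.update({a: -w for a in neg})
    bias = -(len(pos) - 0.5) * w
    return weights, bias

def activate(weights, bias, assignment):
    """Sigmoid activation of the unit on a 0/1 truth assignment."""
    net = bias + sum(wt * assignment.get(a, 0) for a, wt in weights.items())
    return 1.0 / (1.0 + math.exp(-net))

# Rule:  C :- A, not B.
weights, bias = rule_to_unit(pos=["A"], neg=["B"])
print(activate(weights, bias, {"A": 1, "B": 0}) > 0.5)  # rule satisfied
print(activate(weights, bias, {"A": 1, "B": 1}) > 0.5)  # rule violated
```

Because the resulting weights are ordinary network parameters, the encoded
rule can then be refined by backpropagation, which is the step FSKBANN
extends to state-transition knowledge.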

----------

              Refining PID Controllers using Neural Networks

                   Gary M. Scott (Chemical Engineering)
                   Jude W. Shavlik (Computer Sciences)
                   W. Harmon Ray (Chemical Engineering)
                     University of Wisconsin


     The KBANN (Knowledge-Based Artificial Neural Networks) approach
uses neural networks to refine knowledge that can be written in the
form of simple propositional rules.  We extend this idea further by
presenting the MANNCON (Multivariable Artificial Neural Network
Control) algorithm, by which the mathematical equations governing a
PID (Proportional-Integral-Derivative) controller determine the
topology and initial weights of a network, which is further trained
using backpropagation.  We apply this method to the task of
controlling the outflow and temperature of a water tank, producing
statistically significant gains in accuracy over both a standard
neural network approach and a non-learning PID controller.
Furthermore, using the PID knowledge to initialize the weights of the
network produces statistically less variation in test-set accuracy
when compared to networks initialized with small random numbers.
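The PID control law that serves as the prior knowledge here is standard;
a minimal discrete-time version is sketched below.  The class name, the
gain values, and the toy first-order plant in the usage loop are
illustrative assumptions, not taken from the paper:

```python
class PID:
    """Textbook discrete PID controller:
        u = Kc * (e + (1/tau_i) * integral(e) + tau_d * de/dt).
    Gains like Kc, tau_i, tau_d are the kind of prior knowledge a
    MANNCON-style network would be initialized from."""
    def __init__(self, kc, tau_i, tau_d, dt):
        self.kc, self.tau_i, self.tau_d, self.dt = kc, tau_i, tau_d, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kc * (err + self.integral / self.tau_i
                          + self.tau_d * deriv)

# Drive a toy first-order plant (dy/dt = u - y) toward a setpoint of 1.0.
pid = PID(kc=1.0, tau_i=2.0, tau_d=0.1, dt=0.1)
y = 0.0
for _ in range(1000):
    y += 0.1 * (pid.step(1.0, y) - y)
```

The integral term drives the steady-state error to zero; in a
MANNCON-style setup these fixed gains would become trainable weights.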

----------

     The Extraction of Refined Rules from Knowledge-Based Neural Networks

                         Geoffrey G. Towell
                           Jude W. Shavlik
            
                   Department of Computer Science
                      University of Wisconsin
                  E-mail Address: towell@cs.wisc.edu

Neural networks, despite their empirically proven abilities, have been little
used for the refinement of existing knowledge because this task requires a
three-step process. First, knowledge in some form must be inserted into a
neural network. Second, the network must be refined. Third, knowledge must be
extracted from the network. We have previously described a method for the
first step of this process. Standard neural learning techniques can accomplish
the second step. In this paper, we propose and empirically evaluate a method
for the final, and possibly most difficult, step. This method efficiently
extracts symbolic rules from trained neural networks. The four major results
of empirical tests of this method are that the extracted rules:
(1) closely reproduce (and can even exceed) the accuracy
    of the network from which they are extracted; 
(2) are superior to the rules produced by
    methods that directly refine symbolic rules; 
(3) are superior to those produced by 
    previous techniques for extracting rules from trained neural networks;
(4) are ``human comprehensible.'' 
Thus, the method demonstrates that neural networks can be an effective tool
for the refinement of symbolic knowledge.  Moreover, the rule-extraction
technique developed herein contributes to the understanding of how symbolic
and connectionist approaches to artificial intelligence can be profitably
integrated.
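A deliberately crude version of the extraction step can be sketched as
subset enumeration over a single thresholded unit.  The paper's technique
is far more sophisticated than this, so treat the function below, with its
made-up name and restriction to non-negative weights on binary inputs,
purely as an illustration of what "extracting a rule" means:

```python
from itertools import combinations

def extract_rules(weights, bias, names):
    """Enumerate minimal sets of binary inputs (non-negative weights
    assumed) whose combined weight overcomes the bias -- each minimal
    set is one symbolic rule for the unit being 'true'.  A toy
    stand-in for the paper's extraction method."""
    minimal = []
    for size in range(1, len(names) + 1):
        for combo in combinations(range(len(names)), size):
            if sum(weights[i] for i in combo) + bias > 0:
                # keep only minimal antecedent sets
                if not any(set(prev) <= set(combo) for prev in minimal):
                    minimal.append(combo)
    return [[names[i] for i in combo] for combo in minimal]

# Unit with weights 4, 4, 1 and bias -6: only A and B together suffice.
print(extract_rules([4, 4, 1], -6, ["A", "B", "C"]))  # [['A', 'B']]
```

Even this toy version shows why extracted rules can be "human
comprehensible": the continuous weights collapse into a short list of
discrete antecedent sets.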

----------

FTP Instructions:

     unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
     Name: anonymous
     Password: neuron
     ftp> cd pub/neuroprose
     ftp> binary

     ftp> get maclin.fskbann.ps.Z
OR...     get scott.nnpid.ps.Z
OR...     get towell.interpretation.ps.Z

     ftp> quit

     unix> uncompress maclin.fskbann.ps.Z
OR...      uncompress scott.nnpid.ps.Z
OR...      uncompress towell.interpretation.ps.Z

     unix> lpr maclin.fskbann.ps
OR...      lpr scott.nnpid.ps
OR...      lpr towell.interpretation.ps
           (or use whatever command you use to print PostScript)


