Papers available by ftp

Dave Opitz <opitz@cs.wisc.edu>
Wed Jun 8 17:09:53 EDT 1994


The following three papers have been placed in an FTP repository at the
University of Wisconsin (abstracts appear at the end of the message).
These papers are also available on the WWW via Mosaic.
Type "Mosaic ftp://ftp.cs.wisc.edu/machine-learning/shavlik-group"
or "Mosaic http://www.cs.wisc.edu/~shavlik/uwml.html"
(for our group's "home page").

Opitz, D. W. & Shavlik, J. W. (1994). "Using genetic search to refine
   knowledge-based neural networks." Proceedings of the 11th International
   Conference on Machine Learning, New Brunswick, NJ. 

Craven, M. W. & Shavlik, J. W. (1994). "Using sampling and queries to extract
   rules from trained neural networks." Proceedings of the 11th International
   Conference on Machine Learning, New Brunswick, NJ. 

Maclin, R. & Shavlik, J. W. (1994). "Incorporating advice into agents that
   learn from reinforcements." Proceedings of the 12th National Conference
   on Artificial Intelligence (AAAI-94), Seattle, WA.
   (A longer version appears as UW-CS TR 1227.) 

----------------------

To retrieve the papers by ftp:

unix> ftp ftp.cs.wisc.edu
Name: anonymous
Password: (Your e-mail address)
ftp> binary
ftp> cd machine-learning/shavlik-group/
ftp> get opitz.mlc94.ps.Z
ftp> get craven.mlc94.ps.Z
ftp> get maclin.aaai94.ps.Z (or get maclin.tr94.ps.Z)
ftp> quit
unix> uncompress opitz.mlc94.ps.Z    (similarly for the other two papers)
unix> lpr opitz.mlc94.ps
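
For non-interactive retrieval, the above session can also be scripted.
Below is a minimal sketch using Python's standard ftplib module; the
e-mail address is a placeholder, and the host, directory, and file
names are assumed to be exactly as listed above.

from ftplib import FTP

PAPERS = ["opitz.mlc94.ps.Z", "craven.mlc94.ps.Z", "maclin.aaai94.ps.Z"]

ftp = FTP("ftp.cs.wisc.edu")
ftp.login("anonymous", "your-email@example.com")   # anonymous login
ftp.cwd("machine-learning/shavlik-group")
for name in PAPERS:
    with open(name, "wb") as f:                    # save locally
        ftp.retrbinary("RETR " + name, f.write)    # binary transfer
ftp.quit()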


==============================================================================

                       Using Genetic Search to Refine
                       Knowledge-Based Neural Networks

                               David W. Opitz
                               Jude W. Shavlik

Abstract: An ideal inductive-learning algorithm should exploit all available
   resources, such as computing power and domain-specific knowledge,
   to improve its ability to generalize.  Connectionist theory-refinement
   systems have proven to be effective at utilizing domain-specific
   knowledge; however, most are unable to exploit available computing
   power.  This weakness occurs because they lack the ability to refine
   the topology of the networks they produce, thereby limiting generalization,
   especially when given impoverished domain theories.  We present
   the REGENT algorithm, which uses genetic algorithms to broaden the type
   of networks seen during its search.  It does this by using (a) the
   domain theory to help create an initial population and (b) crossover
   and mutation operators specifically designed for knowledge-based
   networks.  Experiments on three real-world domains indicate that
   our new algorithm significantly increases generalization compared
   with both a standard connectionist theory-refinement system and our
   previous algorithm for growing knowledge-based networks.
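
For readers who want a concrete picture of the genetic search the
abstract mentions, here is a minimal, generic sketch of such a loop in
Python.  It is not the REGENT algorithm itself: fitness, crossover, and
mutate are hypothetical stand-ins for the knowledge-based-network
operators described in the paper.

import random

def genetic_search(seed_network, fitness, crossover, mutate,
                   pop_size=20, generations=50):
    # (a) seed the population with perturbed copies of the
    #     network built from the domain theory
    population = [mutate(seed_network) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:pop_size // 2]      # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # (b) crossover and mutation specialized for the
            #     network representation
            children.append(mutate(crossover(a, b)))
        population = survivors + children
    return max(population, key=fitness)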

==============================================================================

                 Using Sampling and Queries to Extract Rules
                        from Trained Neural Networks

                               Mark W. Craven
                               Jude W. Shavlik

Abstract:  Concepts learned by neural networks are difficult to understand
   because they are represented using large assemblages of real-valued
   parameters.  One approach to understanding trained neural networks is to
   extract symbolic rules that describe their classification behavior.  There
   are several existing rule-extraction approaches that operate by searching
   for such rules.  We present a novel method that casts rule extraction
   not as a search problem, but instead as a learning problem.
   In addition to learning from training examples, our method exploits
   the property that networks can be efficiently queried.  We describe
   algorithms for extracting both conjunctive and M-of-N rules, and
   present experiments showing that our method is more efficient than
   conventional search-based approaches.
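
To make the "rule extraction as learning" idea concrete, the sketch
below treats a trained network as an oracle: it labels randomly sampled
instances, and the labeled sample is handed to a symbolic rule learner.
The network, the sampling scheme, and rule_learner are hypothetical
placeholders, not the paper's algorithms for conjunctive and M-of-N
rules.

import random

def extract_rules(network, n_features, rule_learner, n_samples=1000):
    examples = []
    for _ in range(n_samples):
        x = [random.randint(0, 1) for _ in range(n_features)]
        y = network(x)              # query the network for its label
        examples.append((x, y))
    return rule_learner(examples)   # induce rules from the labeled sample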

==============================================================================

                     Incorporating Advice into Agents that
                          Learn from Reinforcements

                                 Rich Maclin
                               Jude W. Shavlik

Abstract:  Learning from reinforcements is a promising approach for creating
   intelligent agents.  However, reinforcement learning usually requires a
   large number of training episodes.  We present an approach that addresses
   this shortcoming by allowing a connectionist Q-learner to accept advice
   given, at any time and in a natural manner, by an external observer.
   In our approach, the advice-giver watches the learner and occasionally
   makes suggestions, expressed as instructions in a simple programming 
   language.  Based on techniques from knowledge-based neural networks, 
   these programs are inserted directly into the agent's utility function.
   Subsequent reinforcement learning further integrates and refines the
   advice.  We present empirical evidence showing that our approach leads to
   statistically significant gains in expected reward.  Importantly, the
   advice improves the expected reward regardless of the stage of training
   at which it is given.
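
The sketch below conveys the flavor of the approach in a deliberately
simplified, tabular setting: advice nudges the utility (Q) estimates
for recommended state-action pairs, and ordinary Q-learning then
refines them.  The paper's actual method compiles advice into a
connectionist utility function, which this sketch does not attempt to
reproduce.

from collections import defaultdict

Q = defaultdict(float)   # utility estimates for (state, action) pairs

def apply_advice(Q, advice, bonus=1.0):
    # advice: (state, action) pairs suggested by the external observer;
    # the fixed bonus is an illustrative assumption
    for state, action in advice:
        Q[(state, action)] += bonus

def q_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
    # standard Q-learning backup; over time it integrates, refines,
    # or overrides the advice-biased estimates
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])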


