Tech Reports Available

Yeong-Ho Yu yu at cs.utexas.edu
Tue Apr 10 06:38:22 EDT 1990


  The following two technical reports are available.  They will
appear in the Proceedings of IJCNN-90.


----------------------------------------------------------------------


                     EXTRA OUTPUT BIASED LEARNING


                  Yeong-Ho Yu  and Robert F. Simmons
                AI Lab, The University of Texas at Austin                   

                  March 1990          AI90-128

                            ABSTRACT

   One way to view feed-forward neural networks is to regard them as
mapping functions from the input space to the output space.  In this
view, the immediate goal of back-propagation in training such a
network is to find a correct mapping function among the set of all
possible mapping functions of the given topology.  However, finding a
correct one is sometimes not an easy task, especially when there are
local minima.  Moreover, it is even harder to train a network so
that it produces correct outputs not only for the training patterns
but also for novel patterns that the network has never seen before.
This so-called generalization capability is poorly understood, and
there is little guidance for achieving better generalization.  This
paper presents a unified viewpoint for the training and generalization
of a feed-forward network, and a technique for improved training and
generalization based on this viewpoint.
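
(Illustrative sketch, not from the report: the mapping-function view
and plain back-propagation can be written out in a few lines of
Python.  The network size, learning rate, and XOR patterns below are
assumptions chosen only to make the sketch runnable; the report's
extra-output-biased technique itself is not shown.)

import numpy as np

rng = np.random.default_rng(0)

# Toy training patterns (XOR) -- illustrative assumptions only.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer of 4 sigmoid units; random initial weights pick one
# mapping function out of all those the topology can realize.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.5                           # learning rate (assumed)
for epoch in range(20000):
    # Forward pass: the network as a mapping from inputs to outputs.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: gradient descent on the summed squared error.
    dY = (Y - T) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(axis=0)
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(axis=0)

print("outputs after training:", Y.ravel().round(3))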


------------------------------------------------------------------------


              DESCENDING EPSILON IN BACK-PROPAGATION:
              A TECHNIQUE FOR BETTER GENERALIZATION

                Yeong-Ho Yu  and Robert F. Simmons
             AI Lab, The University of Texas at Austin                   

                  March 1990          AI90-130

                          ABSTRACT

  There are two measures of the optimality of a trained feed-forward
network for the given training patterns.  One is the global error
function, the sum of squared differences between target outputs and
actual outputs over all output units of all training patterns.  The
most popular training method, back-propagation based on the
Generalized Delta Rule, minimizes the value of this function.  In
this method, the smaller the global error, the better the network is
supposed to be.  The other measure is the correctness ratio, which
shows for what percentage of the training patterns the network
generates the correct binary outputs once its real-valued outputs are
converted to binary values.  In practice, this is often the measure
that really matters.  This paper argues that these two measures are
not parallel and presents a technique with which back-propagation
achieves a high correctness ratio.  The results show that networks
trained with this technique often exhibit high correctness ratios not
only for the training patterns but also for novel patterns.
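
(Illustrative sketch, not from the report: the two measures can be
computed directly from a network's outputs.  The target and output
values and the 0.5 threshold below are assumptions made only for the
example.)

import numpy as np

# Hypothetical targets and actual outputs, one row per training pattern.
targets = np.array([[1., 0.], [0., 1.], [1., 1.]])
outputs = np.array([[0.7, 0.2], [0.4, 0.6], [0.9, 0.45]])

# Global error: sum of squared differences over all output units of
# all training patterns (the quantity back-propagation minimizes).
global_error = np.sum((targets - outputs) ** 2)

# Correctness ratio: fraction of patterns whose outputs, converted to
# binary with an assumed 0.5 threshold, match the targets exactly.
binary = (outputs >= 0.5).astype(float)
correctness_ratio = np.mean(np.all(binary == targets, axis=1))

print("global error     :", global_error)       # 0.7625
print("correctness ratio:", correctness_ratio)  # 2 of 3 patterns correct

The third pattern shows why the two measures need not agree: its
outputs miss their targets by only 0.1 and 0.55, adding about 0.31 to
the global error, yet thresholding turns the second unit into a 0
instead of a 1, so the whole pattern counts as wrong for the
correctness ratio.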


-----------------------------------------------------------------------
To obtain copies, either:

a) use the getps script (by Tony Plate and Jordan Pollack, posted on
connectionists a few weeks ago)

b)
unix> ftp cheops.cis.ohio-state.edu          # (or ftp 128.146.8.62)
Name (cheops.cis.ohio-state.edu:): anonymous
Password (cheops.cis.ohio-state.edu:anonymous): <ret>
ftp> cd pub/neuroprose
ftp> binary
ftp> get
(remote-file) yu.output-biased.ps.Z 
(local-file) foo.ps.Z
ftp> get
(remote-file) yu.epsilon.ps.Z
(local-file) bar.ps.Z
ftp> quit
unix> uncompress foo.ps.Z bar.ps.Z
unix> lpr -P(your_local_postscript_printer) foo.ps bar.ps

c) If you have any problems accessing the directory above,
   send a request to 
   
         yu at cs.utexas.edu

               or

            Yeong-Ho Yu
            AI  Lab
            The University of Texas at Austin
            Austin, TX 78712.


------------------------------------------------------------------------

