Two tech reports

Gerry Tesauro panther!panther.UUCP!gjt at uxc.cso.uiuc.edu
Fri Aug 5 18:49:14 EDT 1988


Two new Center for Complex Systems Research Tech. Reports are
now available; the abstracts appear below. (A cautionary note:
CCSR-88-6 describes an obsolete network, and is of no use to
readers unfamiliar with backgammon.) Requests may be sent to:

gjt%panther at uxc.cso.uiuc.edu

or to the US mail address that appears below.

------------------------
     Neural Network Defeats Creator in Backgammon Match

                         G. Tesauro

            Center for Complex Systems Research,
        University of Illinois at Urbana-Champaign,
          508 S. 6th St., Champaign, IL 61820 USA

               Technical Report No. CCSR-88-6

          This paper presents an annotated record of a 20-game
     match which I played against one of the networks discussed
     in ``A Parallel Network that Learns to Play Backgammon,'' by
     myself and Terry Sejnowski (Tech. Report CCSR-88-2, and
     Artificial Intelligence, to appear).  This paper is
     specifically intended for backgammon enthusiasts who want to
     see exactly how the network plays.  The surprising result of
     the match was that the network won, 11 games to 9.  However,
     the network made several blunders during the course of the
     match, and was extremely lucky to have won.  Nevertheless,
     in spite of its weak worst-case play, the network's average
     performance in typical positions is quite sharp, and more
     challenging than that of conventional commercial programs.
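
(Editorial note, not from the report: networks of this kind are
typically used as position evaluators, so that choosing a move
reduces to scoring the position reached by each legal move and
playing the best-scoring one. The Python sketch below illustrates
only that general scheme; the names -- encode, apply_move, W1, and
so on -- are invented for illustration and are not the actual
representation or move-selection method of CCSR-88-2.)

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical one-hidden-layer evaluator.  The weights W1, b1,
# W2, b2 are assumed already trained; encode() maps a board
# position to a fixed-length feature vector.
def score(position, W1, b1, W2, b2, encode):
    x = np.asarray(encode(position))    # board -> feature vector
    h = sigmoid(W1 @ x + b1)            # hidden layer
    return float(sigmoid(W2 @ h + b2))  # estimated value of position

def choose_move(position, legal_moves, apply_move, net, encode):
    # Score the position reached by each legal move; play the best.
    return max(legal_moves,
               key=lambda m: score(apply_move(position, m),
                                   *net, encode))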
------------------------
       Asymptotic Convergence of Back-Propagation in
                   Single-Layer Networks

                  Gerald Tesauro and Yu He

            Center for Complex Systems Research
         University of Illinois at Urbana-Champaign
          508 S. 6th St., Champaign, IL 61820 USA

                Technical Report No. CCSR-88-7

          We calculate analytically the rate of convergence at
     long times in the back-propagation learning algorithm for
     networks without hidden units.  For the standard quadratic
     error function and a sigmoidal transfer function, we find
     that the error decreases as 1/t for large t, and the output
     states approach their target values as 1/sqrt(t).  It is
     possible to obtain a different convergence rate for certain
     error and transfer functions, but the convergence can never
     be faster than 1/t.  These results also hold when a momentum
     term is added to the learning algorithm.  Our calculation
     agrees with the numerical results of Ahmad and Tesauro.
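
(Editorial note, not part of the abstract: the 1/t rate can be
made plausible by a back-of-the-envelope argument.  The following
is a heuristic sketch under simplifying assumptions -- a single
logistic output approaching target 1 -- not the report's full
calculation.)

For a logistic output $y = \sigma(h) = 1/(1+e^{-h})$ with target
1, write the residual error as $\epsilon = 1 - y$.  Near
saturation, $\sigma'(h) = y(1-y) \approx \epsilon$, so gradient
descent on $E = \tfrac{1}{2}\epsilon^2$ drives

    $$\dot{h} \;\propto\; -\frac{\partial E}{\partial h}
          \;=\; \epsilon\,\sigma'(h) \;\approx\; \epsilon^2 .$$

Since $\epsilon \approx e^{-h}$ for large $h$, this gives
$\dot{\epsilon} = -\sigma'(h)\,\dot{h} \propto -\epsilon^3$, whose
solution decays as $\epsilon \sim t^{-1/2}$; hence the error
$E \sim \epsilon^2 \sim 1/t$, matching the rates quoted above.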

