2 papers on robot learning

Sebastian Thrun thrun at uran.cs.bonn.edu
Tue Feb 15 08:25:02 EST 1994



This is to announce two recent papers in the connectionists' archive.
Both papers deal with robot learning issues. The first paper describes
two learning approaches (EBNN with reinforcement learning, COLUMBUS),
and the second paper gives some empirical results for learning robot
navigation using reinforcement learning and EBNN.  Both approaches have
been evaluated using real robot hardware.

Enjoy reading!
Sebastian



------------------------------------------------------------------------


                   LIFELONG ROBOT LEARNING


         Sebastian Thrun              Tom Mitchell
        University of Bonn    Carnegie Mellon University

Learning provides a useful tool for the automatic design of autonomous
robots.  Recent research on learning robot control has predominantly
focussed on learning single tasks that were studied in isolation.  If
robots encounter a multitude of control learning tasks over their
entire lifetime, however, there is an opportunity to transfer knowledge
between them. In order to do so, robots may learn the invariants of the
individual tasks and environments. This task-independent knowledge can
be employed to bias generalization when learning control, which reduces
the need for real-world experimentation.  We argue that knowledge
transfer is essential if robots are to learn control with moderate
learning times in complex scenarios.  Two approaches to lifelong robot
learning which both capture invariant knowledge about the robot and its
environments are reviewed.  Both approaches have been evaluated using a
HERO-2000 mobile robot.  Learning tasks included navigation in unknown
indoor environments and a simple find-and-fetch task.
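The transfer idea in this abstract -- extract task-invariant knowledge from earlier tasks, then use it to bias generalization on a new task with little data -- can be illustrated with a toy numpy sketch. Everything below (the shared linear feature map, the task "heads", the dimensions) is an illustrative assumption of mine, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumption: all tasks depend on the same 2-D "invariant" features
# of a 6-D sensor vector. The robot learns this subspace from earlier
# tasks and reuses it, so a new task needs only a tiny task-specific head.
B_true = rng.normal(size=(2, 6))          # the invariant feature map (unknown)

def make_task(head, n):
    X = rng.normal(size=(n, 6))           # random sensor readings
    y = X @ B_true.T @ head               # task output = head(features(x))
    return X, y

# Two earlier tasks with plenty of data: each yields one weight vector,
# and together the vectors span the invariant feature subspace.
heads = [np.array([1.0, -2.0]), np.array([0.5, 1.5])]
W = []
for h in heads:
    X, y = make_task(h, 200)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    W.append(w)
subspace = np.array(W)                    # (2, 6): learned invariant directions

# New task: only 4 examples for a 6-D input -- too few to learn from
# scratch, but enough to fit a 2-D head on the transferred features.
head_new = np.array([2.0, 0.3])
X_new, y_new = make_task(head_new, 4)
Z = X_new @ subspace.T                    # project onto learned invariants
c, *_ = np.linalg.lstsq(Z, y_new, rcond=None)

# Predictions on fresh data reuse the invariant knowledge.
X_test, y_test = make_task(head_new, 20)
pred = X_test @ subspace.T @ c
```

In this noiseless toy setting the four examples suffice exactly because the search is restricted to the transferred subspace -- the same role the paper assigns to learned invariants: reducing the need for real-world experimentation.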


                                          (Technical Report IAI-TR-93-7,
                                           Univ. of Bonn, CS Dept.)


------------------------------------------------------------------------



           AN APPROACH TO LEARNING ROBOT NAVIGATION

                Sebastian Thrun, Univ. of Bonn


Designing robots that can learn by themselves to perform complex
real-world tasks is still an open challenge for the fields of Robotics
and Artificial Intelligence.  In this paper we describe an approach to
learning indoor robot navigation through trial-and-error.  A mobile
robot, equipped with visual, ultrasonic and infrared sensors, learns to
navigate to a designated target object.  In less than 10 minutes of
operation time, the robot is able to navigate to a marked target
object in an office environment.  The underlying learning mechanism is
the explanation-based neural network (EBNN) learning algorithm.  EBNN
initially learns functions from scratch using neural network
representations.  With increasing experience, EBNN employs domain
knowledge to explain and to analyze training data in order to
generalize in a knowledgeable way.
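A core mechanism in EBNN is fitting the learner to target slopes as well as target values, with the slopes obtained by differentiating a (possibly approximate) domain model that "explains" each training example. The numpy sketch below shows only that value-plus-slope fitting idea on a toy function; the sine "world", the polynomial learner, and all names are my illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def target(x):              # the unknown "world" function to be learned
    return np.sin(x)

def domain_model_slope(x):  # slopes from explaining each example with a
    return np.cos(x)        # domain model (exact here; EBNN's is approximate)

# Learner: cubic polynomial p(x) = w0 + w1*x + w2*x^2 + w3*x^3,
# fit by least squares over stacked value AND slope equations.
xs = np.array([0.0, 1.0, 2.0])
A_val = np.vander(xs, 4, increasing=True)        # rows: [1, x, x^2, x^3]
A_slope = np.column_stack([np.zeros_like(xs),    # d/dx of each basis term
                           np.ones_like(xs),
                           2 * xs,
                           3 * xs**2])
A = np.vstack([A_val, A_slope])                  # 6 equations, 4 unknowns
b = np.concatenate([target(xs), domain_model_slope(xs)])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

# Evaluate between training points: the slope constraints let three
# examples pin down the function far better than values alone would.
x_test = 1.5
pred = np.polyval(w[::-1], x_test)
```

The payoff mirrors the abstract's claim: explaining examples through domain knowledge extracts more constraint per observation, so fewer real-world trials are needed.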


                                (to appear in: Proceedings of the 
                                 IEEE Conference on Intelligent
                                 Robots and Systems 1994)

------------------------------------------------------------------------


Postscript versions of both papers may be retrieved from Jordan
Pollack's neuroprose archive by following the instructions below.

	unix>           ftp archive.cis.ohio-state.edu

	ftp login name> anonymous
	ftp password>   xxx@yyy.zzz
	ftp>            cd pub/neuroprose
	ftp>            bin
	ftp>            get thrun.lifelong-learning.ps.Z
	ftp>            get thrun.learning-robot-navg.ps.Z
	ftp>            bye

	unix>           uncompress thrun.lifelong-learning.ps.Z
	unix>           uncompress thrun.learning-robot-navg.ps.Z
	unix>           lpr thrun.lifelong-learning.ps
	unix>           lpr thrun.learning-robot-navg.ps



