Tech report abstracts
honavar at cs.wisc.edu
Wed Nov 30 18:23:01 EST 1988
The following technical reports are now available.
Requests for copies may be sent to:
Linda McConnell
Technical reports librarian
Computer Sciences Department
University of Wisconsin-Madison
1210 W. Dayton St.
Madison, WI 53706, USA.
or by e-mail, to: linda at shorty.cs.wisc.edu
PLEASE DO NOT REPLY TO THIS MESSAGE, BUT WRITE TO THE
TECH REPORTS LIBRARIAN FOR COPIES.
-- Vasant
-------------------------------------------------------------------
Computer Sciences TR 793 (also in Proceedings of the 1988
Connectionist Models Summer School, (eds.) Sejnowski, Hinton, and
Touretzky, Morgan Kaufmann, San Mateo, CA)
A NETWORK OF NEURON-LIKE UNITS THAT LEARNS TO PERCEIVE
BY GENERATION AS WELL AS REWEIGHTING OF ITS LINKS
Vasant Honavar and Leonard Uhr
Computer Sciences Department
University of Wisconsin-Madison
Madison, WI 53706. U.S.A.
Abstract
Learning in connectionist models typically involves the
modification of weights associated with the links between
neuron-like units; the topology of the network itself does not
change. This paper describes a new connectionist learning mechanism,
generation, that enables a network of neuron-like elements to
modify its own topology by growing links and recruiting units as
needed (possibly from a pool of available units). A combination of
generation and reweighting of links, appropriate brain-like
constraints on network topology, and regulatory mechanisms and
neuronal structures that monitor the network's performance and
enable it to decide when to generate, is shown to discover, through
feedback-aided learning, substantially more powerful, and
potentially more practical, networks for perceptual recognition
than those obtained through reweighting alone.
The recognition cones model of perception (Uhr 1972, Honavar
1987, Uhr 1987) is used to demonstrate the feasibility of the
approach. Results of simulations of carefully pre-designed
recognition cones illustrate the usefulness of brain-like
topological constraints such as near-neighbor connectivity and
converging-diverging heterarchies for the perception of complex
objects (such as houses) in digitized TV images. In addition,
preliminary results indicate that brain-structured recognition cone
networks can successfully learn to recognize simple patterns (such
as letters of the alphabet and drawings of objects like cups and
apples) using generation-discovery as well as reweighting, whereas
systems that attempt to learn by reweighting alone fail.
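The abstract does not give the mechanism in detail, but the idea of combining reweighting with generation can be sketched roughly as follows. This is a minimal, hypothetical illustration: the class name, the plateau-detection rule, and all thresholds are my own inventions, not taken from the report.

```python
# Hypothetical sketch of "generation" learning: ordinary reweighting,
# plus recruiting a new hidden unit (growing its links) whenever the
# training error stops improving. All names/thresholds are illustrative.
import math
import random

random.seed(0)

def act(x):
    """Logistic activation for a neuron-like unit."""
    return 1.0 / (1.0 + math.exp(-x))

class GrowingNet:
    def __init__(self, n_in):
        self.n_in = n_in
        self.hidden = []      # each hidden unit = list of input weights
        self.out_w = []       # one output weight per hidden unit
        self.bias = 0.0
        self.recruit()        # start with a single recruited unit

    def recruit(self):
        # "Generation": recruit a unit and grow links from the inputs to it.
        self.hidden.append([random.uniform(-1, 1) for _ in range(self.n_in)])
        self.out_w.append(random.uniform(-1, 1))

    def forward(self, x):
        h = [act(sum(w * xi for w, xi in zip(wu, x))) for wu in self.hidden]
        y = act(sum(w * hi for w, hi in zip(self.out_w, h)) + self.bias)
        return y, h

    def train(self, data, epochs=2000, lr=0.5, patience=20):
        best, stall = float("inf"), 0
        for _ in range(epochs):
            err = 0.0
            for x, t in data:
                y, h = self.forward(x)
                err += (t - y) ** 2
                d = (t - y) * y * (1 - y)       # reweighting (delta rule)
                for j, hj in enumerate(h):
                    self.out_w[j] += lr * d * hj
                    dh = d * self.out_w[j] * hj * (1 - hj)
                    for i, xi in enumerate(x):
                        self.hidden[j][i] += lr * dh * xi
                self.bias += lr * d
            if err < best - 1e-4:
                best, stall = err, 0
            else:
                stall += 1
                if stall >= patience:           # plateau: generate
                    self.recruit()
                    stall = 0
        return best

# XOR is a classic task a single reweighted unit cannot learn; generation
# lets the network add capacity when reweighting alone stalls.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = GrowingNet(2)
final_err = net.train(xor)
```

The design point is simply that topology change is driven by a performance monitor (here, a stalled-error counter), echoing the "regulatory mechanisms ... that monitor the network's performance" described in the abstract.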
-------------------------------------------------------------------
Computer Sciences TR 805
Experimental Results Indicate that
Generation, Local Receptive Fields and Global Convergence
Improve Perceptual Learning in Connectionist Networks
Vasant Honavar and Leonard Uhr
Computer Sciences Department
University of Wisconsin-Madison
Abstract
This paper presents and compares results for three types of
connectionist networks:
[A] Multi-layered converging networks of neuron-like units, with
    each unit connected to a small randomly chosen subset of units
    in the adjacent layers, that learn by reweighting of their
    links;
[B] Networks of neuron-like units structured into successively
    larger modules under brain-like topological constraints (such
    as layered, converging-diverging heterarchies and local
    receptive fields) that learn by reweighting of their links;
[C] Networks with brain-like structures that learn by
    generation-discovery, which involves the growth of links and
    the recruiting of units in addition to reweighting of links.
Preliminary empirical results from simulations of these networks
on perceptual recognition tasks show large improvements in learning
from using brain-like structures (e.g., local receptive fields,
global convergence) over networks that lack such structure; further
substantial improvements result from the use of generation in
addition to reweighting of links. We examine some of the
implications of these results for perceptual learning in
connectionist networks.
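The contrast between the connectivity of networks [A] and [B] can be made concrete with a small sketch. This is a hypothetical illustration of the two wiring schemes only (the layer sizes, fan-in, and function names are my own, not from the report): random sparse links versus contiguous local receptive fields in a globally converging pyramid of layers.

```python
# Hypothetical sketch of the two connectivity schemes compared in the
# report: [A] random sparse links vs. [B] local receptive fields with
# global convergence. Sizes and fan-in are illustrative choices.
import random

random.seed(1)

def random_links(n_lower, n_upper, fan_in):
    # [A]: each upper-layer unit links to a random subset of the
    # lower layer, with no spatial structure.
    return {u: random.sample(range(n_lower), fan_in)
            for u in range(n_upper)}

def local_links(n_lower, n_upper, fan_in):
    # [B]: each upper-layer unit links to a contiguous neighborhood
    # of lower-layer units centered beneath it (a local receptive
    # field); n_upper < n_lower gives global convergence.
    links = {}
    for u in range(n_upper):
        center = u * n_lower // n_upper
        lo = max(0, min(center - fan_in // 2, n_lower - fan_in))
        links[u] = list(range(lo, lo + fan_in))
    return links

# A converging "pyramid" of layers, 16 -> 8 -> 4 units, fan-in 4,
# built under each scheme.
sizes = [16, 8, 4]
net_a = [random_links(sizes[i], sizes[i + 1], 4) for i in range(2)]
net_b = [local_links(sizes[i], sizes[i + 1], 4) for i in range(2)]
```

Under scheme [B] every unit sees only a local patch, and successive layers shrink, so information from the whole input is combined only gradually, which is the layered converging-diverging structure the abstract credits with improved learning.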
-------------------------------------------------------------------