Abstracts for 3 papers in neuroprose

W. Leow leow%pav.mcc.com at mcc.com
Wed Dec 30 16:23:17 EST 1992


The following 3 papers have been placed in the neuroprose archive
(sorry, no hard copies available):

-------------------------------------------------------------------
Representing Visual Schemas in Neural Networks for Object Recognition

Wee Kheng Leow and Risto Miikkulainen
Department of Computer Sciences
The University of Texas at Austin, Austin, TX 78712
leow,risto at cs.utexas.edu

Technical Report AI92-190
December 1992

This research focuses on the task of recognizing objects in simple
scenes using neural networks. It addresses two general problems in
neural network systems: (1) processing large amounts of input with
limited resources, and (2) the representation and use of structured
knowledge. The first problem arises because no practical neural network
can process all the visual input simultaneously and efficiently. The
solution is to process a small amount of the input in parallel, and
successively focus on other parts of the input. This strategy requires
that the system maintain structured knowledge for describing and
interpreting successively gathered information.

The proposed system, VISOR, consists of two main modules. The Low-Level
Visual Module (simulated using procedural programs) extracts featural
and positional information from the visual input. The Schema Module
(implemented with neural networks) encodes structured knowledge about
possible objects, and provides top-down information for the Low-Level
Visual Module to focus attention on different parts of the scene.
Working cooperatively with the Low-Level Visual Module, it builds a
globally consistent interpretation of successively gathered visual
information.
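
To make the control flow concrete, here is a minimal Python sketch of
the two-module loop described above. The function names and the toy
"scene" are illustrative assumptions, not VISOR's actual code:

import random

def low_level_extract(scene, loc):
    # Featural and positional information at the attended location
    # (VISOR simulates this module with procedural programs).
    return scene[loc], loc

def schema_interpret(interpretation, feature, loc):
    # Fold successively gathered evidence into one global interpretation.
    interpretation[loc] = feature
    return interpretation

def schema_next_fixation(scene, interpretation):
    # Top-down choice of where to attend next: any unvisited location.
    unvisited = [l for l in scene if l not in interpretation]
    return random.choice(unvisited) if unvisited else None

def recognize(scene):
    interpretation = {}
    loc = next(iter(scene))
    while loc is not None:
        feature, loc = low_level_extract(scene, loc)
        interpretation = schema_interpret(interpretation, feature, loc)
        loc = schema_next_fixation(scene, interpretation)
    return interpretation

# Toy scene: features at grid positions.
scene = {(0, 0): "circle", (0, 1): "square", (1, 0): "triangle"}
print(recognize(scene))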

-------------------------------------------------------------------
Self-Organization with Lateral Connections

Joseph Sirosh and Risto Miikkulainen
Department of Computer Sciences
The University of Texas at Austin, Austin, TX 78712
sirosh,risto at cs.utexas.edu

Technical Report AI92-191
December 1992

A self-organizing neural network model for the development of afferent
and lateral input connections in cortical feature maps is presented.
The weight adaptation process is purely activity-dependent,
unsupervised, and local.  The afferent input weights self-organize into
a topological map of the input space. At the same time, the lateral
interaction weights develop a smooth "Mexican hat"-shaped
distribution.  Weak lateral connections die off, leaving a pattern of
connections that represents the significant long-term correlations of
activity on the feature map. The model demonstrates how
self-organization can bootstrap itself based on input information only,
without global supervision or predetermined lateral interaction. The
model may account for experimental observations such as critical
periods for self-organization in cortical maps and the development of
horizontal connections in the primary visual cortex.
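
To give a flavor of the adaptation process, here is a toy Python/NumPy
stand-in (the rules and parameters are illustrative assumptions, not
the report's equations, and the settling of activity through the
lateral connections is omitted): afferent weights follow a
Kohonen-style Hebbian rule, lateral weights follow a normalized Hebb
rule on unit activities, and weak lateral links are pruned at the end.

import numpy as np

rng = np.random.default_rng(0)
n, dim = 20, 2                        # map units, input dimensionality
afferent = rng.random((n, dim))       # afferent input weights
lateral = rng.random((n, n)) * 0.1    # lateral interaction weights
np.fill_diagonal(lateral, 0.0)

for t in range(2000):
    x = rng.random(dim)                                  # random input
    act = np.exp(-np.sum((afferent - x) ** 2, axis=1))   # unit activities
    # Local, activity-dependent afferent update (Kohonen-style Hebb rule).
    afferent += 0.05 * act[:, None] * (x - afferent)
    # Hebbian lateral update, normalized to keep the weights bounded.
    lateral += 0.01 * np.outer(act, act)
    np.fill_diagonal(lateral, 0.0)
    lateral /= lateral.sum(axis=1, keepdims=True)

lateral[lateral < 0.01] = 0.0         # weak lateral connections die off
print("surviving lateral connections:", np.count_nonzero(lateral))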

-------------------------------------------------------------------
Incremental grid growing: Encoding high-dimensional structure into a
two-dimensional feature map

Justine Blackmore and Risto Miikkulainen
Department of Computer Sciences
The University of Texas at Austin, Austin, TX 78712
justine,risto at cs.utexas.edu

Technical Report AI92-192
December 1992

Knowledge of clusters and their relations is important in understanding
high-dimensional input data with unknown distribution.  Ordinary feature
maps with fully connected, fixed grid topology cannot properly reflect
the structure of clusters in the input space---there are no cluster
boundaries on the map.  Incremental feature map algorithms, where nodes
and connections are added to or deleted from the map according to the
input distribution, can overcome this problem.  However, such
algorithms have so far been limited to maps that can be drawn in 2-D
only when the input space itself is 2-dimensional.  In the approach
proposed in this
paper, nodes are added incrementally to a regular, 2-dimensional grid,
which is drawable at all times, irrespective of the dimensionality of
the input space. The process results in a map that explicitly represents
the cluster structure of the high-dimensional input.
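
A much-simplified Python sketch of the growing scheme (the error
measure, growth rule, and parameters are illustrative assumptions, not
the report's exact algorithm). However many dimensions the inputs
have, the node positions remain 2-D grid coordinates, so the map stays
drawable throughout:

import numpy as np

rng = np.random.default_rng(1)
dim = 5                                    # high-dimensional input
nodes = {(0, 0): rng.random(dim),          # 2-D grid position -> weights
         (0, 1): rng.random(dim)}

def neighbors(pos):
    r, c = pos
    return [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]

for epoch in range(5):
    error = {p: 0.0 for p in nodes}
    for _ in range(200):
        x = rng.random(dim)
        # Best-matching node; adapt it locally and accumulate its error.
        best = min(nodes, key=lambda p: float(np.sum((nodes[p] - x) ** 2)))
        error[best] += float(np.sum((nodes[best] - x) ** 2))
        nodes[best] += 0.1 * (x - nodes[best])
    # Grow the 2-D grid: attach a new node beside the highest-error node.
    worst = max(error, key=error.get)
    open_spots = [p for p in neighbors(worst) if p not in nodes]
    if open_spots:
        nodes[open_spots[0]] = nodes[worst].copy()

print("grid positions (always drawable in 2-D):", sorted(nodes))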

-------------------------------------------------------------------
The standard instructions apply:
Use getps, or:

Unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: <your login id or whatever>
ftp> binary
ftp> cd pub/neuroprose
ftp> get leow.visual-schemas.ps.Z
ftp> get sirosh.lateral.ps.Z
ftp> get blackmore.incremental.ps.Z
ftp> quit

Unix> zcat leow.visual-schemas.ps.Z |lpr
Unix> zcat blackmore.incremental.ps.Z |lpr
Unix> uncompress sirosh.lateral.ps.Z
Unix> lpr -s sirosh.lateral.ps

sirosh.lateral.ps is over 5MB uncompressed (it is only 16 pages, but
has huge figures). If your laser printer does not have that much
memory, you will most likely need lpr -s, or a utility such as
psselect or psrev to print it in smaller chunks; see the example below.
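
For example, assuming the psutils version of psselect is installed,
the first eight pages could be printed on their own with:

Unix> psselect -p1-8 sirosh.lateral.ps | lpr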

Enjoy!

