Tech reports available

GINDI%GINDI@Venus.YCC.Yale.Edu
Fri May 19 09:54:00 EDT 1989


The following two tech reports are now available. Please send requests to
GINDI at VENUS.YCC.YALE.EDU or by physical mail to:

		Gene Gindi
		Yale University
		Department of Electrical Engineering
		P.O. Box 2157, Yale Station
		New Haven, CT 06520

-------------------------------------------------------------------------------
		Yale University, Dept. Electrical Engineering
		Center for Systems Science
		TR-8903


	Neural Networks for Object Recognition within Compositional
		Hierarchies: Initial Experiments


        Joachim Utans, Gene Gindi *
	Dept. Electrical Engineering
	Yale University
	P.O. Box 2157, Yale Station
	New Haven CT 06520
       *(to whom correspondence should be addressed)	

        Eric Mjolsness, P. Anandan
	Dept. Computer Science
	Yale University
	New Haven CT 06520

			Abstract

We describe experiments with TLville, a neural network for object recognition.
The task is to recognize, in a translation-invariant manner, simple stick
figures. We formulate the recognition task as the problem of matching a graph of
model nodes to a graph of data nodes. Model nodes are simply user-specified
labels for objects such as "vertical stick" or "t-junction"; data nodes are
parameter vectors, such as (x,y,theta), of entities in the data. We use an
optimization approach where an appropriate objective function specifies both the
graph-matching problem and an analog neural net to carry out the optimization.
Since the graph structure of the data is not known a priori, it must be computed
dynamically as part of the optimization. The match metrics are model-specific
and are invoked selectively, as part of the optimization, as various candidate
matches of model-to-data occur. The network supports notions of abstraction in
that the model nodes express compositional hierarchies involving object-part
relationships. Also, a data node matched to a whole object contains a
dynamically computed parameter vector which is an abstraction summarizing the
parameters of data nodes matched to the constituent parts of the whole. Terms in
the match metric specify the desired abstraction. In addition, a solution to the
problem of computing a transformation from retinal to object-centered
coordinates to support recognition is offered by this kind of network; the
transformation is contained as part of the objective function in the form of the
match metric. In experiments, the network usually succeeds in recognizing single
or multiple instances of a single composite model amid instances of non-models,
but it gets trapped in unfavorable local minima of the 5th-order objective when
multiple composite objects are encoded in the database. 
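(Not from the TR itself: as a rough, hedged sketch of the graph-matching-as-optimization idea the abstract describes, the toy below relaxes an analog match matrix between a model graph and a data graph. The softassign/Sinkhorn constraint handling, the scalar node parameters, and all names are this note's assumptions, not the network described above.)

```python
import numpy as np

def softassign_match(A_model, A_data, C, beta=3.0, n_outer=30, n_sinkhorn=20):
    # Analog match variables M[i, a] ~ "model node i matches data node a".
    # Benefit: sum_ia C[i,a]*M[i,a]                       (unary match metric)
    #        + sum A_model[i,j]*A_data[a,b]*M[i,a]*M[j,b] (graph consistency).
    # An approximate one-to-one constraint is kept by Sinkhorn row/column
    # normalization, standing in for constraint terms in an objective function.
    nm, nd = len(A_model), len(A_data)
    M = np.full((nm, nd), 1.0 / nd)
    for _ in range(n_outer):
        Q = C + A_model @ M @ A_data.T          # gradient of the benefit wrt M
        M = np.exp(beta * Q)                    # soft winner-take-all
        for _ in range(n_sinkhorn):             # push toward doubly stochastic
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
    return M

# Toy "stick figure": the model graph is a 3-node path; the data graph is the
# same figure with nodes relabeled, plus a made-up scalar parameter per node.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
perm = np.array([2, 0, 1])
B = A[np.ix_(perm, perm)]
f_model = np.array([0.0, 1.0, 2.0])              # hypothetical node parameters
f_data = f_model[perm]
C = np.exp(-(f_model[:, None] - f_data[None, :]) ** 2)  # unary match metric
assignment = softassign_match(A, B, C).argmax(axis=1)
print(assignment)                                # recovered correspondence
```

The unary term C plays the role of a match metric on node parameters; the TR's actual objective is higher-order and includes the dynamic grouping and abstraction terms the abstract describes.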


-------------------------------------------------------------------------------
		Yale University, Dept. Electrical Engineering
		Center for Systems Science
		TR-8908

         Stickville: A Neural Net for Object Recognition via                   
                       Graph Matching 


        Grant Shumaker 
	School of Medicine, Yale University, New Haven, CT 06510

        Gene Gindi
	Department of Electrical Engineering, Yale University 
	P.O. Box 2157, Yale Station, New Haven, CT 06520
        (to whom correspondence should be addressed)

        Eric Mjolsness, P. Anandan
	Department of Computer Science, Yale University, New Haven, CT 06510

			Abstract

An objective function for model-based object recognition is formulated and used
to specify a neural network whose dynamics carry out the optimization, and hence
the recognition task.  Models are specified as graphs that capture structural
properties of shapes to be recognized.  In addition, compositional (INA) and
specialization (ISA) hierarchies are imposed on the models
as an aid to indexing and are represented in the objective function as sparse
matrices. Data are also represented as a graph.  The optimization is a
graph-matching procedure whose dynamical variables are ``neurons'' hypothesizing
matches between data and model nodes.  The dynamics are specified as a
third-order Hopfield-style network augmented by hard constraints implemented by
``Lagrange multiplier'' neurons.  Experimental results are shown for recognition
in Stickville, a domain of 2-D stick figures.  For small databases, the network
successfully recognizes both an object and its specialization. 
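(Again a hedged illustration, not TR-8908's network: a minimal example of the "Lagrange multiplier neuron" idea — gradient descent on the ordinary variables, gradient ascent on a multiplier, with a small quadratic penalty for stability, in the style of the basic differential multiplier method. The objective and numbers below are made up for the sketch.)

```python
import numpy as np

# Toy constrained problem: minimize ||v - t||^2 subject to sum(v) = 1.
# A "Lagrange multiplier" neuron lam ascends on the constraint residual while
# the v-neurons descend on the augmented Lagrangian
#   L = ||v - t||^2 + lam * (sum(v) - 1) + (c / 2) * (sum(v) - 1)**2.
t = np.array([0.5, 0.3, 0.6])     # arbitrary target
v = np.zeros_like(t)
lam, c, lr = 0.0, 1.0, 0.05
for _ in range(2000):
    g = v.sum() - 1.0                           # constraint residual
    v -= lr * (2.0 * (v - t) + lam + c * g)     # descend on v
    lam += lr * g                               # ascend on lam
print(v, v.sum())
```

At convergence the constraint holds exactly in the limit (closed form: v* = t - (sum(t) - 1)/len(t)); the hard constraints in the TR are enforced by analogous multiplier neurons inside a third-order Hopfield-style network rather than in this simple quadratic setting.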

	----------------------------------------------



More information about the Connectionists mailing list