NIPS 95 workshop schedule

Eric Mjolsness emj at cs.ucsd.edu
Sun Nov 26 02:03:22 EST 1995




NIPS-95 Workshop, ** Tentative Schedule **

Statistical and Structural Models in Network Vision

Friday, December 1, 1995
Organizers: Eric Mjolsness and Anand Rangarajan
URL: http://www-cse.ucsd.edu/users/emj/workshop95.html


Overview:

Neural network and model-based approaches to vision are usually
regarded as opposing tendencies.  Whereas neural net methods often
focus on images and learned feature detectors, model-based methods
concentrate on matching high-level representations of objects and
their parts and other intrinsic properties.  It is possible that
the two approaches can be integrated in the context of statistical
models which have the flexibility to represent patterns in both image
space and in higher-level object feature spaces.  The workshop will
examine the possibilities for and progress in formulating such
models for vision problems, particularly those models which can
result in neural network architectures.

*Tentative Schedule*

7:30am - 7:50am Eric Mjolsness, "Workshop Overview"
7:50am - 8:15am Chris Bregler, "Soft Features for Soft Classifiers"
8:15am - 8:40am Hayit Greenspan, "Preprocessing and Learning in
		Rotation-Invariant Texture Recognition"
8:40am - 9:05am Alan Yuille, "Deformable Templates for Object Recognition:
		Geometry and Lighting"
9:05am - 9:30am Anand Rangarajan, "Bayesian Tomographic Reconstruction
		using Mechanical Models as Priors"

9:30am - 4:30pm Exercises

4:30pm - 4:50pm Paul Viola, "Recognition by Complex Features"
4:50pm - 5:15pm Lawrence Staib, "Model-based Parametrically Deformable
		Boundary Finding"
5:15pm - 5:40pm Steven Gold, "Recognizing Objects with Recurrent
		Neural Networks by Matching Structural Models"
5:40pm - 6:05pm Robert M. Haralick, "Annotated Computer Vision Data Sets"
6:05pm - 6:30pm Eric Saund, "Representations for Perceptual
		Level Chunks in Line Drawings"


*Abstracts*

Chris Bregler, U.C.Berkeley, "Soft Features for Soft Classifiers"
 
Most connectionist approaches applied to visual domains either make
little use of preprocessing or are based on very high-level input
representations.  The former are motivated by the concern not to lose
any information useful for the final classification, and they show how
powerful such algorithms are at extracting relevant features
automatically.  "Hard" decisions like edge detectors, line finders,
etc. don't fit this philosophy of adaptability across all levels.
We attempt to find a balance between both extremes and show how mature
"soft" preprocessing techniques like rich sets of scaled and rotated
Gaussian derivatives, second-moment texture statistics, and Hierarchical
Mixtures of Experts can be applied to the domain of car classification.
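A bank of scaled and rotated Gaussian derivative filters of the kind the
abstract mentions can be sketched as follows (an illustrative construction,
not the authors' exact preprocessing pipeline):

```python
import numpy as np

def gaussian_derivative_kernel(sigma, theta, size=None):
    """First derivative of a 2-D Gaussian, taken along direction theta."""
    if size is None:
        size = int(6 * sigma) | 1          # odd width covering +/- 3 sigma
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    # rotate coordinates so the derivative is taken along direction theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return -xr / sigma**2 * g / (2 * np.pi * sigma**2)

def filter_bank(sigmas, n_orientations):
    """All scale/orientation combinations: a 'soft' feature bank."""
    thetas = [np.pi * k / n_orientations for k in range(n_orientations)]
    return [gaussian_derivative_kernel(s, t) for s in sigmas for t in thetas]

bank = filter_bank(sigmas=[1.0, 2.0], n_orientations=4)
```

Convolving an image with each kernel in the bank yields a multi-scale,
multi-orientation response vector per pixel, which can then feed a
classifier such as a Hierarchical Mixture of Experts.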


Steven Gold, Yale University, "Recognizing Objects with Recurrent
Neural Networks by Matching Structural Models"
                     
    Attributed relational graphs (ARGs) are used to create structural
models of objects. Recently developed optimization techniques that
have emerged from the neural network/statistical physics framework
are then used to construct algorithms for matching ARGs. Experiments
conducted on ARGs generated from images are presented.
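The optimization idea behind ARG matching can be sketched as a
graduated-assignment style relaxation: alternate a softmax over a match
matrix with row/column (Sinkhorn) normalization while an annealing
parameter hardens the assignment. This is a minimal sketch of that family
of algorithms, not the speaker's exact method; all parameter values are
illustrative.

```python
import numpy as np

def match_args(A, B, node_a, node_b, beta0=0.5, beta_max=10.0,
               rate=1.5, n_sinkhorn=30):
    """Soft-match two attributed relational graphs.

    A, B: adjacency matrices; node_a, node_b: node attribute vectors.
    Returns a (near-)doubly-stochastic match matrix M.
    """
    na, nb = len(node_a), len(node_b)
    # node compatibility: negative squared attribute distance
    C = -(node_a[:, None] - node_b[None, :]) ** 2
    M = np.full((na, nb), 1.0 / nb)
    beta = beta0
    while beta < beta_max:
        # gradient of the matching objective: node term + relational term
        Q = C + A @ M @ B.T
        M = np.exp(beta * Q)
        for _ in range(n_sinkhorn):   # Sinkhorn row/column normalization
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
        beta *= rate                  # deterministic annealing schedule
    return M

# demo: graph B is graph A with its nodes permuted
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
perm = np.array([2, 0, 1])
B = A[np.ix_(perm, perm)]
node_a = np.array([0.0, 1.0, 2.0])
M = match_args(A, B, node_a, node_a[perm])
```

Taking the row-wise argmax of M recovers the permutation relating the
two graphs.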


Hayit Greenspan, Caltech, "Preprocessing and Learning
in Rotation-Invariant Texture Recognition"

A number of texture recognition systems have recently been proposed in
the literature, giving very high classification accuracy.
Almost all of these systems fail miserably when invariances, such as
rotation or scale, are to be included; invariant recognition is
clearly the next major challenge for recognition systems.

Rotation invariance can be achieved in one of two ways: either by
extracting rotation-invariant features, or by training the classifier
appropriately so that it "learns" invariant properties.
Learning invariances from the raw data is strongly influenced by
the rotation angles included in the system's training set: the more
examples, the better the performance.
We compare this with a mechanism that extracts the rotation-invariant
features {\em prior} to the learning phase. We introduce a powerful
image representation space based on a steerable filter set, along
with a new encoding scheme for extracting the invariant features.

Our results strongly indicate the advantage of extracting a powerful
image representation prior to the learning process, with savings
in both storage and computational complexity.  Rotation-invariant
texture recognition results and a demo will be shown.
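One standard way to encode oriented-filter responses rotation-invariantly
is to exploit the fact that rotating the image cyclically shifts the
responses across the orientation index, so the DFT magnitude over that
index is unchanged. This is a plausible encoding in the spirit of the
abstract, not necessarily the authors' exact scheme:

```python
import numpy as np

def rotation_invariant_code(responses):
    """Rotation-invariant encoding of N oriented-filter responses.

    A rotation of the input cyclically shifts the orientation axis;
    the DFT magnitude along that axis is invariant to such shifts.
    """
    return np.abs(np.fft.fft(responses, axis=-1))

r = np.array([1.0, 3.0, 2.0, 0.5])       # responses at 4 orientations
shifted = np.roll(r, 1)                   # same texture, rotated view
code_a = rotation_invariant_code(r)
code_b = rotation_invariant_code(shifted)
```

The two codes are identical even though the raw response vectors differ,
which is exactly the property that spares the classifier from having to
learn every rotation angle from examples.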


Robert M. Haralick, University of Washington,
"Annotated Computer Vision Data Sets"

Recognizing features with a protocol that learns
from examples requires that there be many example
instances. In this talk, we describe an annotated
data set of 78 RADIUS images of 2 different
3D scenes, which we
have prepared for CD-ROM distribution. The feature annotation
includes building edges, shadow edges, clutter edges,
and corner positions. The image data set also has
photogrammetric data of corresponding 3D and 2D points
and corresponding 2D pass points. The interior orientation
and exterior orientation parameters for all images are
given. The availability of this data set makes
possible the comparison of different algorithms and
makes possible very careful experiments in
feature extraction using neural net approaches.


Eric Mjolsness, "Workshop Overview"

I will introduce the workshop and discuss possibilities
for integration between some of the research directions
represented by the participants.


Anand Rangarajan, Yale University, "Bayesian Tomographic Reconstruction
using Mechanical Models as Priors"

We introduce a new prior---the weak plate---for tomographic reconstruction.
MAP estimates are obtained via a deterministic-annealing Generalized EM
algorithm which avoids poor local minima. Bias/variance simulation
results on an autoradiograph phantom demonstrate the superiority of the
weak plate prior over other first-order priors used in the literature.
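The weak plate is a "mechanical" prior: it charges for curvature
(squared second derivatives) but caps the cost, so sharp creases are
tolerated rather than smoothed away. A 1-D sketch of the energy term
(the paper uses a 2-D version inside the annealed GEM reconstruction;
the parameter values here are illustrative):

```python
import numpy as np

def weak_plate_energy(f, lam=1.0, alpha=0.25):
    """Weak plate prior energy for a 1-D signal f.

    Quadratic in the discrete second derivative, but saturated at
    alpha, so a crease costs at most alpha instead of growing without
    bound as a pure thin-plate penalty would.
    """
    d2 = f[2:] - 2 * f[1:-1] + f[:-2]        # discrete second derivative
    return float(np.minimum(lam * d2**2, alpha).sum())

ramp = np.arange(10.0)                       # linear: zero curvature
crease = np.minimum(np.arange(10.0), 5.0)    # one sharp crease
```

A linear ramp costs nothing, while a single crease costs at most alpha;
that saturation is what lets the prior preserve edges in the
reconstruction.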


Eric Saund, Xerox Palo Alto Research Center,
"Representations for Perceptual Level Chunks in Line Drawings"
	
	In a line drawing, what makes a box a box?  A perfectly drawn
box is easy to recognize because it presents a remarkable conjunction
of crisp spatial properties yielding a wealth of necessary and sufficient
conditions to test.  But if it is drawn sloppily, many ideal properties
such as closure, squareness, and straightness of the sides, give way.
In addressing this problem, most attendees of NIPS would probably look
to the supple adaptability and warm fuzziness of statistical approaches
over the cold fragility of logic-based specifications.  Even so,
however, some representations generalize better than others.
This talk will address alternative representations for visual objects
presented at an elemental level as curvilinear lines, with a look to
densely overlapping distributed representations in which a large number
of properties can negotiate their relative significance.


Lawrence Staib, Yale University, "Model-based Parametrically
Deformable Boundary Finding"

This work describes a global shape parametrization for smoothly deformable
curves and surfaces.  This representation is used as a model for geometric
matching to image data.  The parametrization represents the curve or
surface using sinusoidal basis functions and allows a wide variety of
smooth boundaries to be described with a small number of parameters.
Extrinsic model-based information is incorporated by introducing prior
probabilities on the parameters based on a sample of objects.  Boundary
finding is then formulated as an optimization problem.  The method has
been applied to synthetic images and three-dimensional medical images of
the heart and brain.
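The economy of a sinusoidal basis parametrization can be illustrated for
a closed 2-D curve: a handful of harmonic coefficients already describes
a wide family of smooth boundaries. This is an illustrative sketch, not
the paper's exact parametrization:

```python
import numpy as np

def fourier_curve(a0, coeffs, n_points=100):
    """Closed 2-D curve from sinusoidal basis coefficients.

    a0 is the centroid (x, y); coeffs[k] = (ax, bx, ay, by) are the
    harmonic-(k+1) cosine/sine coefficients for x(t) and y(t).
    """
    t = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
    x = np.full_like(t, a0[0])
    y = np.full_like(t, a0[1])
    for k, (ax, bx, ay, by) in enumerate(coeffs, start=1):
        x += ax * np.cos(k * t) + bx * np.sin(k * t)
        y += ay * np.cos(k * t) + by * np.sin(k * t)
    return x, y

# a single harmonic with ax = by = r traces a circle of radius r
x, y = fourier_curve((0.0, 0.0), [(2.0, 0.0, 0.0, 2.0)])
```

Because the boundary lives in this low-dimensional coefficient space,
prior probabilities learned from a sample of objects and the boundary-
finding optimization both operate on a small parameter vector rather
than on pixels.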


Paul Viola, Salk Institute, "Recognition by Complex Features"



