conference announcement

kenm@sunrae.sscl.uwo.ca
Wed Jan 11 05:08:44 EST 1995


******************************************************************************
   

                               L.O.V.E. 1995

                  24th Conference on Perception & Cognition


February 9, 1:00 pm to February 10, 5:00 pm
The Skyline Brock Hotel
Niagara Falls, Ontario, Canada


******************************************************************************

We have a great lineup of speakers this year.


Thursday, February 9

Margaret M. Shiffrar
  Rutgers University
  The interpretation of object motion

Mark S. Seidenberg
  University of Southern California
  title t.b.a.



Friday, February 10

Dana H. Ballard
  University of Rochester
  Computational hierarchies for natural behaviors

Lee R. Brooks   (& Glenn Regehr)
  McMaster University
  Perceptual resemblance and effort after commonality in category formation

Daniel Kersten
  University of Minnesota
  Shedding light on the objects of perception

Michael K. Tanenhaus
  University of Rochester
  Using eye movements to study spoken language comprehension in visual contexts

******************************************************************************

To reserve a hotel room, call Skyline Hotels at 1-800-263-7135 or 905-374-4444.
**Please be sure to mention the L.O.V.E. conference in order to get the
  special room rates.


The L.O.V.E. room rates are great again this year:

$47  single or double
$57  triple
$67  quadruple


Registration is again wildly cheap this year and includes the "L.O.V.E. affair":

students and post docs:   $15 Canadian or $12 US
faculty:                  $25 Canadian or $20 US


Be sure to make L.O.V.E. in '95!!!


If you wish to be added to the L.O.V.E. email list, please contact 
kenm at sunrae.sscl.uwo.ca

******************************************************************************

Abstracts appear below.

************************************************************************

The Interpretation of Object Motion

Margaret M. Shiffrar
Rutgers University

To interpret the projected image of a moving object, the visual system 
must integrate motion signals across different image regions.  
Traditionally, researchers have examined this process by focusing on 
the integration of equally ambiguous motion signals.  However, 
moving objects often yield motion measurements having differing 
degrees of ambiguity.  In a series of experiments, I examine how the 
visual system interprets the motion of simple objects.  I argue that the 
visual system normally uses unambiguous motion signals to interpret 
object motion.

************************************************************************

Mark S. Seidenberg
University of Southern California

title t.b.a.

************************************************************************

Computational Hierarchies for Natural Behaviors

Dana Ballard
University of Rochester

We argue that a computational theory of the brain will have to address
the issue of computational hierarchies, wherein the brain can be seen
as using different instruction sets at different spatio-temporal
scales. As examples, we describe two such abstraction levels.

At the most abstract level, a language is needed to address the way the
brain directs the physical resources of its body. An example of these
kinds of instructions would be one used to direct saccadic
eye-movements. Interpreting experimental data from this perspective
implies that subjects use eye-movements in a special strategy to avoid
loading short-term memory. This constraint has implications for the
organization of high-level behavior.

At a lower level of abstraction we consider a model of instructions
which capture the details of directing the eye-movements themselves.
This model makes extensive use of feedback. The implication is that
brain circuitry may have to be used in very different ways than
traditionally proposed.

************************************************************************

Perceptual Resemblance and Effort After Commonality in Category
Formation

	    Lee Brooks               Glenn Regehr
        McMaster University      The Toronto Hospital

The study of category formation and application has been strongly
influenced by the information processing tradition.  The influence of
this tradition includes describing stimuli solely as informational
contrasts (the ubiquitous tables of 1s and 0s), as well as the practice
of producing new items by recombining identical elements.  Even when
experiments use natural stimuli, designs and subsequent models are set
up as if informational contrasts are the only important aspects of the
stimuli.  We will argue that at least two types of changes from this
tradition are necessary to capture important types of natural
behavior.

*Enhanced perceptual resemblance*: Habits of stimulus representation
and experimental design derived from the information processing
tradition have limited the effect of similarity between pairs of items
and virtually eliminated an effect of overall similarity among several
items.  In particular, the informational interpretation of "family
resemblance" does not produce categorization based on category-wide
similarity, as is often alleged.  A better treatment of similarity is
important because similarity-based effects are obvious even when people
have good theories about the stimuli, as in medical diagnosis.

*Multiple descriptions of the stimuli*: Informational descriptions of
the stimuli effectively capture analytic behavior, but do not capture
similarity-based behavior equally well.  We will argue that stimuli
have to be characterized differently to account for their effects on
similarity-based processing than for their effects on analytic
processing.  Having these different descriptions for the same stimuli
is important since both types of processes occur concurrently in many
natural categorization situations.

************************************************************************

Shedding light on the objects of perception

Daniel Kersten
University of Minnesota

One of the great challenges of perception is to understand how we see
the material, shape, and identity of an object given enormous
variability in the images of that object. Viewpoint and illumination
act together to produce the images that the eye receives. Variability
over viewpoint has received recent attention in studies of object
recognition and shape.  Illumination effects of attached and cast
shadows have received somewhat less attention for the following reason.
Casual inspection shows that one view of an object can appear rather
different from another view of that same object. However, the image of
an object under one illumination can appear quite similar to an image
of the same object under different illumination, even when objectively
the images are very different. This latter observation has contributed
to the assumption that human perception discounts effects of varying
illumination, in particular those due to cast shadows. But do the
effects of illumination get filtered out?

I will use 3D computer graphics to show examples of how human vision
uses illumination information to resolve perceptual ambiguities. In
particular, I will show how cast shadows can determine relative depth
of objects, the orientation of surfaces, object rigidity, and the
identity of contour types. These demonstrations are examples of the
kind of perceptual puzzles which the visual brain solves continually in
everyday vision.  The solution of these perceptual puzzles is an
example of generalized Bayesian inference--the logical and plausible
reconciliation of image data with prior constraints.

For an object recognition task, the visual system might be expected to
filter out effects of illumination (e.g. attached and cast shadows).
Here vision can behave in a way inconsistent with a strong Bayesian
view--there is a cost in response time and sensitivity to recognizing
an object under left illumination that has been learned under right
illumination. These results are consistent with exemplar-based theories
of recognition.

************************************************************************

Using eye-movements to study spoken language comprehension in visual
contexts.

Michael K. Tanenhaus
University of Rochester

We have been using a head-mounted eye-tracking system to monitor
eye-movements while subjects follow spoken instructions to manipulate
real objects.  In this paradigm, eye-movements to the objects in the
visual world are closely time-locked to referential expressions in the
instructions, providing a natural on-line measure of spoken language
comprehension in visual contexts.

After discussing the rationale for this line of research in terms of
current developments in language comprehension research, I'll present
results from experiments conducted with Michael Spivey-Knowlton, Julie
Sedivy and Kathy Eberhard.  In the first experiment, eye-movements to a
target object (e.g., Pick up the candle) begin several hundred ms after
the beginning of the word, suggesting that reference is established as
the word is being processed.  Eye-movements are delayed by about 100 ms
when there is a "competitor" object with a similar name as the target
(e.g., a piece of candy).  In the second experiment, the point in a
phrase where reference is established is time-locked to when the
referring expression becomes unambiguous with respect to the set of
visual alternatives (e.g., Touch the starred red square; Put the five
of hearts that is below the eight of clubs above the three of
diamonds.). The third experiment shows that visual contexts affect the
interpretation of temporarily ambiguous instructions such as "Put the
spoon in the bowl on the plate".  Finally, we show that contrastive
focus (e.g., Touch the LARGE red square) directs attention to both the
referent and the contrast member.  Taken together, our results
demonstrate the potential of the methodology, especially for exploring
issues of interpretation and questions about spoken language
comprehension.  They also highlight the incremental and referential
nature of comprehension.  In addition, they provide a somewhat
different perspective on results that have been central to discussions
about the modularity of language processing.

************************************************************************
