Workshop on Self-Organization and Unsupervised Learning in Vision

Jonathan Marshall marshall at cs.unc.edu
Fri Nov 29 20:46:55 EST 1991


			    (Please post)


	    PROGRAM:  NIPS*91 Post-Conference Workshop on
				   
	SELF-ORGANIZATION AND UNSUPERVISED LEARNING IN VISION
				   
		 December 6-7, 1991 in Vail, Colorado
				   
		 Workshop Chair: Jonathan A. Marshall
				   
       Department of Computer Science, CB 3175, Sitterson Hall
   University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.   
		  919-962-1887, marshall at cs.unc.edu

Substantial neurophysiological and psychophysical evidence suggests
that visual experience guides or directs the formation of much of the
fine structure of animal visual systems.  Simple unsupervised learning
procedures (e.g., Hebbian rules) using winner-take-all or local
k-winner networks have been applied with moderate success to show how
visual experience can guide the self-organization of visual mechanisms
sensitive to low-level attributes like orientation, contrast, color,
stereo disparity, and motion.  However, such simple networks lack the
more sophisticated capabilities needed to demonstrate self-organized
development of higher-level visual mechanisms for segmentation,
grouping/binding, selective attention, representation of occluded or
amodal visual features, resolution of uncertainty, generalization,
context-sensitivity, and invariant object recognition.
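
As a rough, purely illustrative sketch of the kind of procedure referred
to above (not any speaker's model), the following Python fragment trains
a small winner-take-all network with a normalized Hebbian rule on toy
oriented-bar stimuli; the network size, stimuli, and learning rate are
all arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    n_inputs, n_units = 64, 8            # 8x8 input patch, 8 competing units
    W = rng.random((n_units, n_inputs))
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    def oriented_patch():
        # Toy stimulus: an 8x8 patch containing one horizontal or vertical bar.
        patch = np.zeros((8, 8))
        if rng.random() < 0.5:
            patch[rng.integers(8), :] = 1.0    # horizontal bar
        else:
            patch[:, rng.integers(8)] = 1.0    # vertical bar
        return patch.ravel()

    eta = 0.05
    for _ in range(5000):
        x = oriented_patch()
        y = W @ x
        winner = int(np.argmax(y))             # winner-take-all competition
        # Hebbian update for the winning unit only, then renormalize so its
        # weight vector stays bounded; the winner drifts toward the stimuli
        # it responds to, becoming orientation-selective.
        W[winner] += eta * y[winner] * x
        W[winner] /= np.linalg.norm(W[winner])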

A variety of enhancements to the simple Hebbian model have been
proposed.  These include anti-Hebbian rules, maximization of mutual
information, oscillatory interactions, intraneuronal interactions,
steerable receptive fields, pre- vs. post-synaptic learning rules,
covariance rules, addition of behavioral (motor) information, and
attentional gating.  Are these extensions to unsupervised learning
sufficiently powerful to model the important aspects of
neurophysiological development of higher-level visual functions?
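
Purely as an illustration of one such enhancement (a sketch under
arbitrary assumptions, not a result presented at the workshop), the
fragment below combines a Hebbian feedforward rule with anti-Hebbian
lateral connections that decorrelate unit responses; the input
distribution, settling scheme, and learning rates are assumptions made
only for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    n_inputs, n_units = 16, 4
    W = rng.normal(scale=0.5, size=(n_units, n_inputs))   # feedforward, Hebbian
    V = np.zeros((n_units, n_units))                       # lateral, anti-Hebbian

    eta_w, eta_v = 0.01, 0.02
    for _ in range(3000):
        x = rng.random(n_inputs)
        # Let activity settle under lateral inhibition before learning.
        y = np.tanh(W @ x)
        for _ in range(10):
            y = np.tanh(W @ x + V @ y)
        # Hebbian feedforward update, renormalized so weights stay bounded.
        W += eta_w * np.outer(y, x)
        W /= np.linalg.norm(W, axis=1, keepdims=True)
        # Anti-Hebbian lateral update: units that are active together come
        # to inhibit one another, decorrelating their responses over time.
        V -= eta_v * np.outer(y, y)
        np.fill_diagonal(V, 0.0)
        V = np.minimum(V, 0.0)                  # lateral weights stay inhibitory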

Some of the specific questions that the workshop will address are:

  o Does our visual environment provide enough information to direct
    the formation of higher-level visual processing mechanisms?

  o What kinds of information (e.g., correlations, constraints,
    coherence, and affordances) can be discovered in our visual world,
    using unsupervised learning?

  o Can such higher-level visual processing mechanisms be formed by
    unsupervised learning?  Or is it necessary to appeal to external
    mechanisms such as evolution (genetic algorithms)?

  o Are there further enhancements that can be made to improve the
    performance and capabilities of unsupervised learning rules for
    vision?

  o What neurophysiological evidence is available regarding these
    possible enhancements to models of unsupervised learning?

  o What aspects of the development of visual systems must be
    genetically pre-wired, and what aspects can be guided or directed
    by visual experience?

  o How is the output of an unsupervised network stage used in
    subsequent stages of processing?

  o How can behaviorally relevant (sensorimotor) criteria become
    incorporated into visual processing mechanisms, using unsupervised
    learning?

This 2-day informal workshop brings together researchers in visual
neuroscience, visual psychophysics, and neural network modeling.
Invited speakers from these communities will briefly discuss their
views and results on relevant topics.  In discussion periods, we will
examine and compare these results in detail.

The workshop topic is central to understanding how animal visual
systems came to be organized as they are.  By addressing this issue
head-on, we may better understand the factors that shape the structure
of animal visual systems, and we may be able to build better
computational models of the neurophysiological processes underlying
vision.


----------------------------------------------------------------------


			     PROGRAM


	   FRIDAY MORNING, December 6, 7:30-9:30 a.m.

Daniel Kersten, Department of Psychology, University of Minnesota.
  "Environmental structure and scene perception: Perceptual
   representation of material, shape, and lighting"

David C. Knill, Center for Research in Learning, Perception, and
Cognition, University of Minnesota.
  "Environmental structure and scene perception: The nature of
   visual cues for 3-D scene structure"

DISCUSSION

Edward M. Callaway, Department of Neurobiology, Duke University.
  "Development of clustered intrinsic connections in cat striate
   cortex"

Michael P. Stryker, Department of Physiology, University of
California at San Francisco.
  "Problems and promise of relating theory to experiment in models
   for the development of visual cortex"

DISCUSSION


	  FRIDAY AFTERNOON, December 6, 4:30-6:30 p.m.

Joachim M. Buhmann, Lawrence Livermore National Laboratory.
  "Complexity optimized data clustering by competitive neural
   networks"

Nicol G. Schraudolph, Department of Computer Science,
University of California at San Diego.
  "The information transparency of sigmoidal nodes"

DISCUSSION

Heinrich H. Bülthoff, Department of Cognitive and Linguistic
Sciences, Brown University.
  "Psychophysical support for a 2D view interpolation theory of
   object recognition"

John E. Hummel, Department of Psychology, University of
California at Los Angeles.
  "Structural description and self organizing object
   classification"

DISCUSSION


	  SATURDAY MORNING, December 7, 7:30-9:30 a.m.

Allan Dobbins, Computer Vision and Robotics Laboratory, McGill
University.
  "Local estimation of binocular optic flow"

Alice O'Toole, School of Human Development, The University of
Texas at Dallas.
  "Recent psychophysics suggesting a reformulation of the
   computational problem of structure-from-stereopsis"

DISCUSSION

Jonathan A. Marshall, Department of Computer Science, University
of North Carolina at Chapel Hill.
  "Development of perceptual context-sensitivity in unsupervised
   neural networks: Parsing, grouping, and segmentation"

Suzanna Becker, Department of Computer Science, University of
Toronto.
  "Learning perceptual invariants in unsupervised connectionist
   networks"

Albert L. Nigrin, Department of Computer Science and Information
Systems, American University.
  "Using Presynaptic Inhibition to Allow Neural Networks to Perform   
   Translational Invariant Recognition

DISCUSSION


	 SATURDAY AFTERNOON, December 7, 4:30-7:00 p.m.

Jürgen Schmidhuber, Department of Computer Science, University
of Colorado.
  "Learning non-redundant codes by predictability minimization"

Laurence T. Maloney, Center for Neural Science, New York
University.
  "Geometric calibration of a simple visual system"

DISCUSSION

Paul Munro, Department of Information Science, University of
Pittsburgh.
  "Self-supervised learning of concepts"

Richard Zemel, Department of Computer Science, University of
Toronto.
  "Learning to encode parts of objects"

DISCUSSION


WRAP-UP, 6:30-7:00 p.m.


----------------------------------------------------------------------

			    (Please post)
