Papers on visual occlusion and neural networks

Jonathan A. Marshall marshall at cs.unc.edu
Wed Feb 2 12:41:49 EST 1994


Dear Colleagues,

Below I list two new papers that I have added to the Neuroprose archives
(thanks to Jordan Pollack!).  In addition, I list two of my older papers
in Neuroprose.  To retrieve a copy of any of these papers, follow the
instructions at the end of this message.

--Jonathan

----------------------------------------------------------------------------

marshall.occlusion.ps.Z  (5 pages)

	      A SELF-ORGANIZING NEURAL NETWORK THAT LEARNS TO
	  DETECT AND REPRESENT VISUAL DEPTH FROM OCCLUSION EVENTS
				      
		JONATHAN A. MARSHALL  and  RICHARD K. ALLEY
				      
	  Department of Computer Science, CB 3175, Sitterson Hall
      University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.
		   marshall at cs.unc.edu, alley at cs.unc.edu

Visual occlusion events constitute a major source of depth information.  We
have developed a neural network model that learns to detect and represent
depth relations, after a period of exposure to motion sequences containing
occlusion and disocclusion events.  The network's learning is governed by a
new set of learning and activation rules.  The network develops two parallel
opponent channels or "chains" of lateral excitatory connections for every
resolvable motion trajectory.  One channel, the "On" chain or "visible"
chain, is activated when a moving stimulus is visible.  The other channel,
the "Off" chain or "invisible" chain, is activated when a formerly visible
stimulus becomes invisible due to occlusion.  The On chain carries a
predictive modal representation of the visible stimulus.  The Off chain
carries a persistent, amodal representation that predicts the motion of the
invisible stimulus.  The new learning rule uses disinhibitory signals
emitted from the On chain to trigger learning in the Off chain.  The Off
chain neurons learn to interact reciprocally with other neurons that
indicate the presence of occluders.  The interactions let the network
predict the disappearance and reappearance of stimuli moving behind
occluders, and they let the unexpected disappearance or appearance of
stimuli excite the representation of an inferred occluder at that location.
Two results that have emerged from this research suggest how visual systems
may learn to represent visual depth information.  First, a visual system can
learn a nonmetric representation of the depth relations arising from
occlusion events.  Second, parallel opponent On and Off channels that
represent both modal and amodal stimuli can also be learned through the same
process.

[In Bowyer KW & Hall L (Eds.), Proceedings of the AAAI Fall Symposium on
 Machine Learning and Computer Vision, Research Triangle Park, NC, October
 1993, 70-74.]
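
For readers who want a concrete feel for the On/Off chain idea, here is a
rough Python sketch of the two chains' behavior along a single left-to-right
trajectory.  It is illustrative only: the chain connectivity that the model
learns is simply hand-wired here, the constants and update rules are
arbitrary choices, and the learning rules described in the paper are not
reproduced.

    # Illustrative sketch only: hand-wired On/Off chains for one trajectory.
    import numpy as np

    N = 10                                 # positions along the trajectory
    occluder = np.zeros(N)
    occluder[4:7] = 1.0                    # an occluder covers positions 4-6
    on  = np.zeros(N)                      # "On"/visible (modal) chain
    off = np.zeros(N)                      # "Off"/invisible (amodal) chain

    def step(visible):
        """One time step: each chain passes its activation one position
        forward; the On chain is driven by visible input, while the Off
        chain takes over wherever the prediction lands on an occluded
        position, keeping the hidden stimulus represented."""
        global on, off
        pred = np.roll(np.clip(on + off, 0.0, 1.0), 1)
        pred[0] = 0.0                      # nothing enters from off-screen
        on  = visible.copy()               # modal: only where stimulus is seen
        off = pred * occluder              # amodal: predicted but occluded

    # The stimulus moves one position per step and is invisible while it
    # is behind the occluder.
    for t in range(N):
        visible = np.zeros(N)
        if occluder[t] == 0.0:
            visible[t] = 1.0
        step(visible)
        print(f"t={t}  On: {on.astype(int)}  Off: {off.astype(int)}")

Running the sketch prints the On chain tracking the stimulus while it is
visible, the Off chain carrying the representation forward across positions
4-6 while the stimulus is hidden, and the On chain resuming at position 7.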

----------------------------------------------------------------------------

marshall.context.ps.Z  (46 pages)

		  ADAPTIVE PERCEPTUAL PATTERN RECOGNITION
		    BY SELF-ORGANIZING NEURAL NETWORKS:
	       CONTEXT, UNCERTAINTY, MULTIPLICITY, AND SCALE
				      
			    JONATHAN A. MARSHALL

	  Department of Computer Science, CB 3175, Sitterson Hall
      University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.
			    marshall at cs.unc.edu

A new context-sensitive neural network, called an "EXIN" (excitatory+
inhibitory) network, is described.  EXIN networks self-organize in complex
perceptual environments, in the presence of multiple superimposed patterns,
multiple scales, and uncertainty.  The networks use a new inhibitory
learning rule, in addition to an excitatory learning rule, to allow
superposition of multiple simultaneous neural activations (multiple
winners), under strictly regulated circumstances, instead of forcing
winner-take-all pattern classifications.  The multiple activations represent
uncertainty or multiplicity in perception and pattern recognition.
Perceptual scission (breaking of linkages) between independent category
groupings thus arises and allows effective global context-sensitive
segmentation and constraint satisfaction.  A Weber Law neuron-growth rule
lets the network learn and classify input patterns despite variations in
their spatial scale.  Applications of the new techniques include
segmentation of superimposed auditory or biosonar signals, segmentation of
visual regions, and representation of visual transparency.

[Submitted for publication.]
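
As a rough illustration of why learned inhibition can permit multiple
winners, here is a small Python sketch.  It does not use the paper's EXIN
equations: the excitatory update is a generic instar-style stand-in, the
inhibitory update simply tracks coactivation, and the weights are
initialized so that the two output units already prefer the two training
patterns, just to keep the demonstration short.

    # Illustrative sketch only: excitatory + inhibitory learning, 2 units.
    import numpy as np

    A = np.array([1., 1., 1., 0., 0., 0.])        # two "simple" patterns
    B = np.array([0., 0., 0., 1., 1., 1.])

    W = np.array([[.9, .9, .9, 0., 0., 0.],       # excitatory weights:
                  [0., 0., 0., .8, .8, .8]])      # unit 0 ~ A, unit 1 ~ B
    Q = np.array([[0., 3.],                       # lateral inhibitory weights
                  [3., 0.]])                      # (initially strong)

    def settle(x, steps=40, dt=0.2):
        """Settle activations under feedforward excitation and learned
        lateral inhibition; more than one unit may remain active."""
        y = np.zeros(2)
        for _ in range(steps):
            net = W @ x - Q @ y
            y = np.clip(y + dt * (net - y), 0.0, 1.0)
        return y

    def learn(x, y, lr_exc=0.1, lr_inh=0.1):
        """Active units pull their excitatory weights toward the input
        (instar-style); inhibition between units tracks their coactivation,
        so rarely-coactive units gradually stop suppressing each other."""
        global W, Q
        W += lr_exc * y[:, None] * (x[None, :] - W)
        Q += lr_inh * y[:, None] * (y[None, :] - Q)
        np.fill_diagonal(Q, 0.0)

    print("A+B before training:", settle(A + B).round(2))   # one winner
    for _ in range(30):                                      # A and B never
        for x in (A, B):                                     # appear together
            learn(x, settle(x))
    print("A+B after training: ", settle(A + B).round(2))   # two winners

Because the two patterns never appear together during training, the mutual
inhibition between the two units decays, and the superimposed pattern A+B
then activates both units instead of forcing a single winner.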

----------------------------------------------------------------------------

marshall.steering.ps.Z  (16 pages)

    CHALLENGES OF VISION THEORY:  SELF-ORGANIZATION OF NEURAL MECHANISMS
  FOR STABLE STEERING OF OBJECT-GROUPING DATA IN VISUAL MOTION PERCEPTION

			    JONATHAN A. MARSHALL

[Invited paper, in Chen S-S (Ed.), Stochastic and Neural Methods in Signal
 Processing, Image Processing, and Computer Vision, Proceedings of the SPIE
 1569, San Diego, July 1991, 200-215.]

----------------------------------------------------------------------------

martin.unsmearing.ps.Z  (8 pages)

			 UNSMEARING VISUAL MOTION:
	 DEVELOPMENT OF LONG-RANGE HORIZONTAL INTRINSIC CONNECTIONS

		 KEVIN E. MARTIN  and  JONATHAN A. MARSHALL

[In Hanson SJ, Cowan JD, & Giles CL, Eds., Advances in Neural Information
 Processing Systems, 5.  San Mateo, CA: Morgan Kaufmann Publishers, 1993,
 417-424.]

----------------------------------------------------------------------------



RETRIEVAL INSTRUCTIONS

    % ftp archive.cis.ohio-state.edu
    Name (cheops.cis.ohio-state.edu:yourname): anonymous
    Password: (use your email address)
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get marshall.occlusion.ps.Z
    ftp> get marshall.context.ps.Z
    ftp> get marshall.steering.ps.Z
    ftp> get martin.unsmearing.ps.Z
    ftp> quit
    % uncompress marshall.occlusion.ps.Z ; lpr marshall.occlusion.ps
    % uncompress marshall.context.ps.Z ;   lpr marshall.context.ps
    % uncompress marshall.steering.ps.Z ;  lpr marshall.steering.ps
    % uncompress martin.unsmearing.ps.Z ;  lpr martin.unsmearing.ps
