TR: Bayesian Inference on Visual Grammars by NNs that Optimize

Eric Mjolsness mjolsness-eric at CS.YALE.EDU
Wed Jun 5 15:50:55 EDT 1991


The following paper is available in the neuroprose archive as
mjolsness.grammar.ps.Z:


	    Bayesian Inference on Visual Grammars
	        by Neural Nets that Optimize


			Eric Mjolsness
        	Department of Computer Science
        		Yale University
        	   New Haven, CT 06520-2158

			YALEU/DCS/TR854
			    May 1991

Abstract:

We exhibit a systematic way to derive neural nets for vision
problems.  It involves formulating a vision problem as Bayesian
inference or decision on a comprehensive model of the visual domain
given by a probabilistic {\it grammar}.  A key feature of this
grammar is the way in which it eliminates model information, such
as object labels, as it produces an image; correspondence problems
and other noise-removal tasks result.  The neural nets that arise
most directly are generalized assignment networks.  Also there are
transformations which naturally yield improved algorithms such as
correlation matching in scale space and the Frameville neural nets
for high-level vision.  Deterministic annealing provides an effective
optimization dynamics.  The grammatical method of neural net design
allows domain knowledge to enter from all levels of the grammar,
including ``abstract'' levels remote from the final image data, and
may permit new kinds of learning as well.
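
For readers who want a concrete feel for the assignment-network /
deterministic-annealing combination mentioned above before fetching the
report, here is a minimal NumPy sketch of a softassign-style relaxation
on a toy correspondence problem.  It illustrates only the general recipe
(anneal a doubly stochastic match matrix toward a hard permutation), not
the paper's own derivation; the function name, annealing schedule, and
parameters are illustrative assumptions.

import numpy as np

def softassign(cost, betas=(0.5, 1.0, 2.0, 5.0, 10.0, 20.0), n_sinkhorn=50):
    """Anneal a soft assignment: M[i, j] ~ exp(-beta * cost[i, j]),
    renormalized to be doubly stochastic by Sinkhorn iterations.
    As beta (inverse temperature) grows, M approaches a 0/1 permutation.
    With a fixed cost only the final beta matters; in the full networks
    the cost itself would be re-estimated between temperature steps."""
    for beta in betas:                       # deterministic annealing schedule
        M = np.exp(-beta * cost)
        for _ in range(n_sinkhorn):          # alternate row/column normalization
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
    return M

# Toy correspondence problem: recover a permutation of noisy 2-D points.
rng = np.random.default_rng(0)
model = rng.normal(size=(5, 2))
perm = rng.permutation(5)
data = model[perm] + 0.01 * rng.normal(size=(5, 2))
cost = ((model[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1)
M = softassign(cost)
print("recovered match:", M.argmax(axis=1))  # should equal the inverse permutation
print("inverse of perm:", np.argsort(perm))
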


The paper is 56 pages long.

To get the file from neuroprose:

              unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62)
              Name: anonymous
              Password: neuron
              ftp> cd pub/neuroprose
              ftp> binary
              ftp> get mjolsness.grammar.ps.Z
              ftp> quit
              unix> uncompress mjolsness.grammar.ps.Z
              unix> lpr mjolsness.grammar.ps (or however you print PostScript)

-Eric

-------


