Shift Invariance

Lee Giles giles at research.nj.nec.com
Mon Feb 26 13:01:25 EST 1996


We and others [1, 2, 3, 4] showed that invariances (more precisely,
affine transformations) can be encoded directly into feedforward
higher-order (sometimes called polynomial, sigma-pi, or gated) neural
networks, so that these networks are invariant to shift, scale, and
rotation of individual patterns. As mentioned previously, similar
invariant encodings can be obtained for associative memories in
autonomous recurrent networks.  Interestingly, this idea of encoding
geometric invariances into neural networks is an old one [5].
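To make the construction concrete, here is a minimal sketch (my own
illustration, not code from any of the cited papers) of how shift
invariance can be built into a second-order network: the weight on each
product term x_i * x_j is constrained to depend only on the relative
offset (j - i) mod N, so a circular shift of the input pattern leaves
the output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# One weight per relative offset: w[(j - i) mod N].  This weight
# sharing is what encodes the shift invariance into the architecture.
w = rng.normal(size=N)

def second_order_output(x):
    """y = sum over i, j of w[(j - i) mod N] * x[i] * x[j]."""
    total = 0.0
    for i in range(N):
        for j in range(N):
            total += w[(j - i) % N] * x[i] * x[j]
    return total

x = rng.normal(size=N)
y0 = second_order_output(x)
y1 = second_order_output(np.roll(x, 3))  # circularly shifted pattern
print(abs(y0 - y1) < 1e-9)  # output is unchanged under the shift
```

Shifting the pattern only relabels the index pairs (i, j); since the
weight attached to each pair depends only on their offset, every product
term keeps the same weight and the sum is unchanged.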

[1] C.L. Giles, T. Maxwell, ``Learning, Invariance, and Generalization in
High-Order Neural Networks'', Applied Optics, 26(23), p. 4972, 1987.
Reprinted in ``Artificial Neural Networks: Concepts and Theory,'' eds. P.
Mehra and B. W. Wah, IEEE Computer Society Press, Los Alamitos, CA. 1992.

[2] C.L. Giles, R.D. Griffin, T. Maxwell, ``Encoding Geometric Invariances
in Higher-Order Neural Networks'', Neural Information Processing Systems,
ed. D.Z. Anderson, Am. Inst. of Physics, N.Y., N.Y., pp. 301-309, 1988.
 
[3] S.J. Perantonis, P.J.G. Lisboa, ``Translation, Rotation, and Scale
Invariant Pattern Recognition by Higher-Order Neural Networks and Moment
Classifiers'', IEEE Transactions on Neural Networks, 3(2), p. 241, 1992.
 
[4] L. Spirkovska, M.B. Reid, ``Higher-Order Neural Networks Applied to 2D
and 3D Object Recognition'', Machine Learning, 15(2), pp. 169-200, 1994.

[5] W. Pitts, W.S. McCulloch, ``How We Know Universals: The Perception of
Auditory and Visual Forms'', Bulletin of Mathematical Biophysics, vol 9, p.
127, 1947.

Bibtex entries for the above can be found in:
ftp://external.nj.nec.com/pub/giles/papers/high-order.bib

--                                 
C. Lee Giles / Computer Sciences / NEC Research Institute / 
4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482
www.neci.nj.nec.com/homepages/giles.html