Exploiting duality to analyze ANNs

Pankaj Mehra p-mehra at uiuc.edu
Wed Sep 11 16:57:53 EDT 1991


I had been waiting a long time to see a paper that would relate the
geometry of function spaces to the statistical theory of
approximation and estimation. I finally found what I was looking for
in a recent paper in the journal Neural Networks (vol. 4, pp. 443-451,
1991) titled ``Dualistic Geometry of the Manifold of Higher-Order
Neurons,'' by Amari.

I thought I would begin the search for additional references by
sharing a few pointers:

1. ``Applied Regression Analysis,'' (2nd ed.) by Draper and Smith,
   p. 491, Chapter 10, An Introduction to Nonlinear Estimation.

   The idea of sample spaces is introduced, and the concepts of
   approximation and estimation errors are explained in geometric terms.

2. ``Principled Constructive Induction,'' by [yours truly], Rendell, & Wah,
   Proc. IJCAI-89. (Extended abstract in Machine Learning Workshop, 1989.)

   Abstract ideas of Satosi Watanabe on object-predicate duality are
   given a concrete interpretation for learning systems. This paper
   introduces inverted spaces similar to sample spaces (somewhat customized
   for 2-class discrimination). A preliminary result relating the geometry
   and statistics of feature construction is proved.

3. ``Generalizing the PAC Model ...,'' by Haussler, in FOCS'89.

   The concept of combinatorial dimension, which measures the ability
   of a function class to cover the combinatorially many orthants of
   the sample space, is used for extending PAC learning ideas to analysis
   of continuous maps.
   [well, this is the way I interpret it]

Amari's work presents (in my opinion) an elegant treatment of
approximation theory. His proofs are limited to HONNs (higher-order
neural networks) transforming bipolar (-1,+1) inputs, but he mentions
technical reports describing extensions to Boltzmann machines. (Can
someone at Univ. of Tokyo send me a copy?) (Also, can someone help me
understand how eqn 3.2 of Amari's paper follows from eqn 3.1?)
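For readers who haven't seen higher-order neurons before, here is a
minimal sketch in my own illustrative parameterization (not Amari's
notation; the function and weight-indexing scheme below are my own
invention for exposition). The key fact for bipolar inputs is that
x_i^2 = 1, so every monomial reduces to a product over a *distinct*
subset of input indices, and the neuron's pre-activation is just a
weighted sum over those subset products:

```python
from itertools import combinations

def honn_preactivation(x, weights):
    """Pre-activation of a higher-order neuron on bipolar inputs.

    x       : list of bipolar inputs, each +1 or -1
    weights : dict mapping a tuple of distinct input indices
              (e.g. (), (0,), (0, 2)) to its real coefficient;
              missing tuples are treated as weight 0.

    Because x_i^2 = 1 for bipolar x_i, subsets of distinct indices
    exhaust all monomials, so this sum spans every polynomial
    function of the inputs.
    """
    n = len(x)
    total = 0.0
    for order in range(n + 1):              # 0th-order (bias) up to nth-order terms
        for idx in combinations(range(n), order):
            w = weights.get(idx, 0.0)
            if w == 0.0:
                continue
            prod = 1
            for i in idx:                   # product of the chosen inputs
                prod *= x[i]
            total += w * prod
    return total
```

For example, with weights {(): 1.0, (0,): 2.0, (0, 1): -1.0} and input
[+1, -1], the sum is 1.0 + 2.0*(+1) + (-1.0)*(+1)*(-1) = 4.0. There are
2^n such subset weights, which is the "combinatorially many" flavor that
also shows up in Haussler's dimension arguments.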

I'd like to hear about other approaches that exploit duality between
feature space and function space to characterize the behavior of neural
networks.

-Pankaj Mehra
Univ. Illinois


