Compressibility and Generalization

Lev Goldfarb goldfarb at unb.ca
Wed Dec 6 15:54:00 EST 1995


On Tue, 5 Dec 1995, Juergen Schmidhuber wrote:

> ``compressibility of the history of a universe''. 
> 
> There are a few compressible or ``regular'' universes, 
> however. To use ML terminology, some of them allow for 
> ``generalization by analogy''. Some of them allow for 
> ``generalization by chunking''. Some of them allow for
> ``generalization by exploiting invariants''. Etc. It
> would be nice to have a method that can generalize well
> in *arbitrary* regular universes.
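
To make the "compressibility" idea concrete, here is a toy sketch (mine,
not from Juergen's post) that uses an ordinary compressor as a crude,
computable stand-in for the uncomputable algorithmic measure:

    import zlib, random

    def compression_ratio(history: bytes) -> float:
        # Ratios well below 1.0 indicate a compressible ("regular")
        # history; ratios near 1.0 indicate an irregular one.
        return len(zlib.compress(history)) / len(history)

    regular = b"ab" * 5000                    # a highly regular history
    random.seed(0)
    irregular = bytes(random.randrange(256) for _ in range(10000))

    print(compression_ratio(regular))     # far below 1.0
    print(compression_ratio(irregular))   # at or slightly above 1.0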

For a proposal on how to formally capture the concept of an "arbitrary
regular universe" for the purposes of inductive learning (and
generalization), i.e. the concept of a "combinative" representation in a
universe, see the two references below, as well as the two original
papers published in Pattern Recognition (mentioned in each of the two
references). The structure of objects in such a universe was discussed
on the INDUCTIVE list.

It appears that the concept of a "symbolic" representation has to be
formalized first (via the concept of a transformation system), and that
the fundamentally new concept of *inductive class structure*, not
present in other ML models, becomes of critical importance. The issue of
dynamic object representation, so conspicuously (and not surprisingly)
absent from the ongoing (classical) "statistical" discussion of
inductive learning, is also brought to the fore.
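
To indicate what a transformation system looks like in the simplest
(string) setting, here is a minimal sketch (mine; the operation set,
weights, and names are illustrative, not taken from the papers below):

    def transformation_distance(a, b, w_ins=1.0, w_del=1.0, w_sub=1.0):
        # Weighted edit distance: the cost of the cheapest sequence of
        # insertion, deletion and substitution operations turning
        # string a into string b, computed by dynamic programming.
        m, n = len(a), len(b)
        d = [[0.0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            d[i][0] = i * w_del
        for j in range(1, n + 1):
            d[0][j] = j * w_ins
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = 0.0 if a[i - 1] == b[j - 1] else w_sub
                d[i][j] = min(d[i - 1][j] + w_del,
                              d[i][j - 1] + w_ins,
                              d[i - 1][j - 1] + sub)
        return d[m][n]

    # Lowering an operation's weight makes it a "cheap" transformation,
    # pulling objects related by it into a tighter class; inductive
    # learning can then be viewed as the search for weights under which
    # the training class becomes compact.
    print(transformation_distance("abab", "abcb"))             # 1.0
    print(transformation_distance("abab", "abcb", w_sub=0.1))  # 0.1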

1. L. Goldfarb and S. Nigam, The unified learning paradigm: A foundation
   for AI, in V. Honavar and L. Uhr, eds., Artificial Intelligence and
   Neural Networks: Steps toward Principled Integration, Academic Press,
   1994.
2. L. Goldfarb, J. Abela, V. C. Bhavsar and V. N. Kamat, Can a vector
   space based learning model discover inductive class generalization in
   a symbolic environment? Pattern Recognition Letters 16, 719-726, 1995.




-- Lev Goldfarb 

