Summary (long): pattern recognition comparisons

Nici Schraudolph schraudo%cs at ucsd.edu
Sun Aug 5 05:54:43 EDT 1990


> From honavar at cs.wisc.edu Sat Aug  4 17:45:01 1990
> 
> While I know of theoretical results that show that a feedforward
> neural net exists that can adequately encode any arbitrary
> real-valued function (Hornik, Stinchcombe, & White, 1988;
> Cybenko, 1988; Carroll & Dickinson, 1989), I am not aware of
> any results that suggest that such nets can LEARN any real-valued
> function using backpropagation (ignoring the issue of 
> computational tractability). 
> 
It is my understanding that recent work by Hal White et al. presents a
learning algorithm - backprop plus a rule for incrementally adding hidden
units - that can provably learn (in the limit) any function of interest.
(Disclaimer: I don't have the mathematical proficiency required to fully
appreciate White et al.'s proofs, so I have to rely on second-hand
interpretations.)
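
To make the flavor of such a constructive procedure concrete, here is a
minimal sketch - my own illustration, not White et al.'s actual algorithm.
It trains a one-hidden-layer net by backprop on squared error and appends
a hidden unit after each training stage until the fit is good enough; all
function names and parameter values are made up for this example.

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_growing_net(x, y, max_hidden=20, epochs=3000, lr=0.1,
                        target_mse=1e-3):
        # Start with one tanh hidden unit; weights drawn at random.
        W = rng.normal(size=(1, 1));  b = np.zeros((1, 1))   # hidden layer
        v = rng.normal(size=(1, 1));  c = 0.0                # linear output
        for n_hidden in range(1, max_hidden + 1):
            for _ in range(epochs):
                h = np.tanh(W @ x.T + b)          # (n_hidden, n_samples)
                out = (v @ h + c).ravel()
                err = out - y
                # Backprop for mean squared error.
                dv = err[None, :] @ h.T / len(y)
                dc = err.mean()
                dh = (v.T @ err[None, :]) * (1.0 - h**2)
                dW = dh @ x / len(y)
                db = dh.mean(axis=1, keepdims=True)
                v -= lr * dv;  c -= lr * dc
                W -= lr * dW;  b -= lr * db
            mse = np.mean(err**2)
            if mse < target_mse:                  # good enough: stop growing
                break
            # Growth rule: append one hidden unit with a small output weight.
            W = np.vstack([W, rng.normal(size=(1, 1))])
            b = np.vstack([b, np.zeros((1, 1))])
            v = np.hstack([v, 0.1 * rng.normal(size=(1, 1))])
        return W, b, v, c

    # Toy usage: learn sin(3x) on [-1, 1].
    x = np.linspace(-1.0, 1.0, 64)[:, None]
    y = np.sin(3.0 * x).ravel()
    fit_growing_net(x, y)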

> On a different note, how does one go about assessing the 
> "generality" of a learning algorithm/architecture in practice?
> I would like to see a discussion on this issue.
> 
I second this motion.  As a starting point for discussion, would the
Kolmogorov complexity of an architectural description be useful as a
measure of architectural bias?
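
Kolmogorov complexity itself is uncomputable, but compressed description
length gives a crude, computable upper bound. A toy sketch of that proxy
(the description strings and helper name below are invented for
illustration only):

    import zlib

    def description_complexity(description: str) -> int:
        # Length in bytes of the compressed description: a rough upper
        # bound on its Kolmogorov complexity, up to an additive constant.
        return len(zlib.compress(description.encode("ascii")))

    # A regular architecture compresses well; an irregular one does not.
    regular   = "layer(100, tanh); " * 4
    irregular = "layer(73, tanh); layer(19, relu); layer(41, sigmoid);"
    print(description_complexity(regular), description_complexity(irregular))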
--
Nici Schraudolph, C-014                nschraudolph at ucsd.edu
University of California, San Diego    nschraudolph at ucsd.bitnet
La Jolla, CA 92093                     ...!ucsd!nschraudolph

