Summary (long): pattern recognition comparisons

Vasant Honavar honavar at cs.wisc.edu
Sun Aug 5 15:48:37 EDT 1990


>It is my understanding that some of the latest work of Hal White et al.
>presents a learning algorithm - backprop plus a rule for adding hidden
>units - that can (in the limit) provably learn any function of interest.
>(Disclaimer: I don't have the mathematical proficiency required to fully
>appreciate White et al.'s proofs and thus have to rely on second-hand
>interpretations.)

	I can see how allowing the addition of a (potentially unbounded)
number of hidden units could enable a back-prop architecture to learn
arbitrary functions. But in this sense, any procedure that builds up
a look-up table or random-access memory (with some interpolation
capability to cover the instances not explicitly stored) using an
appropriate set of rules to add units is equally general, and probably
more efficient than backprop in terms of the time complexity of
learning (cf. Baum's proposal for more powerful learning algorithms);
a toy sketch of such a procedure appears below. However, look-up
tables can be combinatorially intractable in terms of memory (space)
complexity. This brings us to the issue of searching the architectural
space along with the weight space in an efficient manner. There has
already been some work in this direction (Fahlman's cascade
correlation architecture, Ash's DNC, Honavar & Uhr's generative
learning, Hanson's meiosis networks, and some recent work on GA-NN
hybrids).
We have been investigating methods to constrain the search in the 
architectural space (using heuristic controls / representational bias :-) ). 
I would like to hear from others who might be working on related issues.
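To make the look-up-table point concrete, here is a minimal sketch (in
Python, with made-up names and an arbitrary tolerance parameter; it is
not anyone's published algorithm) of a learner that grows a table of
exemplars under a simple rule for adding entries and interpolates
between them:

# A minimal sketch of a "grow a look-up table with interpolation"
# learner.  The rule for adding units: store a new exemplar whenever
# the current table's prediction is off by more than some tolerance.
# The class name and the tolerance value are made up for illustration.

import math

class GrowingTable:
    def __init__(self, tolerance=0.1):
        self.tolerance = tolerance
        self.exemplars = []          # list of (x, y) pairs; x is a tuple

    def predict(self, x):
        # Interpolate by inverse-distance weighting over stored exemplars.
        if not self.exemplars:
            return 0.0
        num, den = 0.0, 0.0
        for xs, ys in self.exemplars:
            d = math.dist(x, xs)
            if d == 0.0:
                return ys            # exact hit: pure table look-up
            w = 1.0 / d
            num += w * ys
            den += w
        return num / den

    def learn(self, x, y):
        # "Add a unit" (a stored exemplar) only when interpolation fails.
        if abs(self.predict(x) - y) > self.tolerance:
            self.exemplars.append((x, y))

Each learning step here is cheap compared with a gradient-descent pass,
and with enough stored exemplars the predictions can approximate any
reasonable target; the catch, as noted above, is that the number of
stored exemplars (the space cost) can grow combinatorially with the
dimensionality of the input.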
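And a rough sketch of what searching the architectural space along
with the weight space can look like in the simplest case: train a
single hidden layer by gradient descent and add a hidden unit whenever
the training error plateaus. This is meant only to illustrate the
general idea; it is not cascade correlation, DNC, or any other
published procedure, and the parameter values are arbitrary:

# Sketch of a constructive learner: gradient descent on the weights,
# plus an architectural move (add a hidden unit) when the loss stops
# improving.  Not any published algorithm; parameters are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def train_growing_net(X, y, max_hidden=20, epochs=2000, lr=0.1,
                      patience=50, min_improvement=1e-4):
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(1, d))       # hidden weights (1 unit)
    v = rng.normal(scale=0.1, size=1)            # output weights
    best, stall = np.inf, 0
    for _ in range(epochs):
        H = np.tanh(X @ W.T)                     # hidden activations
        pred = H @ v
        err = pred - y
        loss = float(np.mean(err ** 2))
        # Gradient step on both weight layers (mean squared error).
        v -= lr * 2.0 * (H.T @ err) / n
        dH = np.outer(err, v) * (1.0 - H ** 2)   # backprop through tanh
        W -= lr * 2.0 * (dH.T @ X) / n
        # Architectural move: add a unit when the loss plateaus.
        if loss < best - min_improvement:
            best, stall = loss, 0
        else:
            stall += 1
            if stall >= patience and W.shape[0] < max_hidden:
                W = np.vstack([W, rng.normal(scale=0.1, size=(1, d))])
                # New unit starts with zero output weight, so the
                # network's predictions are initially unchanged.
                v = np.append(v, 0.0)
                stall = 0
    return W, v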

Vasant Honavar (honavar at cs.wisc.edu)



