Summary (long): pattern recognition comparisons

Leonard Uhr uhr at cs.wisc.edu
Fri Aug 3 15:18:11 EDT 1990


Neural nets using backprop have only handled VERY SIMPLE images, usually in
8-by-8 arrays.  (We've used 32-by-32 arrays to investigate generalization in
logarithmically converging nets, but I don't know of any nets with complete
connectivity from one layer to the next that are that big.)  In sharp contrast,
pr/computer vision systems are designed to handle MUCH MORE COMPLEX images (e.g.
houses, furniture) in 128-by-128 or even larger inputs.  So I've been really
surprised to read statements to the effect that NNs have proved to be much
better.  What experimental evidence is there that NNs recognize images as
complex as those handled by computer vision and pattern recognition approaches?
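To put concrete numbers on the size gap above, here is a small sketch (my
arithmetic, not anything from the original post) of how many links complete
connectivity between two equal-sized layers requires at each of the image
sizes mentioned:

```python
# Links needed for complete connectivity between two equal-sized
# layers: every unit in one n-by-n layer connects to every unit in
# the next, so a single layer pair needs (n*n)**2 links.
def full_links(n):
    units = n * n
    return units * units

for n in (8, 32, 128):
    print(f"{n}x{n}: {full_links(n):,} links per layer pair")
# 8x8:   4,096 links
# 32x32: 1,048,576 links
# 128x128: 268,435,456 links
```

A fully connected layer pair over a 128-by-128 input needs over a quarter of
a billion links, which makes clear why such nets were not run at pr/vision
image sizes.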

  True it's hard to run good comparative experiments, but without them where are
we?  NNs re-introduce learning, which is great - except that to make learning
work we need to cut down and direct the explosive search at least as much as
with any other approach.  The brain is THE bag of tools that does the trick,
and it has a lot of structure (hierarchical convergence-divergence; local
links to relatively small numbers of units; families of feature-detectors)
that can substantially improve today's nets.  More powerful structures, basic
processes, and learning mechanisms are essential to replace weak learning
algorithms like delta and backprop that need O(N*N) links to guarantee
(eventual) success - hence can't even be run on images with more than a few
hundred pixels.
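The structural argument above can be made quantitative with a toy sketch.
The parameters here (a 4-by-4 local receptive field, each layer half the
side length of the one below) are illustrative assumptions, not Uhr's
actual architecture:

```python
# A minimal sketch of a logarithmically converging pyramid: each
# layer halves in side length, and each unit receives input only
# from a local k-by-k window of the layer below.
def pyramid_links(n, k=4):
    total = 0
    side = n
    while side > 1:
        side //= 2                        # next layer: half the width
        total += (side * side) * (k * k)  # local fan-in per unit
    return total

def full_links(n):
    return (n * n) ** 2                   # all-to-all, one layer pair

print(pyramid_links(128))  # 87,376 links for the whole pyramid
print(full_links(128))     # 268,435,456 links for ONE full layer pair
```

Under these assumptions, the entire convergent pyramid over a 128-by-128
image uses thousands of times fewer links than a single fully connected
layer pair, which is the point about O(N*N) in concrete form.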

Len Uhr


More information about the Connectionists mailing list