Rutgers Neural Network Seminar

noordewi@cs.rutgers.edu
Wed Feb 6 18:45:45 EST 1991


			  RUTGERS UNIVERSITY
	    Dept. of Computer Science/Dept. of Mathematics

	  Neural Networks Colloquium Series --- Spring 1991

			     Stephen Judd
			       Siemens

		      The Complexity of Learning
		    in Families of Neural Networks

			       Abstract

     What exactly does it mean for Neural Networks to `learn'?

     We formalize a notion of learning that characterizes the simple
training of feed-forward Neural Networks.  The formulation is intended
to model the objectives of the current mode of connectionist research
in which one searches for powerful and efficient `learning rules' to
stick in the `neurons'.
     By showing the learning problem to be NP-complete, we demonstrate
that in general the set of things that a network can learn to do is
smaller than the set of things it can do.  No reasonable learning rule
exists to train all families of networks.
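     The decision problem behind this claim can be stated formally.
The following is a sketch of the standard `loading' formulation in
Judd's framework -- an illustration, not necessarily the exact
statement used in the talk:

```latex
\textbf{LOADING} \\
\textit{Instance:} a feed-forward architecture $A$ (a directed
acyclic graph whose nodes compute functions from a fixed node-function
family) and a task $T$, a finite set of stimulus--response pairs. \\
\textit{Question:} is there an assignment of node functions to $A$
(a configuration) under which the network produces the required
response for every stimulus in $T$?
```

NP-completeness of LOADING means that, unless P = NP, no learning
rule can solve every such training instance in time polynomial in
the size of the network and task.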
     Naturally this provokes questions about easier cases, and we
explore how the problem does or does not get easier as the neurons are
made more powerful, or as various constraints are placed on the
architecture of the network.  We study one particular family of
networks called `shallow architectures' which are defined in such a
way as to bound their depth but let them grow very wide -- a
description inspired by certain neuro-anatomical structures.
     The results seem to be robust in the face of all choices for what
the neurons are able to compute individually.

			  February 13, 1991
	       Busch Campus --- 4:30 p.m., room 217 SEC

		 host: Mick Noordewier (201/932-3698)
   finger noordewi@cs.rutgers.edu for further schedule information



