PAC learning in NNs survey
Arun Jagota
jagota at cse.ucsc.edu
Mon May 12 12:27:00 EDT 1997
The following refereed paper (47 pages, 118 references) is now available,
in PostScript form, from the Neural Computing Surveys web site:
http://www.icsi.berkeley.edu/~jagota/NCS
Probabilistic Analysis of Learning in Artificial Neural Networks:
The PAC Model and its Variants
Martin Anthony
Department of Mathematics,
The London School of Economics and Political Science
There are a number of mathematical approaches to the study of learning and
generalization in artificial neural networks. Here we survey the 'probably
approximately correct' (PAC) model of learning and some of its variants.
These models provide a probabilistic framework for the discussion of
generalization and learning. This survey concentrates on the sample
complexity questions in these models; that is, the emphasis is on how many
examples should be used for training. Computational complexity considerations
are briefly discussed for the basic PAC model. Throughout, the importance of
the Vapnik-Chervonenkis dimension is highlighted. Particular attention is
devoted to describing how the probabilistic models apply in the context of
neural network learning, both for networks with binary-valued output and
for networks with real-valued output.
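
For orientation, here is the flavor of the sample-complexity results the
survey treats; this is the well-known sufficiency bound of Blumer,
Ehrenfeucht, Haussler, and Warmuth from the general PAC literature, stated
here for context rather than quoted from the paper. If a class of
binary-valued functions has Vapnik-Chervonenkis dimension d, then

    m = O\left( \frac{1}{\epsilon} \left( d \log\frac{1}{\epsilon}
        + \log\frac{1}{\delta} \right) \right)

training examples suffice for any consistent learner to output, with
probability at least 1 - \delta, a hypothesis whose error is at most
\epsilon.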