book announcement: Neural Network Learning

Richard Knott rknott at cup.cam.ac.uk
Tue Aug 17 06:47:06 EDT 1999


Neural Network Learning
Theoretical Foundations
Martin Anthony
London School of Economics and Political Science
and Peter Bartlett
Australian National University

This book describes recent theoretical advances in the study of artificial
neural networks. It explores probabilistic models of supervised learning
problems, and addresses the key statistical and computational questions.
Research on pattern classification with binary-output networks is surveyed,
including a discussion of the relevance of the Vapnik-Chervonenkis
dimension and estimates of this dimension for several neural network
models. A model of classification by real-output networks is developed,
and the usefulness of classification with a 'large margin' is
demonstrated. The authors explain the role of scale-sensitive versions of
the Vapnik-Chervonenkis dimension in large margin classification and in
real-valued prediction. They also discuss the computational complexity of neural
network learning, describing a variety of hardness results, and outlining
two efficient constructive learning algorithms. The book is self-contained
and is intended to be accessible to researchers and graduate students in
computer science, engineering, and mathematics.
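
As a rough illustration of the 'shattering' idea behind the
Vapnik-Chervonenkis dimension discussed above, the following Python sketch
(not from the book; it assumes numpy and scipy are available) brute-forces
whether a small point set is shattered by linear threshold units, i.e.
whether every +/-1 labelling of the points is realised by some function
sign(w.x + b). A labelling is realisable exactly when the linear programme
y_i (w.x_i + b) >= 1 for all i is feasible.

    from itertools import product
    import numpy as np
    from scipy.optimize import linprog

    def separable(points, labels):
        """Check whether some linear threshold function sign(w.x + b)
        realises the given +/-1 labelling, via an LP feasibility test:
        y_i (w.x_i + b) >= 1 for all i (rescaling w makes >= 1 harmless)."""
        X = np.asarray(points, dtype=float)
        y = np.asarray(labels, dtype=float)
        n, d = X.shape
        # Variables z = (w_1, ..., w_d, b); each constraint is
        # -y_i * (x_i, 1) . z <= -1. Zero objective: feasibility only.
        A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
        res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(n),
                      bounds=[(None, None)] * (d + 1), method="highs")
        return res.success

    def shattered(points):
        """True iff linear threshold units realise all 2^n labellings."""
        return all(separable(points, labels)
                   for labels in product([-1.0, 1.0], repeat=len(points)))

    print(shattered([(0, 0), (1, 0), (0, 1)]))           # True
    print(shattered([(0, 0), (1, 0), (0, 1), (1, 1)]))   # False

Three points in general position in the plane come out shattered, while
four do not (the XOR labelling of the square is not linearly separable),
consistent with a VC-dimension of 3 for linear threshold units on R^2.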

Contents:
1. Introduction
Part I. Pattern Recognition with Binary-output Neural Networks
2. The pattern recognition problem
3. The growth function and VC-dimension
4. General upper bounds on sample complexity
5. General lower bounds
6. The VC-dimension of linear threshold networks
7. Bounding the VC-dimension using geometric techniques
8. VC-dimension bounds for neural networks
Part II. Pattern Recognition with Real-output Neural Networks
9. Classification with real values
10. Covering numbers and uniform convergence
11. The pseudo-dimension and fat-shattering dimension
12. Bounding covering numbers with dimensions
13. The sample complexity of classification learning
14. The dimensions of neural networks
15. Model selection
Part III. Learning Real-Valued Functions
16. Learning classes of real functions
17. Uniform convergence results for real function classes
18. Bounding covering numbers
19. The sample complexity of learning function classes
20. Convex classes
21. Other learning problems
Part IV. Algorithmics
22. Efficient learning
23. Learning as optimisation
24. The Boolean perceptron
25. Hardness results for feed-forward networks
26. Constructive learning algorithms for two-layered networks

1999   228 x 152 mm   416 pp
ISBN 0 521 57353 X   Hardback

For further information see http://www.cup.cam.ac.uk or http://www.cup.org

****************************************************************************
Richard Knott
STM Marketing Dept.
Cambridge University Press
The Edinburgh Building
Cambridge CB2 2RU, UK
email: rknott at cup.cam.ac.uk
tel: +44 (0)1223 325916
fax: +44 (0)1223 315052
Web: http://www.cup.cam.ac.uk
****************************************************************************



