New Book: Neural Network Learning and Expert Systems

steve gallant sg at corwin.CCS.Northeastern.EDU
Tue Jan 19 11:59:10 EST 1993


        NEURAL NETWORK LEARNING
          And Expert Systems

           by Steve Gallant

	The book is intended as a text, a reference, and a collection of
some of my work.


	CONTENTS

PART I:  Basics

1  Introduction and Important Definitions 

1.1  Why Connectionist Models?
1.2  The Structure of Connectionist Models 
1.3  Two Fundamental Models:  Multi-Layer Perceptrons and Backpropagation Networks
1.4  Gradient Descent 
1.5  Historic and Bibliographic Notes 
1.6  Exercises
1.7  Programming Project 
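
	Not from the book: a minimal Python sketch of the gradient descent
idea from Section 1.4, minimizing a one-variable quadratic. The function,
step size, and iteration count are arbitrary choices for illustration.

# Gradient descent sketch: minimize f(w) = (w - 3)^2.
def gradient_descent(lr=0.1, steps=50):
    w = 0.0                     # arbitrary starting point
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)  # f'(w) for f(w) = (w - 3)^2
        w -= lr * grad          # step against the gradient
    return w

print(gradient_descent())       # converges toward the minimum at w = 3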

2  Representation Issues 

2.1  Representing Boolean Functions 
2.2  Distributed Representations 
2.3  Feature Spaces and ISA Relations 
2.4  Representing Real-Valued Functions
2.5  Example:  Taxtime! 
2.6  Exercises
2.7  Programming Projects
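
	Not from the book: Section 2.1's theme, Boolean functions realized
by threshold cells, fits in a few lines of Python. The weights and
threshold below are one arbitrary choice that happens to realize AND.

def threshold_cell(w, theta):
    # Fires (outputs 1) iff the weighted input sum reaches the threshold.
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

AND = threshold_cell((1, 1), 1.5)                      # realizes Boolean AND
print([AND((a, b)) for a in (0, 1) for b in (0, 1)])   # [0, 0, 0, 1]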

PART II:  Learning in Single Layer Models

3  Perceptron Learning and the Pocket Algorithm 

3.1  Introduction
3.2  Perceptron Learning for Separable Sets of Training Examples 
3.3  The Pocket Algorithm for Non-separable Sets of Training Examples 
3.4  Khachiyan's Linear Programming Algorithm 
3.5  Exercises
3.6  Programming Projects
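
	Not from the book: a loose Python rendering of perceptron learning
with the pocket idea of Chapter 3 -- run perceptron updates on randomly
chosen examples, and keep "in your pocket" the weights that survive the
longest run of correct classifications. The book gives the precise
algorithm and its convergence results; details here are simplified.

import random

def pocket_perceptron(examples, trials=1000):
    # examples: list of (x, y) pairs, x a tuple of numbers, y in {-1, +1}.
    n = len(examples[0][0])
    w = [0.0] * (n + 1)                  # weights, with the bias at index 0
    pocket, run, best_run = list(w), 0, 0

    def predict(w, x):
        s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
        return 1 if s >= 0 else -1

    for _ in range(trials):
        x, y = random.choice(examples)   # pick a random training example
        if predict(w, x) == y:
            run += 1
            if run > best_run:           # longest run so far: pocket the weights
                pocket, best_run = list(w), run
        else:
            w[0] += y                    # classic perceptron update on a mistake
            for i, xi in enumerate(x):
                w[i + 1] += y * xi
            run = 0
    return pocket

# Toy usage: a separable AND-like problem with +/-1 targets.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
print(pocket_perceptron(data))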

4  Winner-Take-All Groups or Linear Machines 

4.1  Introduction
4.2  Generalizes Single-Cell Models 
4.3  Perceptron Learning for Winner-Take-All Groups
4.4  The Pocket Algorithm for Winner-Take-All Groups 
4.5  Kessler's Construction, Perceptron Cycling, and the Pocket Algorithm Proof 
4.6  Independent Training
4.7  Exercises
4.8  Programming Projects
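
	Not from the book: a hedged sketch of the winner-take-all ("linear
machine") update treated in Chapter 4 -- one weight vector per class,
predict by largest net input, and on a mistake reward the correct class
and penalize the chosen one.

def train_linear_machine(examples, n_classes, epochs=50):
    # examples: (x, y) pairs with x a tuple of numbers and y a class index.
    n = len(examples[0][0])
    W = [[0.0] * n for _ in range(n_classes)]   # one weight vector per class

    def winner(x):                              # winner-take-all prediction
        scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in W]
        return scores.index(max(scores))

    for _ in range(epochs):
        for x, y in examples:
            p = winner(x)
            if p != y:                          # reward class y, penalize class p
                for i, xi in enumerate(x):
                    W[y][i] += xi
                    W[p][i] -= xi
    return W, winner

# Toy usage: three separable classes in the plane.
W, predict = train_linear_machine([((2, 0), 0), ((0, 2), 1), ((-2, -2), 2)], 3)
print(predict((3, 1)))   # expect class 0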

5  Autoassociators and One-Shot Learning 

5.1  Introduction
5.2  Linear Autoassociators and the Outer Product Training Rule 
5.3  Anderson's BSB Model
5.4  Hopfield's Model 
5.5  The Traveling Salesman Problem
5.6  The Cohen-Grossberg Theorem
5.7  Kanerva's Model 
5.8  Autoassociative Filtering for Feed-Forward Networks 
5.9  Concluding Remarks
5.10  Exercises
5.11  Programming Projects
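
	Not from the book: a small Python sketch of the outer-product
(Hebbian) training rule of Section 5.2, with a Hopfield-style recall pass.
The synchronous sweep below is purely illustrative; the convergence
results discussed in the chapter assume asynchronous updates.

def hopfield_weights(patterns):
    # Outer-product rule; patterns are lists with entries in {-1, +1}.
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                # no self-connections
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, x, sweeps=5):
    # Synchronous sweeps for brevity; the theory assumes asynchronous updates.
    for _ in range(sweeps):
        x = [1 if sum(wij * xj for wij, xj in zip(row, x)) >= 0 else -1
             for row in W]
    return x

# Toy usage: store one pattern, recall it from a one-bit corruption.
p = [1, -1, 1, 1, -1, -1]
W = hopfield_weights([p])
print(recall(W, [1, -1, -1, 1, -1, -1]))   # recovers p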

6  Mean Squared Error (MSE) Algorithms 

6.1  Motivation
6.2  MSE Approximations
6.3  The Widrow-Hoff Rule or LMS Algorithm 
6.4  ADALINE
6.5  Adaptive Noise Cancellation 
6.6  Decision-Directed Learning 
6.7  Exercises
6.8  Programming Projects
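
	Not from the book: the Widrow-Hoff (LMS) rule of Section 6.3 in a
few lines of Python -- each example nudges the weights along the negative
gradient of that example's squared error. The learning rate and epoch
count are arbitrary.

def lms_train(examples, lr=0.05, epochs=200):
    # examples: (x, target) pairs with x a tuple of numbers, target a number.
    n = len(examples[0][0])
    w = [0.0] * (n + 1)                  # bias at index 0
    for _ in range(epochs):
        for x, target in examples:
            out = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            err = target - out           # the error drives the update
            w[0] += lr * err
            for i, xi in enumerate(x):
                w[i + 1] += lr * err * xi
    return w

# Toy usage: fit out = 2*x - 1; weights approach [-1, 2].
print(lms_train([((0.0,), -1.0), ((1.0,), 1.0), ((2.0,), 3.0), ((3.0,), 5.0)]))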

7  Unsupervised Learning 

7.1  Introduction 
7.2  k-Means Clustering 
7.3  Topology Preserving Maps 
7.4  ART1 
7.5  ART2 
7.6  Using Clustering Algorithms for Supervised Learning
7.7  Exercises
7.8  Programming Projects
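
	Not from the book: plain k-means clustering (Section 7.2) alternates
two steps -- assign each point to its nearest center, then move each
center to the mean of its points. A hedged Python sketch:

import random

def k_means(points, k, iters=20):
    centers = random.sample(points, k)            # arbitrary initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assignment step
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        for j, cl in enumerate(clusters):         # update step; keep empty as-is
            if cl:
                centers[j] = tuple(sum(col) / len(cl) for col in zip(*cl))
    return centers

# Toy usage: two well-separated blobs.
print(k_means([(0, 0), (0, 1), (10, 10), (10, 11)], k=2))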

PART III:  Learning in Multi-Layer Models

8  The Distributed Method and Radial Basis Functions 

8.1  Rosenblatt's Approach
8.2  The Distributed Method 
8.3  Examples
8.4  How Many Cells? 
8.5  Radial Basis Functions 
8.6  A Variant: The Anchor Algorithm 
8.7  Scaling, Multiple Outputs, and Parallelism
8.8  Exercises
8.9  Programming Projects
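
	Not from the book: one common radial basis function recipe related
to Section 8.5 -- fix Gaussian centers, then solve the output weights by
linear least squares. A NumPy sketch; the centers, width, and solver are
illustrative choices, not necessarily the book's.

import numpy as np

def rbf_fit(X, y, centers, width=1.0):
    def design(Xq):
        # Gaussian activation of each input against each fixed center.
        d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))
    w, *_ = np.linalg.lstsq(design(X), y, rcond=None)  # linear output layer
    return lambda Xq: design(Xq) @ w

# Toy usage: interpolate sin on four points, centers placed at the data.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
f = rbf_fit(X, np.sin(X).ravel(), centers=X)
print(f(np.array([[1.5]])))   # close to sin(1.5) = 0.997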

9  Computational Learning Theory and the BRD Algorithm 

9.1  Introduction to Computational Learning Theory
9.2  A Learning Algorithm for Probabilistic Bounded Distributed Concepts 
9.3  The BRD Theorem 
9.4  Noisy Data and Fallback Estimates 
9.5  Bounds for Single-Layer Algorithms 
9.6  Fitting Data by Limiting the Number of Iterations 
9.7  Discussion
9.8  Exercises
9.9  Programming Project
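
	Not from the book: for orientation in Chapter 9, the standard
sample-size bound for a consistent learner over a finite hypothesis class
H (the BRD theorem's bounds differ in their details):

\[
  m \;\ge\; \frac{1}{\epsilon}\left(\ln |H| + \ln \frac{1}{\delta}\right)
\]

With at least m examples, any hypothesis consistent with all of them has
true error at most epsilon, with probability at least 1 - delta.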

10  Constructive Algorithms 

10.1  The Tower and Pyramid Algorithms
10.2  The Cascade-Correlation Algorithm 
10.3  The Tiling Algorithm 
10.4  The Upstart Algorithm 
10.5  Pruning
10.6  Easy Learning Problems 
10.7  Exercises
10.8  Programming Projects

11  Backpropagation 

11.1  Introduction
11.2  The Backpropagation Algorithm 
11.3  Derivation 
11.4  Practical Considerations 
11.5  NP-Completeness 
11.6  Comments
11.7  Exercises
11.8  Programming Projects
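
	Not from the book: a minimal backpropagation sketch in Python/NumPy
-- one hidden layer of sigmoid units trained on XOR with squared error and
plain gradient descent. Layer size, learning rate, and epoch count are
arbitrary.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(hidden=4, lr=0.5, epochs=5000, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)              # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # backward pass: output deltas
        d_h = (d_out @ W2.T) * h * (1 - h)    # hidden deltas via the chain rule
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    return lambda Xq: sigmoid(sigmoid(Xq @ W1 + b1) @ W2 + b2)

net = train_xor()
print(net(np.array([[0, 0], [0, 1], [1, 0], [1, 1]])).round(2))  # near 0,1,1,0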

12  Backpropagation: Variations and Applications 

12.1  NETtalk 
12.2  Backpropagation Through Time 
12.3  Handwritten Character Recognition 
12.4  Robot Manipulator with Excess Degrees of Freedom 
12.5  Exercises
12.6  Programming Projects

13  Simulated Annealing and Boltzmann Machines 

13.1  Simulated Annealing
13.2  Boltzmann Machines 
13.3  Remarks
13.4  Exercises
13.5  Programming Project
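
	Not from the book: the core accept/reject loop of simulated
annealing (Section 13.1) in Python -- try a random neighbor, always accept
improvements, accept uphill moves with probability exp(-delta/T), and cool
T as you go. The schedule and neighborhood are arbitrary illustrations.

import math, random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
    x, c, t = x0, cost(x0), t0
    best, best_c = x, c
    for _ in range(steps):
        x2 = neighbor(x)
        c2 = cost(x2)
        # Accept improvements outright; uphill moves with prob e^(-delta/T).
        if c2 < c or random.random() < math.exp(-(c2 - c) / t):
            x, c = x2, c2
            if c < best_c:
                best, best_c = x, c
        t *= cooling                     # geometric cooling schedule
    return best

# Toy usage: minimize a bumpy one-dimensional function.
f = lambda x: x * x + 3 * math.sin(5 * x)
print(anneal(f, lambda x: x + random.uniform(-0.5, 0.5), x0=4.0))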

PART IV:  Neural Network Expert Systems

14  Expert Systems and Neural Networks 

14.1  Expert Systems 
14.2  Neural Network Decision Systems 
14.3  MACIE and an Example Problem 
14.4  Applicability of Neural Network Expert Systems
14.5  Exercises
14.6  Programming Projects

15  Details of the MACIE System 

15.1  Inferencing and Forward Chaining 
15.2  Confidence Estimation 
15.3  Information Acquisition and Backward Chaining 
15.4  Concluding Comment
15.5  Exercises
15.6  Programming Projects

16  Noise, Redundancy, Fault Detection, and Bayesian Decision Theory 

16.1  Introduction
16.2  The High Tech Lemonade Corporation's Problem
16.3  The Deep Model and the Noise Model
16.4  Generating the Expert System 
16.5  Probabilistic Analysis
16.6  Noisy Single-Pattern Boolean Fault Detection Problems
16.7  Convergence Theorem
16.8  Comments 
16.9  Exercises
16.10  Programming Projects

17  Extracting Rules From Networks 

17.1  Why Rules?
17.2  What Kind of Rules?
17.3  Inference Justifications 
17.4  Rule Sets 
17.5  Conventional + Neural Network Expert Systems 
17.6  Concluding Remarks
17.7  Exercises
17.8  Programming Projects

18  Appendix: Representation Comparisons 

18.1  DNF Expressions and Polynomial Representability 
18.2  Decision Trees
18.3  Pi-Lambda Diagrams
18.4  Symmetric Functions and Depth Complexity
18.5  Concluding Remarks
18.6  Exercises

  References

364 pages, 156 figures.


	Available from MIT Press by calling (800) 356-0343
or (617) 625-8569.  

	A great stocking-stuffer, especially for friends with 
wide, flat ankles.

	SG

