Tech Report Announcement: CMU-CS-90-100
Scott.Fahlman@SEF1.SLISP.CS.CMU.EDU
Fri Mar 9 16:15:43 EST 1990
*** Please do not forward this to other mailing lists or newsgroups. ***
Tech Report CMU-CS-90-100 is now available, after some unfortunate delays
in preparation. This is a somewhat more detailed version of the paper that
will be appearing soon in "Advances in Neural Information Processing
Systems 2" (also known as the NIPS-89 Proceedings).
To request a copy of the TR, send a note containing the TR number and your
physical mail address to "catherine.copetas@cs.cmu.edu". Please *try* not
to respond to the whole mailing list.
People who requested a preprint of our paper at the NIPS conference should
be getting this TR soon, so please don't send a redundant request right
away. If you don't get something in a week or two, then try again.
I'll be making an announcement to this list sometime soon about how to get
Common Lisp code implementing the algorithm described in this TR. No C
version is available at present.
===========================================================================
The Cascade-Correlation Learning Architecture
Scott E. Fahlman and Christian Lebiere
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Technical Report CMU-CS-90-100
ABSTRACT
Cascade-Correlation is a new architecture and supervised learning algorithm
for artificial neural networks. Instead of just adjusting the weights in a
network of fixed topology, Cascade-Correlation begins with a minimal
network, then automatically trains and adds new hidden units one by one,
creating a multi-layer structure. Once a new hidden unit has been added to
the network, its input-side weights are frozen. This unit then becomes a
permanent feature-detector in the network, available for producing outputs
or for creating other, more complex feature detectors. The
Cascade-Correlation architecture has several advantages over existing
algorithms: it learns very quickly, the network determines its own size and
topology, it retains the structures it has built even if the training set
changes, and it requires no back-propagation of error signals through the
connections of the network.
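
As a concrete picture of the outer loop sketched in the abstract, here is a
rough illustration in Python/NumPy. This is NOT the TR's implementation (the
released code is Common Lisp, as noted above): it uses plain gradient steps
rather than Quickprop, a single candidate unit rather than a candidate pool,
sigmoid units throughout, and learning rates, epoch counts, and stopping
thresholds chosen only so that the small XOR demonstration runs.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_outputs(H, T, W, lr=1.0, epochs=3000):
    # Delta-rule training of the output weights only.
    # H: pattern activations incl. bias column; T: targets; W: output weights.
    for _ in range(epochs):
        Y = sigmoid(H @ W)
        W += lr * H.T @ ((T - Y) * Y * (1.0 - Y)) / len(H)
    return W, T - sigmoid(H @ W)           # trained weights and residual errors

def train_candidate(H, E, lr=1.0, epochs=3000):
    # Train one candidate unit's input weights to maximize S, the summed
    # magnitude of covariance between its activation and the residual errors.
    w = rng.normal(scale=0.5, size=H.shape[1])
    for _ in range(epochs):
        v = sigmoid(H @ w)
        sign = np.sign((v - v.mean()) @ (E - E.mean(axis=0)))   # per output
        grad = H.T @ (((E - E.mean(axis=0)) @ sign) * v * (1.0 - v))
        w += lr * grad / len(H)             # gradient *ascent* on S
    return w

# Tiny demonstration on XOR.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

H = np.hstack([X, np.ones((len(X), 1))])    # minimal net: inputs + bias only
W = rng.normal(scale=0.5, size=(H.shape[1], T.shape[1]))

for _ in range(8):                          # add at most 8 hidden units
    W, E = train_outputs(H, T, W)
    if np.max(np.abs(E)) < 0.1:             # all outputs close enough: stop
        break
    # Train a candidate on the current residuals, then freeze its input
    # weights by appending its activations as a fixed column that every
    # later unit (and the output layer) can see.
    w_cand = train_candidate(H, E)
    H = np.hstack([H, sigmoid(H @ w_cand)[:, None]])
    W = np.vstack([W, rng.normal(scale=0.5, size=(1, T.shape[1]))])

print("hidden units:", H.shape[1] - 3,
      "outputs:", np.round(sigmoid(H @ W).ravel(), 2))

The point of the sketch is the shape of the loop: the output weights are
retrained after each installation, while every installed unit's incoming
weights stay fixed, which is what makes the units permanent feature
detectors that later units can build on.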