two research reports available
john moody
moody-john at YALE.ARPA
Tue Mar 21 16:11:08 EST 1989
********* FOR CONNECTIONISTS ONLY - PLEASE DO NOT FORWARD ***********
**************** TO OTHER BBOARDS/ELECTRONIC MEDIA *******************
FAST LEARNING IN MULTI-RESOLUTION HIERARCHIES
John Moody
Research Report YALEU/DCS/RR-681, February 1989
ABSTRACT
A class of fast, supervised learning algorithms is presented. They
use local representations, hashing, and multiple scales of
resolution to approximate functions which are piece-wise
continuous. Inspired by Albus's CMAC model, the algorithms learn
orders of magnitude more rapidly than typical implementations of
back propagation, while often achieving comparable qualities of
generalization. Furthermore, unlike most traditional function
approximation methods, the algorithms are well suited for use in
real-time adaptive signal processing. Unlike simpler adaptive
systems, such as linear predictive coding, the adaptive linear
combiner, and the Kalman filter, the new algorithms are capable of
efficiently capturing the structure of complicated non-linear
systems. As an illustration, the algorithm is applied to the
prediction of a chaotic time series.
NOTE: This research report will appear in Advances in Neural
Information Processing Systems, edited by David Touretzky, to be
published in April 1989 by Morgan Kaufmann Publishers, Inc. The
author gratefully acknowledges financial support under ONR grant
N00014-89-J-1228, ONR grant N00014-86-K-0310, AFOSR grant
F49620-88-C0025, and a Purdue Army subcontract.
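
For readers who would like a concrete feel for the class of
algorithms described above, the short Python sketch below
illustrates the general idea: several hashed lookup tables at
increasingly fine resolutions, trained with a simple delta rule and
applied to one-step prediction of a chaotic map. All particulars
(number of levels, table size, learning rate, the logistic-map
example) are assumptions chosen for illustration; this is not the
implementation reported in RR-681.

import numpy as np

class MultiResolutionTable:
    """Illustrative sketch (details assumed, not taken from RR-681):
    several hashed lookup tables at increasingly fine resolutions,
    trained with an LMS / delta-rule update."""

    def __init__(self, n_levels=4, table_size=1024, lr=0.3):
        self.n_levels = n_levels
        self.table_size = table_size
        self.lr = lr
        # one weight table per resolution level, all weights start at zero
        self.tables = np.zeros((n_levels, table_size))

    def _indices(self, x):
        # Quantize the input on a grid that is twice as fine at each level,
        # then hash the (level, grid cell) pair into that level's table.
        idx = []
        for level in range(self.n_levels):
            cells = 2 ** (level + 2)
            cell = tuple(int(v) for v in np.floor(np.asarray(x) * cells))
            idx.append(hash((level, cell)) % self.table_size)
        return idx

    def predict(self, x):
        # The output is the sum of one looked-up weight per level.
        return sum(self.tables[l, i] for l, i in enumerate(self._indices(x)))

    def update(self, x, target):
        # Delta rule: spread the prediction error over the active cells,
        # so each example touches only n_levels weights.
        idx = self._indices(x)
        err = target - sum(self.tables[l, i] for l, i in enumerate(idx))
        for l, i in enumerate(idx):
            self.tables[l, i] += self.lr * err / self.n_levels
        return err

# Toy usage: one-step prediction of the chaotic logistic map x' = 4x(1-x).
model = MultiResolutionTable()
x = 0.3
for _ in range(5000):
    x_next = 4.0 * x * (1.0 - x)
    model.update([x], x_next)
    x = x_next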
***********************************************************
FAST LEARNING IN NETWORKS OF LOCALLY-TUNED PROCESSING UNITS
John Moody and Christian J. Darken
Research Report YALEU/DCS/RR-654, October 1988, Revised March
1989
ABSTRACT
We propose a network architecture which uses a single internal
layer of locally-tuned processing units to learn both
classification tasks and real-valued function approximations. We
consider training such networks in a completely supervised manner,
but abandon this approach in favor of a more computationally
efficient hybrid learning method which combines self-organized and
supervised learning. Our networks learn faster than back
propagation for two reasons: the local representations ensure that
only a few units respond to any given input, thus reducing
computational overhead, and the hybrid learning rules are linear
rather than nonlinear, thus leading to faster convergence. Unlike
many existing methods for data analysis, our network architecture
and learning rules are truly adaptive and are thus appropriate for
real-time use.
NOTE: This research report will appear in Neural Computation, a
new journal edited by Terry Sejnowski and published by MIT Press.
The work was supported by ONR grant N00014-86-K-0310, AFOSR grant
F49620-88-C0025, and a Purdue Army subcontract.
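
The hybrid learning method can be illustrated in the same spirit.
In the Python sketch below, a self-organized stage (plain k-means)
places the centers of Gaussian locally-tuned units, widths are set
by a nearest-neighbor heuristic, and the supervised stage fits only
the output weights, which is a linear least-squares problem. These
particular choices are assumptions made for illustration and should
not be read as the exact procedure of RR-654.

import numpy as np

def train_rbf_hybrid(X, y, n_units=20, n_kmeans_iters=10, seed=0):
    """Illustrative sketch (details assumed, not taken from RR-654) of a
    hybrid scheme: k-means places the centers of locally-tuned Gaussian
    units (self-organized stage), widths come from a nearest-neighbor
    heuristic, and only the output weights are fit by a supervised
    linear least-squares solve."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)          # shape (n_samples, n_inputs)
    y = np.asarray(y, dtype=float)          # shape (n_samples,)

    # Self-organized stage: plain k-means on the inputs alone.
    centers = X[rng.choice(len(X), n_units, replace=False)]
    for _ in range(n_kmeans_iters):
        assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(n_units):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)

    # Unit widths: distance to each center's nearest neighboring center.
    d = np.sqrt(((centers[:, None, :] - centers[None]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    widths = d.min(axis=1)

    # Supervised stage: the output weights solve a *linear* problem.
    act = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * widths ** 2))
    weights, *_ = np.linalg.lstsq(act, y, rcond=None)

    def predict(Xq):
        Xq = np.asarray(Xq, dtype=float)    # shape (n_queries, n_inputs)
        a = np.exp(-((Xq[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * widths ** 2))
        return a @ weights

    return predict

# Toy usage: approximate y = sin(2*pi*x) from 200 noisy 1-D samples.
rng = np.random.default_rng(1)
Xtr = rng.uniform(0.0, 1.0, size=(200, 1))
ytr = np.sin(2 * np.pi * Xtr[:, 0]) + 0.05 * rng.normal(size=200)
predict = train_rbf_hybrid(Xtr, ytr, n_units=20)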
***********************************************************
Copies of both reports can be obtained by sending a request to:
Judy Terrell
Yale Computer Science
PO Box 2158 Yale Station
New Haven, CT 06520
(203)432-1200
e-mail:
terrell at cs.yale.edu
terrell at yale.arpa
terrell at yalecs.bitnet