"constructive" vs. "compositional" learning

INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU
Thu Dec 13 21:44:00 EST 1990


Something to keep in mind when describing "constructive", "destructive",
or "ontogenic" networks is the nature of learning.  In traditional
gradient-descent learning of fixed networks, the learning algorithm
finds the minimum (or minima) of a fixed energy landscape.  In these
"constructive" or "destructive" networks, learning algorithms
develop an energy landscape specifically designed to allow gradient-descent
methods to best solve the problem (or at least that is what _should_
be happening).
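The contrast can be sketched in a few lines.  The following is a toy
illustration, not anyone's actual training code: the quadratic landscape
E(w) = (w - 3)^2 is a stand-in for whatever fixed error surface a given
architecture and data set define, and gradient descent simply rolls
downhill on it.

```python
# Gradient descent on a *fixed* energy landscape (hypothetical toy example).
# A fixed network's landscape is set once by its architecture and data;
# learning only moves the weights downhill on that surface.

def energy(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0    # initial weight
lr = 0.1   # learning rate
for _ in range(100):
    w -= lr * grad(w)

# w has converged toward the landscape's minimum at w = 3
```

A "constructive" method, by contrast, would change the function energy()
itself (by adding units), not merely the point w at which it is evaluated.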

"Compositional Learning" (as used by J. Schmidhuber) is a method in which
useful sub-goals are developed and then strung together to solve a larger
goal.  In methods such as Cascade-Correlation, new hidden units are added
which change the error landscape so as to best allow gradient-descent
methods to find energy minima.  But examining Cascade-Correlation in
another light, we can say it is developing feature detectors which
represent useful sub-goals.  Properly connecting these sub-goals together
allows us to reach our final goal (error minimization).
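The "feature detector" view above can be made concrete with a small sketch
of how Cascade-Correlation scores candidate hidden units: each candidate's
activation is compared against the network's residual error, and the one
whose activation covaries most strongly with that error is installed.  The
data and variable names below are illustrative assumptions, not taken from
Fahlman and Lebiere's implementation.

```python
import numpy as np

# Hypothetical residual error over 50 training patterns, plus two
# candidate hidden units' activations over those same patterns.
rng = np.random.default_rng(0)
error = rng.normal(size=50)                        # residual error E_p
cand_a = 0.9 * error + 0.1 * rng.normal(size=50)   # tracks the error
cand_b = rng.normal(size=50)                       # unrelated to the error

def correlation_score(v, e):
    # Cascade-Correlation's covariance measure:
    #   S = | sum_p (V_p - Vbar)(E_p - Ebar) |
    return abs(np.sum((v - v.mean()) * (e - e.mean())))

# The candidate that best "detects" the remaining error becomes the new
# feature detector -- a useful sub-goal that reshapes the error landscape.
best = max([cand_a, cand_b], key=lambda v: correlation_score(v, error))
```

Here cand_a wins, since its activation mirrors the residual error; once
installed and frozen, it gives the output weights a feature with which
the remaining error is easy to reduce.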

-Thomas Edwards
