batch-continuous-one-shot
ruizdeangulo%ispgro.cern.ch@BITNET.CC.CMU.EDU
Wed Oct 30 04:55:26 EST 1991
Referring to the batch-continuous-one-shot learning discussion: in the reference
below we describe an algorithm that can be labeled as one-shot learning. I
think it fits well with the Plutowski and White method described recently.
>What we do (as reported in the tech report by Plutowski & White)
>is sequentially grow the training set, first finding
>an "optimal" training set of size 1, then fitting the network to this
>training set, appending the training set with a new exemplar selected from
>the set of available candidates, obtaining a training set of size 2 which
>is "approximately optimal", fitting this set, appending a third exemplar,
>etc.,
>continuing the process until the network fit obtained by training over the
>exemplars fits the rest of the available examples within the desired tolerance.
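The quoted procedure can be sketched roughly as follows. This is my own minimal illustration, not the Plutowski & White code: it stands in a plain least-squares line for their neural network, picks the starting exemplar arbitrarily rather than optimally, and selects each new exemplar simply as the worst-fit remaining candidate.

```python
def fit_line(points):
    """Least-squares line y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx
    if denom == 0:                      # a single point: flat line through it
        return 0.0, sy / n
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def grow_training_set(candidates, tol=0.1):
    """Greedily grow the training set: refit, then append the candidate
    the current fit predicts worst, until all candidates fit within tol."""
    training = [candidates[0]]          # size-1 training set (arbitrary here)
    remaining = list(candidates[1:])
    a, b = fit_line(training)
    while remaining:
        errs = [abs(y - (a * x + b)) for x, y in remaining]
        worst = max(errs)
        if worst <= tol:                # the rest already fit well enough
            break
        training.append(remaining.pop(errs.index(worst)))
        a, b = fit_line(training)       # refit after appending the exemplar
    return training, (a, b)
```

For data lying on a line, this stops after very few exemplars, which is the point of the scheme: the network only ever trains on the small selected set.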
The MDL (Minimally Disturbing Learning) algorithm introduces a new exemplar
while minimizing an estimate of the loss increment (the error increase) over
the old patterns. It performs a small search for this optimization, but
whatever the stopping point of that search, perfect recall of the new exemplar
is obtained. The network is not forced to assume any special kind of local
representation.
Ruiz de Angulo, V., Torras, C. (1991). Minimally Disturbing Learning. In
Proceedings of IWANN 91. Springer-Verlag.