Your abstract is very interesting. It sounds like it will be a
great discussion. The idea of using a different kind of learning,
one which explicitly stores training data, is one I've worked on
in the past. A few questions that crossed my mind while reading
the abstract are listed below:
On Wed, 23 Apr 1997, Asim Roy wrote:
> Classical connectionist learning is based on two key ideas.
> First, no training examples are to be stored by the learning
> algorithm in its memory (memoryless learning).
I'm a bit unclear about this. Aren't the weights of the network
trained to implicitly store ALL the training examples? I would
have said that connectionist learning is based on the idea that
"ALL training examples are to be stored by the learning algorithm
in its memory"!
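To make my question concrete, here is a toy sketch of what I mean
(my own illustration, in Python/NumPy, not anything from the
abstract): for a single linear unit trained by gradient descent,
the learned weights end up being a weighted sum over the training
inputs, so in some sense ALL of the examples are stored, just
implicitly.

    import numpy as np

    # A single linear unit trained by batch gradient descent on
    # squared error. Every weight update is a sum over the training
    # examples, so the final weights carry a trace of ALL of them --
    # stored, but implicitly.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 5))        # 20 training examples
    y = X @ rng.normal(size=5)          # targets from a hidden rule

    w = np.zeros(5)
    eta = 0.05
    for _ in range(200):
        err = X @ w - y                 # per-example errors
        w -= eta * X.T @ err / len(X)   # update sums over examples

    # w is now a linear combination of the rows of X: the training
    # set is spread across the weights, but no individual example
    # can be read back out of w.
    print(w)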
If I understand correctly, the first key idea is that the
training examples are not EXPLICITLY stored in a way in which
they could be retrieved or reconstructed. Perhaps my confusion
lies in the word "stored". How would you define it?
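By contrast, here is what I would call EXPLICIT storage (again a
toy sketch of my own): a nearest-neighbour learner keeps every
training example verbatim, and any stored example can be
retrieved exactly.

    import numpy as np

    # A memory-based learner: 1-nearest-neighbour stores the
    # training set verbatim, and any example can be read back out.
    class NearestNeighbour:
        def fit(self, X, y):
            self.X = np.asarray(X, dtype=float)  # stored verbatim
            self.y = np.asarray(y)
        def predict(self, x):
            i = np.argmin(np.linalg.norm(self.X - x, axis=1))
            return self.y[i]

    nn = NearestNeighbour()
    nn.fit([[0.0, 0.0], [1.0, 1.0]], [0, 1])
    print(nn.predict(np.array([0.9, 1.1])))  # -> 1
    print(nn.X[0])  # the first training example, recovered exactly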
I would further say that a number of dynamical recurrent
networks, like those discussed at my NIPS workshop
(http://running.dgcd.doc.ca/NIPS/), do explicitly store presented
examples. In fact, training algorithms like back-propagation
through time have been criticized for having to explicitly store
previous input and hidden unit patterns and thus consume extra
memory resources. But I guess you're probably aware of this,
since you have Lee on your panel.
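For anyone on the list who hasn't run into this, the memory cost
looks roughly like the following (a toy simple-recurrent-network
sketch of my own; the names and sizes are made up for
illustration):

    import numpy as np

    # Back-propagation through time on a toy recurrent net: the
    # forward pass must STORE every input and hidden state so the
    # backward pass can revisit them -- memory grows with sequence
    # length T.
    T, n_in, n_h = 50, 3, 8
    rng = np.random.default_rng(1)
    W_in = rng.normal(scale=0.1, size=(n_h, n_in))
    W_rec = rng.normal(scale=0.1, size=(n_h, n_h))

    xs = rng.normal(size=(T, n_in))     # all inputs kept
    hs = [np.zeros(n_h)]                # all hidden states kept
    for t in range(T):                  # forward pass
        hs.append(np.tanh(W_in @ xs[t] + W_rec @ hs[-1]))

    grad_W_rec = np.zeros_like(W_rec)
    delta = np.ones(n_h)                # stand-in error at step T
    for t in reversed(range(T)):        # backward pass in time
        delta = delta * (1 - hs[t + 1] ** 2)   # through the tanh
        grad_W_rec += np.outer(delta, hs[t])   # needs stored hs[t]
        delta = W_rec.T @ delta         # pass error back one step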
> The second key idea is that of local
> learning - that the nodes of a network are autonomous learners.
> Local learning embodies the viewpoint that simple, autonomous
> learners, such as the single nodes of a network, can in fact
> produce complex behavior in a collective fashion. This second
> idea, in its purest form, implies a predefined net being provided
> to the algorithm for learning, such as in multilayer perceptrons.
In what sense are the learners autonomous? In the MLP, each
learner requires a feedback error value provided by another
node (and ultimately an outside source) in order to update.
I would say it's NOT autonomous.
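The dependence I have in mind is visible in the standard backprop
update for a two-layer net (toy numbers of my own): the hidden
units' deltas are manufactured from the output unit's delta and
the outgoing weights, i.e. from information the hidden units do
not own.

    import numpy as np

    # Backprop in a tiny two-layer net. Note where delta_h comes
    # from: the OUTPUT unit's error and the hidden-to-output
    # weights, not anything local to the hidden units.
    x = np.array([0.5, -0.2])                 # input
    W1 = np.array([[0.1, 0.4], [0.3, -0.2]])  # input -> hidden
    w2 = np.array([0.7, -0.5])                # hidden -> output
    target = 1.0

    h = np.tanh(W1 @ x)                       # hidden activations
    yhat = w2 @ h                             # linear output
    delta_out = yhat - target                 # output unit's error

    delta_h = (w2 * delta_out) * (1 - h ** 2) # non-local feedback
    grad_W1 = np.outer(delta_h, x)            # hidden-layer update
    print(grad_W1)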
> Second, strict local learning (e.g. back propagation type
> learning) is not a feasible idea for any system, biological
> or otherwise.
If it is not feasible for "any" learning system, then any system
which attempts to use it must fail. Therefore, working
connectionist networks must not use strict local learning.
Therefore, strict local learning cannot be one of the
fundamental ideas of the connectionist approach. Therefore, why
are we discussing it?
I must have misunderstood something here... any ideas where I
went off-track?
-------------
Dr. Stefan C. Kremer, Research Scientist, Artificial Neural Systems
Communications Research Centre, 3701 Carling Ave.,
P.O. Box 11490, Station H, Ottawa, Ontario K2H 8S2
WWW: http://running.dgcd.doc.ca/~kremer/index.html
Tel: (613) 990-8175  Fax: (613) 990-8369
E-mail: Stefan.Kremer at crc.doc.ca