CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?

Asim Roy ataxr at IMAP1.ASU.EDU
Mon May 26 13:21:26 EDT 1997


Dave,

I am posting the responses I have received so far. Some of
the responses provide a great deal of insight into connectionist
learning and neuroscience and their interactions (see, in
particular, the second note by Dr. Peter Cariani). I have also
tried to provide answers to two frequently asked questions.

I hope all of this will generate more interest in the questions
being raised about connectionist learning. As you can see below,
perhaps other questions need to be raised. The original posting is
attached below for reference.

Asim Roy
Arizona State University
============================
ANSWERS TO SOME FREQUENTLY ASKED QUESTIONS:

a) Humans get stuck in local minima all the time. So what is
wrong with algorithms getting stuck in local minima?

RESPONSE:

We can only claim that humans are sometimes "unable to learn."
We cannot make any claim beyond that. And this phenomenon of
being "unable to learn" does not necessarily imply "getting stuck
in a local minimum." Inability to learn may be due to a number of
reasons, including insufficient information, inability to extract
the relevant features of a problem, insufficient reward or
punishment, and so on. Again, to reiterate: "inability to learn"
does not imply "getting stuck in a local minimum."

Perhaps this misconception has been promoted in order to
justify certain algorithms and their weak learning
characteristics.
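To make the algorithmic side of the question concrete, here is a minimal sketch (not from the original post; the loss function and step size are illustrative) of plain gradient descent getting stuck in a local minimum of a one-dimensional loss:

```python
def f(x):
    # A loss with two minima: a global one near x = -1 and a
    # shallower local one near x = +1 (the 0.3*x term breaks the tie).
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    # Derivative of f
    return 4 * x * (x**2 - 1) + 0.3

x = 1.5  # start in the basin of the local minimum
for _ in range(1000):
    x -= 0.01 * grad(x)

# x settles near +0.96, the local minimum; the global minimum near
# x = -1 is lower, but gradient descent from this start never finds it.
print(x, f(x), f(-1.0))
```

This is the precise, narrow sense in which an algorithm "gets stuck": the update rule converges to whichever basin it starts in. The point of the response above is that human failures to learn need not be of this kind.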


b) Why do you say classical connectionist learning is
memoryless? Isn't memory actually in the weights?

RESPONSE:

Memoryless learning implies there is no EXPLICIT storage of any
learning example in the system in order to learn. In classical
connectionist learning, the weights of the net are adjusted
whenever a learning example is presented, but the example itself
is promptly forgotten. There is no EXPLICIT storage of any
presented example in the system. That is the generally accepted
view of "adaptive" or "on-line learning systems."
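As a sketch of what "memoryless" means here (the LMS-style update rule and all names are illustrative, not from the original post): each example nudges the weights and is then discarded; only the weights survive.

```python
# A minimal memoryless on-line learner: a linear unit trained with an
# LMS-style update. Each example updates the weights and is then gone --
# the system keeps no record of any example it has seen.

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def learn_one(x, target):
    """Adjust weights from a single example, then let the example go."""
    global bias
    y = sum(w * xi for w, xi in zip(weights, x)) + bias
    err = target - y
    for i, xi in enumerate(x):
        weights[i] += lr * err * xi
    bias += lr * err
    # (x, target) is not retained anywhere past this point

# A stream of addition examples, each seen exactly once:
for x, t in [([2, 2], 4), ([1, 3], 4), ([0, 5], 5)]:
    learn_one(x, t)

# The weights have changed, but the system cannot replay or recall
# any individual example -- which is exactly the scenario below.
```

The contrast drawn in the anecdote that follows is with a learner that also stores the presented examples explicitly and can recall them on demand.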

Imagine such a system "planted" in some human brain. And suppose
we want to train it to learn addition. So we provide the first
example - say, 2 + 2 = 4. The system promptly adjusts the weights
of the net and forgets the particular example. It has done what
it is supposed to do - adjust the weights, given a learning
example. Suppose you then ask this "human," fitted with this
learning algorithm: "How much is 2 + 2?" Since it has only seen
one example and has not yet fully grasped the rule for adding
numbers, it would probably give a wrong answer. So you, as the
teacher, perhaps might ask at that point: "I just told you that
2 + 2 = 4. Remember?" And this "human" might respond: "Very
honestly, I don't recall you ever having said that! I am very
sorry." And this would continue to happen after every example you
present to this "human"!!!

So do you think there is memory in those "weights"? Do you think
humans are like that?

Please send any comments on these issues directly to me
(asim.roy at asu.edu). All comments/criticisms/suggestions are
welcome. And all good science depends on vigorous debate.

Asim Roy
Arizona State University

============================

