No subject


Mon Jun 5 16:42:55 EDT 2006


Thessaloniki  <vgkabs at egnatia.ee.auth.gr>
                                          

My background is in engineering and learning. Since neural
computing is interdisciplinary by nature, I have found the
experiments and results of Shadmehr and Holcomb [1997]
appealing. In what follows I attempt to extend the question
posed.

A system with explicit memory capacity is an interesting
approach to learning. But I wonder: isn't that what we
already do in many neurocomputing paradigms
(back-propagation etc.), where the data are stored in
memory and fed in again and again until a satisfactory
level of performance is achieved? That is, the training set
is employed as if it had been stored in the memory of the
system.
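
To illustrate the point, here is a minimal sketch (mine, not
taken from any particular system; the data, the linear model
and the learning rate are made up for the example): a typical
gradient-based learner keeps the whole training set in memory
and sweeps over it epoch after epoch until the error is small
enough.

    # Minimal illustrative sketch: the whole training set is held
    # in memory and swept over repeatedly, epoch after epoch,
    # until performance is satisfactory.
    training_set = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

    w, b = 0.0, 0.0              # linear model y = w*x + b
    learning_rate = 0.05

    for epoch in range(1000):    # the same stored data, fed again and again
        total_error = 0.0
        for x, target in training_set:
            y = w * x + b
            error = y - target
            total_error += error * error
            w -= learning_rate * error * x   # gradient step for squared error
            b -= learning_rate * error
        if total_error < 1e-6:   # stop at a satisfactory level of performance
            break

    print(epoch, round(w, 3), round(b, 3))   # approaches w = 2, b = 1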

It is true that the essence of learning is generalization.
As the cost of both memory and computing time keeps
dropping, it is all the more likely that we will see, in
the future, systems with on-board "memory capacity" and an
enhanced learning capacity.

Nevertheless, learning and behavior will probably not
improve dramatically by adding memory alone, because with
on-board memory we will simply be doing faster and more
efficiently what we are doing already. My proposition is
that brain-like learning behavior could perhaps be
simulated by changing the type of data we operate on. That
is, a learning example usually involves a single type of
data, typically drawn from Euclidean space. Other types of
data have also been considered, such as propositional
statements. But it is very likely that only some type of
hybrid information-handling system could simulate the brain
convincingly.

However, when dealing with disparate data, a "problem" is
that such data are usually not handled with mathematical
consistency. Hence notions such as "convergence in the
limit" are not meaningful. The practical advantage of
mathematically consistent hybrid learning is that it could
lead to reliable learning models and learning machines with
predictable behavior. In this context we have treated
partially ordered sets, in particular lattices, we have
defined a mathematical metric on them, and we have obtained
some remarkable learning results on various benchmark data
sets.
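
To make the lattice idea concrete, here is a purely
illustrative sketch (the interval representation, the
valuation v([a, b]) = b - a, and the prototype intervals
below are assumptions made for this example, not the exact
formulation in our papers). Data are closed intervals on the
real line, a plain number x being the trivial interval
[x, x]; join and meet are taken endpoint-wise, and the
difference of the valuations of join and meet gives a
metric, which can drive, say, a nearest-prototype classifier
over mixed point and interval data.

    # Illustrative sketch: a metric on a lattice of real intervals.
    # With the valuation v([a, b]) = b - a (on "generalized" intervals,
    # where a > b is allowed for meets), d(u, w) = v(join) - v(meet)
    # is a metric; it reduces to |a1 - a2| + |b1 - b2|.

    def join(u, w):
        (a1, b1), (a2, b2) = u, w
        return (min(a1, a2), max(b1, b2))   # smallest interval containing both

    def meet(u, w):
        (a1, b1), (a2, b2) = u, w
        return (max(a1, a2), min(b1, b2))   # may be a "generalized" interval

    def valuation(u):
        a, b = u
        return b - a

    def dist(u, w):
        return valuation(join(u, w)) - valuation(meet(u, w))

    # Nearest-prototype classification over mixed point and interval data;
    # the labelled prototypes are hypothetical.
    prototypes = {"small": (0.0, 1.0), "large": (8.0, 10.0)}

    def classify(u):
        return min(prototypes, key=lambda label: dist(u, prototypes[label]))

    print(classify((0.5, 0.5)))   # a point datum      -> "small"
    print(classify((7.0, 9.0)))   # an interval datum  -> "large"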

In conclusion, it seems to us that sophisticated learning
behavior is only in part a question of memory; it is also a
question of the type of data being processed.
