No subject
Tue Jun 6 06:52:25 EDT 2006
"How can a neuron maximize the information it can transduce and match
this to its ability to transmit?" Posing this question leads [1] to
unambiguous definitions of learning, symbols, input space, and output
space based solely on logic, and consequently on probability theory and
entropy. Furthermore, it provides an unambiguous computational
objective and leads to a neuron, or system of neurons, that operates
inductively. The resulting neural structure is the Hopfield neuron,
which, for optimized transduction, obeys a modified form of Oja's
equation for spatial adaptation and performs intradendritic channel-delay
equilibration of inputs for temporal adaptation. For optimized
transmission, the model calls for a subtle adaptation of the firing
threshold to optimize its transmission rate.
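As a point of reference for the spatial-adaptation claim above, here is a
minimal sketch of the classic (unmodified) Oja rule, w += lr * y * (x - y*w)
with y = w.x; the "modified form" the post refers to is detailed in the cited
chapter, not here, so this baseline version is an assumption on my part:

```python
import numpy as np

def oja_update(w, x, lr=0.005):
    """One step of the classic Oja rule: w += lr * y * (x - y * w), y = w . x.
    The subtractive y*w term keeps the weight norm bounded near 1,
    so w converges toward the leading principal component of the input."""
    y = w @ x
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
# Synthetic inputs with most variance along the first axis.
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])
w = rng.normal(size=2)
for x in X:
    w = oja_update(w, x)
# After training, w points (up to sign) along the dominant input
# direction and has norm close to 1.
```

This illustrates the sense in which a Hebbian-style local rule performs
"spatial adaptation": the neuron aligns its weights with the direction of
maximum input variance using only quantities it can observe locally.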
This is not to say that the described model of neural computation is
"correct." Correctness is in the eye of the beholder and depends on the
application or theoretical goal pursued. However, this example does
point out that there are common aspects to all neural computational
problems and paradigms that in fact lend themselves to more precise
definitions of terms like "learning." These definitions arise more
naturally when the perspective of the neuron is taken. That is, it
observes its inputs (regardless of how we might represent them),
perhaps observes its own outputs, and, using all that it can observe,
executes a computational strategy that effects learning in such a
manner as to optimize its defined error formulation, formalized
objective criterion, or information-theoretic measure. Any adaptive
rule will lead to either (1) a different way of extracting information
from its inputs, (2) a different way of generating outputs given the
information that has been measured, or (3) both of these. I think
it is a worthwhile goal to pursue more rigorous
neuron-centric views of the terms used within the neural network
community, if for no other reason than to better focus exchanges and
debates between members of the community.
Bob Fry
[1] "A logical basis for neural network design," in Techniques and
Applications of Artificial Neural Networks, Vol. 3, Academic Press, 1998.