FW: Connectionist symbol processing: any progress?
Fry, Robert L.
FRYRL at f1groups.fsd.jhuapl.edu
Mon Aug 17 09:45:07 EDT 1998
On Sat, 15 Aug 1998, Mitsu Hadeishi wrote:
>
> Lev,
>
> Okay, so we agree on the following:
>
> Recurrent ANNs have the computational power required. The only
> thing at issue is the learning algorithm.
>
>
Lev Goldfarb responded:
>"The ONLY thing at issue" IS the MAIN thing at issue, because the
>simulation of the Turing machine is just a clever game, while an adequate
>model of inductive learning should, among many other things, change our
>understanding of what science is (see, for example, Alexander Bird,
>Philosophy of Science, McGill-Queen's University Press, 1998).
I have not seen this reference, but will certainly seek it out. I
certainly agree with the point, especially in the context of
connectionist symbolic processing. Consider a computational paradigm
where a single- or multiple-neuron layer is viewed as an information
channel. It is different, however, from classical Shannon channels in
that the neuron transduces information (as opposed to transmitting
it) from input to an internal representation, which is in turn used
to select an
output code. In a conventional Shannon channel, a channel input code
is selected and then inserted into a channel which will degrade this
information relative to a receiver that seeks to observe it. That is,
one can distinguish between a communications system that effects the
transmission of information and a physical system that effects the
transduction of information. The engineering objective (as stated by
Shannon) was to maximize the entropy of the source and match this to
the channel capacity. Alternatively, consider a neural computational
paradigm where the computational objective is to maximize the
information transduced and match this to the output entropy of the
neuron. That is, transduction and transmission are complementary
processes of information transfer. Information transfer from physical
system to physical system requires both.
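(As a purely illustrative aside of my own, and not a model anyone has
proposed here, the following toy Python sketch computes the
information transduced by a single noisy binary threshold neuron,
I(X;Y), alongside the output entropy H(Y) it would be matched
against; the source, weights, threshold, and noise level are all
hypothetical.)

    import numpy as np

    rng = np.random.default_rng(0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Hypothetical source: two equiprobable binary inputs.
    n = 200_000
    X = rng.integers(0, 2, size=(n, 2))

    # Weights, firing threshold, and output-flip probability (all made up).
    w, theta, flip = np.array([1.0, 1.0]), 0.5, 0.1
    Y = (X @ w > theta).astype(int)
    Y = np.where(rng.random(n) < flip, 1 - Y, Y)   # noisy transduction

    # Output entropy H(Y).
    p_y = np.bincount(Y, minlength=2) / n
    H_Y = entropy(p_y)

    # Conditional entropy H(Y|X), averaged over the four input patterns.
    H_Y_given_X = 0.0
    for pattern in range(4):
        mask = (2 * X[:, 0] + X[:, 1]) == pattern
        p = np.bincount(Y[mask], minlength=2) / mask.sum()
        H_Y_given_X += mask.mean() * entropy(p)

    print("H(Y)   = %.3f bits" % H_Y)
    print("I(X;Y) = %.3f bits transduced" % (H_Y - H_Y_given_X))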
It is interesting that Warren Weaver, co-author of the classic 1949
book "The Mathematical Theory of Communication," recognized this
distinction and even made the following statement: "The word
communication will be used here in a very broad sense to include all
procedures by which one mind may affect another." This is a very
interesting choice of words.
Why is such a perspective important? Does it provide an unambiguous
way of defining learning, symbols, input space, output space, and
computational objectives/metrics, and a basis for an inductive theory
of neural computation? The neural net community is often at odds with
itself over common definitions and interpretations of these terms.
After all, regardless of the learning objective function or error
criterion, biological and artificial neurons must, through learning,
either modify what they measure (e.g., synaptic efficacies and
possibly intradendritic delays), modify what signals they generate
for a given input (e.g., variations in firing threshold), or do some
combination of the two.
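(Again as an illustrative aside of my own: for a simple threshold
unit the two kinds of modification can be written down directly.
Learning may change what the neuron measures, its synaptic weights,
or what signal it generates for a given input, its firing threshold,
or both. The perceptron-style rule below is hypothetical and is not a
claim about any particular algorithm discussed in this thread.)

    import numpy as np

    class ThresholdNeuron:
        def __init__(self, n_inputs, theta=0.5, lr=0.05):
            self.w = np.zeros(n_inputs)  # synaptic efficacies: what is measured
            self.theta = theta           # firing threshold: what is generated
            self.lr = lr

        def fire(self, x):
            return int(x @ self.w > self.theta)

        def learn(self, x, target):
            err = target - self.fire(x)
            self.w += self.lr * err * x      # modify what is measured
            self.theta -= self.lr * err      # modify what is generated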