Distributed Representations
Bo Xu
ITGT500 at INDYCMS.BITNET
Mon Jun 24 10:45:52 EDT 1991
Ali Minai made a good point about where representations are considered to reside.
Let's look at his message first:
>Speaking of which layers to apply the definition to, I think that in a
>feed-forward associative network (analog or binary), the hidden neurons
>(or all the weights) are the representational units. The input neurons
>merely distribute the prior part of the association, and the output neurons
>merely produce the posterior part. The latter are thus a "recovery mechanism"
>designed to "decode" the distributed representation of the hidden units and
>recover the "original" item. Of course, in a heteroassociative system, the
>"recovered original" is not the same as the "stored original". I realize that
>this is stretching the definition of "representation", but it seems quite
>natural to me.
I think that, by the criterion of where representations exist,
representations can be classified into two different types:
(1). External representations ---- The representations that exist at the
     interface layers (input and/or output layers). They are
     responsible for information transmission between the network
     and the outside world (coding the input information at the input
     layer and decoding the output information at the output layer).
(2). Internal representations ---- The representations that exist at the
     hidden layers. These representations encode the
     mappings from the input field to the output field. These mappings
     are the core of the neural net.
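The distinction above can be sketched in a few lines of code. This is a minimal illustrative example, not from the original discussion: a tiny feed-forward net with fixed random weights, where the input and output vectors play the role of external representations and the hidden activations form the internal (distributed) representation.

```python
import math
import random

# Illustrative sketch of the two kinds of representation in a
# feed-forward net. The input vector x and output vector y are the
# *external* representations (the interface to the outside world);
# the hidden activations h are the *internal* distributed
# representation that encodes the input-to-output mapping.
# Weights are random and untrained, purely for illustration.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One fully connected layer of sigmoid units (no bias)."""
    return [sigmoid(sum(w * v for w, v in zip(row, inputs)))
            for row in weights]

n_in, n_hidden, n_out = 4, 3, 2
W1 = [[random.uniform(-1, 1) for _ in range(n_in)]
      for _ in range(n_hidden)]
W2 = [[random.uniform(-1, 1) for _ in range(n_hidden)]
      for _ in range(n_out)]

x = [1.0, 0.0, 1.0, 0.0]   # external representation (input coding)
h = layer(x, W1)           # internal distributed representation
y = layer(h, W2)           # external representation (output decoding)

print("hidden:", [round(v, 3) for v in h])
print("output:", [round(v, 3) for v in y])
```

Note that while x can be chosen freely (the coding scheme), h is whatever the weights and topology make it, which is the point argued below.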
If I understand correctly, Ali Minai is referring to the internal
representations only and neglecting the external representations. The
internal representations are very important. However, they are determined
by the topology of the network, and we cannot change them unless we change
the network topology. The topology of most current networks ensures that
the internal representations are mixed distributed representations (as I
pointed out several days ago). Their working mechanisms are still a black
box.
Without changing the topology of the network, all we can choose and select
are the external representations. They should not be neglected.
>Zillions of issues remain unaddressed by this formulation too, especially
>those of consistent measurement. I feel that each domain and situation
>will have to supply its own specifics.
>I am not sure I understand Bo Xu's assertion that analog representations
>are "more natural". Certainly, to approximate a parabola (which I have
>done hundreds of times with different neural nets) would imply using an
>analog representation, but it is not clear if that is so natural for
>classifying apples and pears. Using different analog values to indicate
>intra-class variations is reasonable and, under specific circumstances,
>might even be provably better than a binary representation. But I would
>be very hesitant to generalize over all possible circumstances. In any
>case, a global characterization of distributed representation should depend
>on specifics only for details, and should apply to both discrete and analog
>representations.
It's true that there will be zillions of issues in practical applications.
However, precisely because of this fact, it would be very difficult (if not
impossible) to study all of these zillions of issues before drawing any
conclusions. Some generalization based on limited studies is probably
necessary and helpful when facing such a situation.
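To make the analog-vs-binary point from the quoted passage concrete, here is a small hypothetical sketch (the fruit example and the "ripeness" attribute are illustrative assumptions, not part of the original exchange): a binary one-hot code carries class identity only, while an analog code can use graded values to express intra-class variation on the same units.

```python
# Hypothetical input encodings for an apples-vs-pears classifier,
# illustrating the analog-vs-binary distinction discussed above.

def binary_encoding(fruit):
    """One-hot code: class identity only, no intra-class variation."""
    return {"apple": [1, 0], "pear": [0, 1]}[fruit]

def analog_encoding(fruit, ripeness):
    """The class unit carries a graded value in (0, 1], e.g. ripeness,
    so the same code expresses both the class and variation within it."""
    code = [0.0, 0.0]
    code[{"apple": 0, "pear": 1}[fruit]] = ripeness
    return code

print(binary_encoding("apple"))       # [1, 0]
print(analog_encoding("apple", 0.7))  # [0.7, 0.0]
print(analog_encoding("pear", 0.3))   # [0.0, 0.3]
```

Whether the extra analog information helps or hurts classification is exactly the kind of circumstance-specific question raised in the quoted comments.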
I want to thank Ali Minai for his comments. All of them are very valuable
and thought-provoking.
Bo Xu
Indiana University
ITGT500 at INDYCMS.BITNET