Distributed Representations

Ali Ahmad Minai aam9n at hagar2.acc.Virginia.EDU
Mon Jun 24 22:29:34 EDT 1991


This is in response to Bo Xu's last posting regarding distributed
representations. I think one of the problems is a basic incompatibility
between our notions of "representation" and of where representations
exist. I would like to clarify my earlier posting on this point.
 
I wrote:

>>Speaking of which layers to apply the definition to, I think that in a
>>feed-forward associative network (analog or binary), the hidden neurons
>>(or all the weights) are the representational units. The input neurons
>>merely distribute the prior part of the association, and the output neurons
>>merely produce the posterior part. The latter are thus a "recovery mechanism"
>>designed to "decode" the distributed representation of the hidden units and
>>recover the "original" item. Of course, in a heteroassociative system, the
>>"recovered original" is not the same as the "stored original". I realize that
>>this is stretching the definition of "representation", but it seems quite
>>natural to me.

To which Bo replied:
 
>I think according to the criterion of where representations exist, the
>representations can be classified into two different types:
> 
>(1). External representations ---- The representations that exist at the
>         interface layers (input and/or output layers).  They are
>         responsible for the information transmission between the network
>         and the outside world (coding the input information at the input
>         layer and decoding the output information at the output layer).
> 
>(2). Internal representations ---- The representations that exist at the
>         hidden layers.  These representations encode the
>         mappings from the input field to the output field. The mappings
>         are the core of the neural net.
> 
>If I understand correctly, Ali Minai is referring to the internal
>representations only, and neglecting the external representations.  The
>internal representations are very important.  However, these
>representations are determined by the topology of the network, and we cannot
>change them unless we change the network topology.  The topologies of most
>current networks ensure that the internal representations are mixed
>distributed representations (as I pointed out several days ago).  Their
>working mechanisms are still a black box.
> 
>Without changing the topology of the network, what we can choose and
>select are the external representations only.  They should not be neglected.

First, let me state what I meant by the "stored" and "recovered"
representations in the heteroassociative case. We can view the
heteroassociation of an input vector U with an output vector V in a
feed-forward network as encoding a representation of the combined
vector UV over the hidden units of the network. This is what I call
"storage". There is a further requirement that, given U, some
mechanism should be able to produce V over the output units, thus
"completing the pattern". The process of doing this is what I call
"recovery" (or "recall"). The way I see it (and, I believe, the way
most other connectionists see it) is that the representational part
of the network consists of its "internals" --- either the weights, or
the hidden units. Far from being uncontrollable, as Bo Xu states,
these are *precisely* the things that we *do* control --- not in a
micro sense, but through complex global schemes such as training
algorithms. The prior to be stored, which Bo takes to be the
representation, is, to me, just a given that has been through some
unspecified preprocessing. It is the "object" to be represented
(though I agree that all objects are themselves representations).

