Dynamic Binding
janetw@cs.uq.oz.au
Tue Nov 9 20:25:30 EST 1993
On binding using tensors:
A second approach to the binding problem (which also differs from the
phase approach) is the use of tensors or their compressed representations
in convolution/correlation models. This approach has been used since the
early 1980s for modeling temporal and contextual bindings in human memory.
When dealing with several variables, it can help to think of hidden-unit (HU)
space as a compressed representation of their tensor product (see Method 2 below).
The terminology differs across disciplines, which makes it harder to find
appropriate connections. The following are some references I have come across:
Tensors:
Smolensky (papers go back at least to 1984 with Riley; see 1990, Artificial
Intelligence);
Pike (1984, Psych Review), a comparison of convolution and matrix distributed
memory systems;
Humphreys, Bain and Pike (1989, Psych Review), who called them 3D matrices,
showed the use of a tensor for binding context, cue and target in memory
storage, and how to access both semantic and episodic information from the
memory;
Sloman and Rumelhart had a memory model that looked like a feedforward net
with sigma-pi units, which used essentially the same underlying maths with
respect to the binding aspects (date?, ed. Healy, The Estes volume);
Halford et al use tensors as the mapping process in analogical reasoning
tasks, specifically looking at the limits of human capacity for processing
interacting dimensions (1993, eds Holyoak and Barnden, Advances in
Connectionist and Neural Computational Theory Vol 2);
Wiles et al (1992, Univ of Qld TR218) review the models of Halford et al
and Humphreys et al and show the link between them.
Convolution/correlation models:
Murdock (1982, Psych Review);
Eich (1982, 1985, Psych Review);
Plate (1991, IJCAI; 1992 & 1993, NIPS), holographic reduced representations.
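
To make the convolution/correlation idea concrete, here is a minimal sketch
of binding by circular convolution in the spirit of Plate's HRRs (numpy-style
Python; the sizes and names are illustrative, not from any of the papers
above):

    import numpy as np

    n = 512
    rng = np.random.default_rng(0)
    # Random vectors with elements drawn from N(0, 1/n), as in Plate's setup.
    cue = rng.normal(0.0, 1.0 / np.sqrt(n), n)
    target = rng.normal(0.0, 1.0 / np.sqrt(n), n)

    # Binding: circular convolution, computed via FFT. This compresses the
    # n x n outer product of cue and target down to n elements.
    trace = np.real(np.fft.ifft(np.fft.fft(cue) * np.fft.fft(target)))

    # Decoding: correlation, i.e. convolution with the involution of the
    # cue, which returns a noisy version of the target.
    inv_cue = np.concatenate(([cue[0]], cue[:0:-1]))
    noisy = np.real(np.fft.ifft(np.fft.fft(inv_cue) * np.fft.fft(trace)))
    print(np.dot(noisy, target))   # close to 1: target recovered

The price of the compression is noise: the decoded vector has to be cleaned
up against a list of known items, which is why these models pair the trace
with an item memory.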
NB: There are differences in the use of tensors. E.g., to encode a predicate
F(x,y,z), where x, y and z are vectors:
Method 1: Smolensky would create roles for each variable in the triple,
r1, r2, r3, and then create a representation, T, of the triple as
T = r1*x + r2*y + r3*z
where * is the tensor (or outer) product operation and + is the usual
addition of vectors (also called linear superposition). A composite memory
requires a vector to identify each item, e.g. i1, and is formed by
superimposing all such representations, i.e.,
M = i1*(r1*x1 + r2*y1 + r3*z1) + i2*(r1*x2 + r2*y2 + r3*z2) + ...
Method 1 allows binding of arbitrarily sized tuples using a tensor, M, of
rank 3, but does not represent the interactions between the variables. It
seems plausible that phase locking would be one way of implementing Method 1.
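
As a minimal sketch of Method 1 (numpy-style Python, assuming orthonormal
role vectors so that unbinding is exact; all names are illustrative):

    import numpy as np

    n = 8
    r1, r2, r3 = np.eye(n)[:3]          # orthonormal roles (an assumption)
    rng = np.random.default_rng(1)
    x, y, z = rng.normal(size=(3, n))   # filler vectors for the triple

    # T = r1*x + r2*y + r3*z, with * the tensor (outer) product.
    T = np.outer(r1, x) + np.outer(r2, y) + np.outer(r3, z)

    # Probing T with a role vector recovers the bound filler; the cross
    # terms vanish because the roles are orthonormal.
    print(np.allclose(r2 @ T, y))       # True

Taking a further outer product of T with an item vector, e.g. i1, gives one
term of the rank-3 composite memory M above.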
Method 2: In the approach of Humphreys et al and Halford et al, the tensor
would be the outer product of all three variables (like a 3D matrix), i.e.
T = x*y*z
The memory would be formed by superimposing all such tensor representations,
M = x1*y1*z1 + x2*y2*z2 + x3*y3*z3 + x4*y4*z4 + ...
Method 2 does not require a unique vector for each item, nor role vectors,
and the interactions between variables are accessible. But there are
practical limits to the size of the tensor: Halford estimates that humans
can process up to 4 independent variables in parallel, which he models as a
tensor of rank 5.
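
A corresponding sketch of Method 2, with np.einsum forming the triple outer
products (again, sizes and names are illustrative):

    import numpy as np

    n = 64
    rng = np.random.default_rng(2)
    # Two stored triples, (x1, y1, z1) and (x2, y2, z2).
    x1, y1, z1, x2, y2, z2 = rng.normal(0.0, 1.0 / np.sqrt(n), (6, n))

    # M = x1*y1*z1 + x2*y2*z2, with * the tensor (outer) product.
    M = (np.einsum('i,j,k->ijk', x1, y1, z1)
         + np.einsum('i,j,k->ijk', x2, y2, z2))

    # Probing with two of the three variables retrieves a noisy version
    # of the third, so the interactions between variables stay accessible.
    probe = np.einsum('ijk,i,j->k', M, x1, y1)
    print(np.dot(probe, z1) / (np.linalg.norm(probe) * np.linalg.norm(z1)))

The cosine printed at the end is close to 1; it would be exactly 1 if the
stored vectors were orthogonal.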
In the memory work of Humphreys et al, tensors of rank 3 are used (context,
cue and target). If a context is not available, then a unit vector is
substituted, effectively accessing the average of all the other items
(a "semantic" memory). This allows both context sensitive and context-
insensitive access processes over the same information.
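
The context-free access can be sketched the same way; the constant probe
below stands in for the unit vector (an illustrative reading of Humphreys
et al, not their exact construction):

    import numpy as np

    n = 64
    rng = np.random.default_rng(3)
    # Nonnegative context vectors, so a constant probe weights them evenly.
    c1, c2 = rng.uniform(0.0, 1.0, (2, n))
    cue, t1, t2 = rng.normal(0.0, 1.0 / np.sqrt(n), (3, n))

    # The same cue stored with two different contexts and targets.
    M = (np.einsum('i,j,k->ijk', c1, cue, t1)
         + np.einsum('i,j,k->ijk', c2, cue, t2))

    # Episodic access: probe with a specific context; the retrieved
    # vector is weighted toward that context's target.
    episodic = np.einsum('ijk,i,j->k', M, c1, cue)
    print(np.dot(episodic, t1), np.dot(episodic, t2))   # first is larger

    # "Semantic" access: probe with a constant vector in the context
    # slot, which blends (roughly averages) the targets for that cue.
    u = np.ones(n)
    semantic = np.einsum('ijk,i,j->k', M, u, cue)
    print(np.dot(semantic, t1), np.dot(semantic, t2))   # similar weights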
------------------------
Janet Wiles
Departments of Computer Science and Psychology
University of Queensland
QLD 4072 AUSTRALIA
email: janetw@cs.uq.oz.au