Analog Content-addressable Memories (ACAM)

pollack at nmsu.csnet
Mon Apr 18 15:45:56 EDT 1988


I have thought about this issue as well, and at one point tried to build a "noisy auto-associative" network, in which randomly perturbed input patterns were mapped back onto the pure source. It didn't work very well, but it sounds like Nowlan has gotten something similar to work. One problem with the standard form of back-propagation is that the sigmoid tends to act like BSB (Brain-State-in-a-Box), pushing values to their extrema of 0 and 1.
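
For concreteness, here is a minimal sketch of that noisy auto-associative setup: a one-hidden-layer network trained by back-prop on squared error to map perturbed patterns back onto their clean sources. The NumPy framing, dimensions, noise level, and learning rate are all illustrative assumptions, not a reconstruction of the original experiment.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Illustrative training basis: a few analog patterns inside (0, 1)^d,
    # kept away from the extrema so the sigmoid output can match them.
    d, n_hidden, n_patterns = 8, 16, 5
    patterns = rng.uniform(0.1, 0.9, (n_patterns, d))

    # One-hidden-layer auto-associator; weights, rate, and noise level
    # are arbitrary choices for the sketch.
    W1 = rng.normal(0.0, 0.1, (d, n_hidden))
    W2 = rng.normal(0.0, 0.1, (n_hidden, d))
    lr, noise_sd = 0.5, 0.05

    for step in range(20000):
        x = patterns[rng.integers(n_patterns)]
        noisy = x + rng.normal(0.0, noise_sd, d)  # randomly perturbed input
        h = sigmoid(noisy @ W1)
        y = sigmoid(h @ W2)           # output layer squashes toward 0 and 1
        dy = (y - x) * y * (1 - y)    # squared-error gradient at the output
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= lr * np.outer(h, dy)
        W1 -= lr * np.outer(noisy, dh)

    # Iterate the learned map from a perturbed pattern to see whether
    # the stored pattern behaves like an attractor.
    x = patterns[0] + rng.normal(0.0, noise_sd, d)
    for _ in range(10):
        x = sigmoid(sigmoid(x @ W1) @ W2)
    print(np.abs(x - patterns[0]).max())

The output sigmoid is where the BSB-like squashing shows up: it is cheap for the net to drive outputs toward 0 or 1, and comparatively hard to hold them at intermediate analog values.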

Something that would be really nice is a generative memory, in the following sense: given a finite training basis of analog patterns, the resulting ACAM would have a theoretically infinite number of attractor states, each in some sense "similar" to the training patterns.
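
To make that desideratum concrete, here is a deliberately trivial toy (not a solution): linear dynamics that project every state onto the span of the training basis, so the fixed points form a continuum containing the basis patterns. The infinitely many stable states are "similar" to the training set only in the weak sense of lying in its span; all names and dimensions are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    d, k = 8, 3
    basis = rng.uniform(size=(k, d))  # finite training basis of analog patterns

    # Orthogonal projector onto the span of the basis patterns.
    Q, _ = np.linalg.qr(basis.T)      # d x k, orthonormal columns
    P = Q @ Q.T

    def step(x):
        return P @ x                  # linear "retrieval" dynamics

    x = rng.normal(size=d)            # arbitrary probe state
    x = step(x)                       # lands on the subspace in one step
    print(np.allclose(step(x), x))    # True: a fixed point, one of a continuum

Of course a whole subspace of fixed points is not the same as isolated attractors whose basins are shaped by the training data, which is presumably closer to what a useful generative ACAM would require.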

It's possible that this type of memory already exists, but was considered a failed experiment by a dynamical systems researcher. (Similarly, "slow glass" may already exist in the failed experiments of a chemist at Polaroid!)

This type of ACAM would be nice if, say, one were storing analog patterns representing sentence meanings. The resultant memory might then be able to represent a much larger set of meanings as stable patterns.

I have, as yet, no idea how to do this.

Jordan


