ART 1

Barbara K. Moore Bryant barb at ai.mit.edu
Tue Jan 25 18:21:48 EST 1994


I have a paper about ART 1 and pattern clustering and would be happy
to send it to you if you give me your mailing address.  Or, you can
look it up in

Moore, "ART 1 and Pattern Clustering," Proceedings of the 1988
Connectionist Summer School, Morgan-Kaufman publ., pp. 174-185.

I'd love to hear what you think.  In the paper I show that ART 1 does
in fact implement the leader clustering algorithm.  ART 1's
"stability" and "plasticity" are properties of the clustering
algorithm (and of the fact that the inputs and stored patterns are
binary strings), not of the underlying "neural" components.  A careful
reading of the paper and a look at the examples may suggest that a
different choice of distance metric or clustering algorithm would make
more sense in a particular application.  In fact, the final clusters
formed by ART 1 might be described by some as downright weird (see
Fig. 6 in my paper).  I show by example that other choices can be
implemented in a similar architecture: there is no algorithmic
constraint embodied in the architectural components of ART 1.
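
To make the correspondence concrete, here is a minimal sketch of
fast-learning ART 1 on binary vectors, written as a leader-style
clustering loop.  It is illustrative only: the simplified choice
function |x AND w| / (beta + |w|), the vigilance test
|x AND w| / |x| >= rho, the parameter values, and all of the names are
stand-ins of my own, not the full ART 1 equations from the paper.

from typing import List, Optional, Tuple

def art1_cluster(
    inputs: List[List[int]],
    rho: float = 0.7,
    beta: float = 0.5,
    templates: Optional[List[List[int]]] = None,
) -> Tuple[List[List[int]], List[int]]:
    """Cluster binary vectors; return the stored templates and cluster assignments."""
    templates = [] if templates is None else [list(w) for w in templates]
    assignments: List[int] = []
    for x in inputs:
        x_norm = sum(x)
        # Score committed templates with a simplified choice function
        # |x AND w| / (beta + |w|) and try them best-first.
        scored = []
        for j, w in enumerate(templates):
            overlap = sum(a & b for a, b in zip(x, w))
            scored.append((overlap / (beta + sum(w)), overlap, j))
        chosen = None
        for _, overlap, j in sorted(scored, reverse=True):
            # Vigilance: the template must cover at least a fraction rho of the input.
            if x_norm > 0 and overlap / x_norm >= rho:
                chosen = j
                break
        if chosen is None:
            # No committed template passes vigilance: the input is stored as a new "leader".
            templates.append(list(x))
            chosen = len(templates) - 1
        else:
            # Fast learning: the template is ANDed with the input, so it can
            # only lose 1-bits, never gain them.
            templates[chosen] = [a & b for a, b in zip(x, templates[chosen])]
        assignments.append(chosen)
    return templates, assignments

The leader-algorithm character lives in the last two branches: either
the input is absorbed by the first stored pattern that passes
vigilance, or it is stored verbatim as a new leader.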

About stability:
   ART 1 is stable because a stored binary pattern can only be changed in
one "direction" (1's can be turned into 0's, but 0's can never be turned
back into 1's), so a stored pattern can never cycle.  Moreover, no two
stored patterns can be the same in ART 1, and since the patterns are
binary strings of fixed length, only finitely many distinct stored
patterns are possible; after some number of presentations of the same
training set, the stored patterns must therefore be fixed.  Note that
ART 1 would *not* be stable for real-valued inputs!
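
   Continuing the same sketch (same assumed art1_cluster and parameter
values), repeatedly presenting one training set shows the argument in
action: every update is a componentwise AND, so a template's 1-bits can
only be turned off, and after finitely many passes nothing changes.

data = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 1, 0]]
templates, _ = art1_cluster(data, rho=0.6)
for passes in range(1, 10):
    new_templates, _ = art1_cluster(data, rho=0.6, templates=templates)
    if new_templates == templates:
        # Bits only turn off, so a fixed point must eventually be reached.
        print(f"templates fixed after {passes} extra pass(es): {templates}")
        break
    templates = new_templates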

   As with any incremental clustering algorithm, different orders of
presentation of input vectors to ART 1 during learning can result in
different clusters.
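
   With the same sketch (the vectors and vigilance value below are again
just illustrative), three binary patterns presented in two different
orders even end up with a different number of clusters:

A = [1, 1, 1, 1, 0, 0, 0, 0]
B = [0, 0, 1, 1, 1, 1, 0, 0]
C = [1, 1, 1, 1, 1, 1, 0, 0]
print(art1_cluster([A, C, B], rho=0.5))  # one template: everything merges
print(art1_cluster([B, A, C], rho=0.5))  # two templates: C no longer fits the shrunken leader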

   It is not necessarily a problem that ART 1 implements the leader
clustering algorithm.  It would be nice, however, if the architects had
made this clear in the somewhat complicated papers that have been
written on the subject.

   It might actually be very interesting that such architectures can
implement clustering algorithms.  It would also be interesting to see
what happens when you relax the constraint that all the underlying
dynamical systems reach equilibrium before the next training input is
presented.  A cleverly designed architecture might behave in a useful
way, or in a biologically plausible way.

(Note: I am not the only one to have made these observations about ART
1, but the presentation in my paper is the clearest that I know of.
The paper is written so that it can be understood by people who aren't
familiar with clustering.)

barb at ai.mit.edu

Please cc me on responses.

