splitting hairs

Gary Cottrell gary%cs at ucsd.edu
Wed Oct 31 11:49:16 EST 1990


Since Steve brought up hair splitting, it seemed like a good
time to send out my latest:

                                     SEMINAR

                   Approaches to the Inverse Dogmatics Problem:
                     Time for a return to localist networks?

                               Garrison W. Cottrell
                            Department of Dog Science
               Condominium Community College of Southern California


          The innovative use of neural networks in  the  field  of  Dognitive
     Science  has  spurred  the  intense  interest  of  the  philosophers  of
     Dognitive Science, the Dogmatists.  The field of Dogmatics is devoted to
     making  sense  of  the  effect  of  neural  networks  on  the conceptual
     underpinnings of  Dognitive  Science.   Unfortunately,  this  flurry  of
     effort  has  caused  researchers  in the rest of the fields of Dognitive
     Science to spend an inordinate amount of time attempting to  make  sense
     of  the  philosophers,  otherwise known as the Inverse Dogmatics problem
     (Jordan, 1990).  The problem seems to  be  that  the  philosophers  have
     allowed  themselves an excess of degrees of freedom in conceptual space,
     as it were, leaving the rest of us with an underconstrained optimization
     problem:  Should we bother listening to these folks, who may be somewhat
     more interesting than old Star Trek reruns, or should we try to get our
     work done?

          The inverse dogmatics problem has become  so  prevalent  that  many
     philosophers  are having to explain themselves daily, much to the dismay
     of the rest of the field.  For example, Gonad[1] (1990a, 1990b, 1990c,
     1990d, 1990e, well, you get the idea...) has repeatedly stated  that  no
     connectionist network can pass his usually Fatal Furring Fest, where the
     model is picked apart, hair by hair[2],  until  the  researchers  making
     counterarguments have long since died[3].  One approach to this  problem
     is to generate a connectionist network that is so hairy (e.g., Pollack's
     RAMS, 1990) that it will outlast Gonad's attempt to pick it apart.
     This  is  done  by  making  a  model  that is at the sub-fur level, that
     recursively splits hairs, RAMming more and more into  each  hair,  which
     generates  a  fractal  representation  that is not susceptible to linear
     hair splitting arguments.

          Another approach is to take Gonad head-on, and try  to  answer  his
     fundamental  question,  that  is,  the  problem of how external discrete
     nuggets get mapped into internal mush.  This is known  as  the *grinding
     problem*.  In  our  approach  to  the  grinding  problem,  we extend our
     previous work on the Dog Tomatogastric Ganglion (TGG).  The  TGG  is  an
     oscillating  circuit  in the dog's motor cortex that controls muscles in
     the dog's stomach that expel tomatoes and other non-dogfood  items  from
     the dog's stomach.  In our grinding network, we will have a similar
     setup, using recurrent bark propagation to train the network to
     oscillate in such a way that muscles in the dog's mouth will grind the
     nuggets into the appropriate internal representation.  This
     representation is completely distributed.  It is then transferred
     directly into the dog's head, or Mush Room.  Thus the thinking done by
     this representation, like most modern distributed representations, is
     not Bayesian, but Hazyian.
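
          (For the terminally curious, here is a minimal numpy sketch of what
     recurrent bark propagation might amount to, if one assumes it works like
     ordinary backpropagation through time applied to a small recurrent net
     trained to oscillate.  Every name, size, and constant below is
     hypothetical.)

# Hypothetical sketch: "recurrent bark propagation" read as plain
# backpropagation through time, training a tiny recurrent net to
# produce a TGG-style grinding rhythm.
import numpy as np

rng = np.random.default_rng(0)
H, T, lr = 8, 100, 0.05                  # hidden units, steps, learning rate
target = 0.8 * np.sin(np.linspace(0, 4 * np.pi, T))  # desired oscillation

W = rng.normal(0, 0.5, (H, H))           # recurrent weights
w_out = rng.normal(0, 0.5, H)            # readout weights

for epoch in range(2000):
    # forward pass: autonomous recurrent dynamics with tanh units
    h = np.zeros((T + 1, H))
    h[0, 0] = 1.0                        # fixed kick to start the rhythm
    y = np.zeros(T)
    for t in range(T):
        h[t + 1] = np.tanh(W @ h[t])
        y[t] = w_out @ h[t + 1]

    # backward pass: squared-error gradients, unrolled in time
    dW, dw_out = np.zeros_like(W), np.zeros_like(w_out)
    dh_next = np.zeros(H)
    for t in reversed(range(T)):
        dy = y[t] - target[t]
        dw_out += dy * h[t + 1]
        dh = dy * w_out + dh_next
        dpre = dh * (1 - h[t + 1] ** 2)  # back through the tanh
        dW += np.outer(dpre, h[t])
        dh_next = W.T @ dpre

    W -= lr * dW / T
    w_out -= lr * dw_out / T

print("final grinding error:", np.mean((y - target) ** 2))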

          If Gonad is not satisfied by this model,  we  have  an  alternative
     approach  to  this  problem.  We have come up with a connectionist model
     that has a *finite* number of things that can be said about it. In order
     to do this we had to revert to a localist model, suggesting there may be
     some use for them after all.  We will  propose  that  all  connectionist
     researchers boycott distributed models until the wave of interest by the
     philosophers passes.  Then we may get back to doing  science.   Thus  we
     must  bring  out some strong arguments in favor of localist models.  The
     first is that they are much more biologically plausible than distributed
     models,  since *just like real neurons*,  the units  themselves are much
     more complicated than those used in simple PDP nets.  Second,  just  like
     the neuroscientists do with horseradish peroxidase, we can label the units
     in our network, a major advantage being that we have  many  more  labels
     than  the neuroscientists have, so we can keep ahead of them.  Third, we
     don't have to learn any more than we did in AI 101, because we  can  use
     all of the same representations.

          As an example of the kind of model we think researchers should turn
     their attention to, we are proposing the logical successor to Anderson &
     Bower's HAM model, SPAM, for SPreading Activation Memory model.  In this
     model, nodes represent language of thought propositions.  Because we are
     doing Dog Modeling, we can restrict ourselves to  at  most  5  primitive
     ACTS:  eat,  sleep,  fight,  play,  make whoopee.  The dog's sequence of
     daily activities  can  then  be  simply  modeled  by  connectivity  that
     sequences   through  these  units,  with  habituation  causing  sequence
     transitions.  A fundamental problem here is, if the dog's brain  can  be
     modeled  by  5  units, *what is the rest of the dog's brain doing?* Some
     have posited that localist networks need multiple copies of every neuron
     for reliability purposes, since if the make whoopee unit were
     traumatized, the dog would no longer be  able  to  make  whoopee.   Thus
     these researchers would posit that the rest of the dog's brain is simply
     made up of copies of these five neurons.  However, we believe we have  a
     more  esthetically pleasing solution to this problem that simultaneously
     solves the size mismatch  problem.   The  problem  is  that  distributed
     connectionists,  when  discussing  the  reliability  problem of localist
     networks, have in mind the wimpy little neurons that distributed  models
     use.   We  predict  that  Dognitive  neuroscientists, when they actually
     look, will find only five neurons in the dog's brain - but they will  be
     *really big* neurons.
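
          (And for completeness, a minimal sketch of SPAM's daily-activity
     cycle, assuming habituation is nothing fancier than a fatigue variable
     on the currently active unit.  The five unit labels come from the
     primitive ACTS above; every rate and threshold below is hypothetical.)

# Hypothetical sketch of SPAM: five localist units wired in a ring,
# with habituation (fatigue) on the active unit forcing the
# transition to its successor.
ACTS = ["eat", "sleep", "fight", "play", "make whoopee"]

activation = [1.0, 0.0, 0.0, 0.0, 0.0]    # the dog starts its day eating
fatigue = [0.0] * 5
HABITUATION_RATE, RECOVERY_RATE, THRESHOLD = 0.2, 0.05, 1.0

day = []
for step in range(40):
    active = max(range(5), key=lambda i: activation[i])
    day.append(ACTS[active])

    # the active unit habituates; the rest recover
    for i in range(5):
        if i == active:
            fatigue[i] += HABITUATION_RATE
        else:
            fatigue[i] = max(0.0, fatigue[i] - RECOVERY_RATE)

    # once habituated, ring connectivity hands activation to the next ACT
    if fatigue[active] >= THRESHOLD:
        activation[active] = 0.0
        activation[(active + 1) % 5] = 1.0
        fatigue[active] = 0.0

print(day)   # e.g. ['eat', 'eat', ..., 'sleep', 'sleep', ...]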

     ____________________
        [1]Some suspect that Gonad may in fact be an agent of reactionary
     forces whose mission is to destroy Dognitive Science by filibuster.
        [2]Thus, by a simple morphophonological process of reduplication,
     exhaustive arguments have been replaced by exhausting arguments.
        [3]In this respect, Gonad's approach resembles that of Pinky and
     Prince, whose exhausting treatment of the Past Fence Model, Rumblephart
     and McNugget's connectionist model of dog escapism, has generated a
     subfield of Dognitive Science composed of people trying to answer
     their arguments.