Connectionists: Why is the neural-to-concept mapping issue being ignored (still)?

Richard Loosemore rloosemore at susaro.com
Sat Nov 10 01:00:36 EST 2018


All,

I have a bit of a bone to pick, if you don't mind.

Why is the question of how concepts relate to neurons so scandalously
incoherent, even after this field has been talking about it for at least
35 years?

In particular, why do so many papers - multiple thousands by now - in
the neuroscience/connectionist nexus make the naive assumption that a
number of neurons (one neuron, a cluster, a distributed cluster, a
sparse group) will correspond to one concept?  So much so that when
those neurons light up in an fMRI scan, or are electrically active,
people declare that the concept must be 'there'?

We have known for a long time that such a naive correspondence has
serious problems.  (I say 'long time' because I remember discussing the
problems with John Taylor in 1981, and certainly Donald Norman raised
them in his chapter at the end of the two PDP volumes.)  For example,
what happens when I create a brand new concept and name it?  Let's say
I decide to give the name "skandulupper" to the concept of "watching
all of Tuesday Weld's movies in one week":  does the human brain
suddenly manage to find a single neuron or cluster that just happens to
have all the right connections to all the phoneme neurons for that
word, and to the concept neurons for "Tuesday Weld", "movie", and
"week"?

(Notice that if the new concept is only hanging around in transient
(working-memory) storage, with permanent storage in long-term memory
being assigned during some kind of overnight WM->LTM consolidation,
that only postpones the awkward question of how the right neurons, with
the right connections, are located.)
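
To make the awkwardness concrete, here is a throwaway Python sketch
(purely my own illustration, with made-up names, nothing from any
published model) of what minting a new localist concept looks like in
software, where allocation is free.  The question is what the analogous
operation could possibly be in fixed neural wiring:

    import numpy as np

    # Purely illustrative: a toy localist lexicon in which one unit
    # (one row of a weight matrix) stands for one concept.
    concepts = ["tuesday_weld", "movie", "week", "watch"]
    weights = np.eye(len(concepts))   # fixed wiring among existing units

    def add_concept(name, parents, concepts, weights):
        """Mint a new localist unit and wire it to its parents.  Trivial
        in software, because software can allocate; the puzzle is what
        the analogous operation would be in fixed neural hardware."""
        concepts = concepts + [name]
        weights = np.vstack([weights, np.zeros((1, weights.shape[1]))])  # new unit
        weights = np.hstack([weights, np.zeros((weights.shape[0], 1))])  # its column
        for p in parents:
            weights[-1, concepts.index(p)] = 1.0   # the "right connections"
        return concepts, weights

    concepts, weights = add_concept(
        "skandulupper", ["tuesday_weld", "movie", "week", "watch"],
        concepts, weights)
    print(concepts[-1], weights[-1])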

And if the answer to that awkward question about new concepts is "Duh!
Distributed representations, of course!", then what do we do about the
next awkward question?  You know the one:  if there is a distributed
representation for a concept like, say, "cook", then how does the
system represent a sentence like "Cook was a good cook, as cooks go;
but as good cooks go, Cook went."?
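
To be concrete about why that sentence is a problem, here is another
throwaway Python sketch (again just an illustration, deliberately the
simplest strawman version of 'distributed') in which a sentence is
represented as nothing more than a superposition of per-word codes:

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 256
    # One fixed random "distributed code" per word type (punctuation dropped).
    vocab = {w: rng.standard_normal(dim)
             for w in "cook was a good as cooks go but went".split()}

    def superpose(tokens):
        """Strawman distributed 'sentence representation': sum the codes."""
        return sum(vocab[t.lower()] for t in tokens)

    sentence = ("Cook was a good cook as cooks go "
                "but as good cooks go Cook went").split()

    # Every token of "Cook"/"cook" contributes the identical vector, and
    # the sum is order-invariant, so the proper name, the occupation, and
    # the plural all smear into one blob.  The binding machinery is
    # exactly the part left unexplained.
    print(np.allclose(superpose(sentence), superpose(sorted(sentence))))  # True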

The problem is that there has never been a good answer to these
issues, so what the community seems to have done instead is to write a
thousand papers describing various neuroscience findings AS IF the
representations are pretty much localist, or sparse, or even
grandmotherly.  Why is it acceptable to publish papers as if a KNOWN
FAILED MODEL were the valid one?  The whole community seems to be
suffering from a collective delusion.

Let's try to be absolutely clear, here.  Localist, semi-localist,
sparse, and distributed representations do not work as accounts of
actual cognition, unless they are used in "toy" models that do nothing
more than enact a fantasy version of what we know the real brain does.
Let's not pretend that, just because a lot of people play with toy
models and a lot of people pretend that the toy models mean something,
the reality somehow changes.  This is especially true of "reinforcement
learning" models that claim to show that the brain does RL: a close
examination shows that these claims hinge on a sudden bait-and-switch
from real neural wiring to a toy model applied to trivial data, which
could not possibly scale up.

Now, it so happens that there really is a way to imagine a solution to
this problem, at the theoretical level.  One simply has to accept that
the neural machinery is set up in such a way that the structures
corresponding to concepts are neither localist nor distributed, but
virtual.  That is, concepts are to the neural hardware what programs in
a computer network are to the physical computers:  concepts are allowed
to have two states (active and dormant), and when active they have most
of the properties of an old-school symbol-processing system.  Any
version of this virtual-concept idea would solve the problems inherent
in assuming that neurons correspond to concepts.
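
For concreteness, here is one more throwaway Python sketch (a cartoon
of the idea, not a model of anything) in which concepts are structures
instantiated on demand over a fixed substrate, each one either active
or dormant:

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        name: str
        links: dict = field(default_factory=dict)  # symbol-style relations
        active: bool = False                       # the two states: active vs. dormant

    class VirtualConceptSystem:
        """Concepts are instantiated on demand over the substrate, the way
        processes are instantiated over fixed hardware; no unit 'is' the
        concept."""
        def __init__(self):
            self.dormant = {}                      # store of currently dormant concepts

        def define(self, name, **links):
            # Minting a new concept is just instantiation; no pre-wired
            # unit has to be found.
            self.dormant[name] = Concept(name, dict(links))

        def activate(self, name):
            c = self.dormant[name]
            c.active = True                        # while active it behaves like a symbol
            return c

    vcs = VirtualConceptSystem()
    vcs.define("skandulupper", is_a="activity",
               involves=["Tuesday Weld", "movie", "week"])
    c = vcs.activate("skandulupper")
    print(c.name, c.links["involves"], c.active)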

Notice that we do not have to produce a fully functional simulation of
such a virtual-concept system for it to be a valid competitor to
localist, semi-localist, sparse, and distributed representations.  The
latter are known failures, and "simulations" of them are invariably toy
models that cannot scale up.  In that context, it is enough to point
out that the basic properties of a virtual-concept type of model are
already superior to the idea that representations are localist,
semi-localist, sparse, or distributed.  Calling something a
"virtual-concept" system is almost the same as saying that it is a
variety of symbolic system ... and there were many dozens of symbolic
systems in the 1970s, 80s, and 90s that were fully implemented, and
that clearly did not suffer from the problems shown by localist,
semi-localist, sparse, and distributed neural representations.

--

I have a personal reason for raising this question.

Trevor Harley and I tried to raise this question in a paper we wrote in
2010 [1].  Long story short, we said "Look, any kind of virtual-concept
system is the NEXT SIMPLEST theoretical construct up from the two known
failed constructs (localist and distributed), so let's see what would
happen if some version of that virtual-concept idea turns out to be the
way things are."  In particular, we asked two questions:  (1) Would the
arguments and conclusions in a selection of popular
brain-imaging/neuroscience papers actually make any sense, if the brain
really was using this next-best type of system?, and (2) Would the
virtual-concept idea give a better account of any of these published
neuroscience results?

We expanded a little on what the virtual-concept idea might actually
mean, so our readers would have a concrete feel for the overall
implications.  Then we analysed the chosen set of papers.  Our
conclusions were that most of the arguments in the papers fell to pieces
if virtual concepts were what the brain was doing, and that the
virtual-concept idea gave a better account of many of the published results.

And then this happened.  Our paper was a book chapter, and in the same
book William Bechtel and Richard C. Richardson decided to go on the
offensive and trash every claim we tried to make, with language like:

"Loosemore and Harley [...] evidently do not think of this as a serious
model of cognition, but as a kind of toy structure [...]  The model is
presented very sketchily, with no empirical backing and no substantive
constraints. Loosemore and Harley [...] offer no detailed results to
demonstrate that the model could accommodate even the most accepted
empirical results about recognition or control. [...] This, however, is
a cartoon of serious science [...]  If Loosemore and Harley were right,
then evolutionary biologists would need consistently to defend
themselves against Creationist contentions"

Words cannot express my feelings here.  What kind of science is this,
when someone is compared to a Creationist for discussing a model that,
quite simply, is the only kind of model with any hope of getting past
the known problems of the toy models that are currently accepted as the
norm?


Richard Loosemore.




[1]  Loosemore, R.P.W. & Harley, T.A. (2010). Brains and Minds:  On the
Usefulness of Localization Data to Cognitive Psychology. In M. Bunzl &
S.J. Hanson (Eds.), Foundational Issues in Human Brain Mapping.
Cambridge, MA: MIT Press.
