Connectionists: Why is the neural-to-concept mapping issue being ignored (still)?

Asim Roy ASIM.ROY at asu.edu
Sun Nov 11 02:17:55 EST 2018


Dear Richard,

Frontiers recently published an e-book on “Representation in the Brain.” Here’s the link:
https://www.frontiersin.org/research-topics/4398/representation-in-the-brain

That adds eleven more articles to the thousands you mention. Some of the papers argue for localist representation, while others argue for distributed representation.

I would argue that localist representation is synonymous with symbolic systems, and that localist representation is widely used in the brain. My paper in the e-book argues that the brain is a purely abstract system at the single-cell (neuron) level, and that there is plenty of neurophysiological evidence for this from single-cell studies. An abstract system, in turn, is synonymous with a symbolic system. Here's the link to the paper:
The Theory of Localist Representation and of a Purely Abstract Cognitive System: The Evidence from Cortical Columns, Category Cells, and Multisensory Neurons<https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00186/full>

Here are some related papers:
A theory of the brain: localist representation is used widely in the brain<https://www.frontiersin.org/articles/10.3389/fpsyg.2012.00551/full>
An extension of the localist representation theory: grandmother cells are also widely used in the brain<https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00300/full>

Among neuroscientists, Horace Barlow, great-grandson of Darwin (https://en.wikipedia.org/wiki/Horace_Barlow), is perhaps the only one who has argued for grandmother cells in the brain.

We have had discussions on these issues in small groups at various times over the last few years. And Barlow was in one of those discussion groups. If there is interest, we can again form a small discussion group to discuss the issues you have raised. I think there is much evidence from single cell studies to claim that the brain is a symbolic system. So, as I see it, there is perhaps some convergence on this issue.

All the best,
Asim Roy
Professor, Information Systems
Arizona State University
http://www.lifeboat.com/ex/bios.asim.roy



From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On Behalf Of Richard Loosemore
Sent: Friday, November 09, 2018 11:01 PM
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Connectionists: Why is the neural-to-concept mapping issue being ignored (still)?


All,

I have a bit of a bone to pick, if you don't mind.

Why is the question of how concepts relate to neurons so scandalously incoherent, even after this field has been talking about it for at least 35 years?

In particular, why do so many papers - multiple thousands, by now - in the neuroscience/connectionist nexus make the naive assumption that some number of neurons (one neuron, a cluster, a distributed cluster, a sparse group) will correspond to one concept?  So much so that when those neurons light up in an fMRI, or are electrically active, people declare that the concept must be 'there'.

We have known for a long time that such a naive correspondence has serious problems.  (I say 'long time' because I remember discussing the problems with John Taylor in 1981, and certainly Donald Norman raised them in his chapter at the end of the two PDP volumes.)  For example, what happens when I create a brand new concept and name it?  Let's say I decide to give the name "skandulupper" to the concept of "watching all of Tuesday Weld's movies in one week":  does the human brain suddenly manage to find a single neuron or a cluster that just happens to have all the right connections to all the phoneme neurons for that word, and to the concept neurons for "Tuesday Weld", "movie", and "week"?

(Notice that if the new concept is only hanging around in transient (working memory) storage, with permanent storage in long term memory being assigned during some kind of overnight WM->LTM consolidation, that really only postpones the awkward questions about how the right neurons, with the right connections, are located.)
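
To make the recruitment problem concrete, here is a deliberately toy Python sketch (the class and names are invented purely for illustration, a toy in exactly the sense I complain about below).  In a strictly localist scheme, coining "skandulupper" works only if a fresh unit can be found, at the moment of naming, that is already wired to every constituent:

class LocalistNetwork:
    def __init__(self):
        self.units = {}          # concept name -> set of connected concepts

    def add_concept(self, name, constituents):
        # The awkward step: a fresh unit must acquire, at the moment of
        # naming, connections to every constituent concept.  Where does
        # a unit with exactly these connections come from?
        missing = [c for c in constituents if c not in self.units]
        if missing:
            raise ValueError(f"no pre-existing units for: {missing}")
        self.units[name] = set(constituents)

net = LocalistNetwork()
for base in ["Tuesday Weld", "movie", "week", "watch"]:
    net.add_concept(base, [])

# Coining the new concept succeeds only because we allow instant,
# arbitrary rewiring, which is exactly the assumption being questioned.
net.add_concept("skandulupper", ["Tuesday Weld", "movie", "week", "watch"])

Postponing the wiring step to overnight consolidation, as noted above, only moves that step later in time; it does not remove it.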

And if the answer to that awkward question about new concepts is "Duh!  Distributed representations, of course!", then what do we do about the next awkward question?  You know the one:  if there is a distributed representation for a concept like, say "cook", then how does the system represent a sentence like "Cook was a good cook, as cooks go; but as good cooks go, Cook went."?
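
Here is a toy numpy illustration of that binding problem (the vectors and role names are invented for demonstration, in the style of vector-symbolic architectures, not a claim about any specific published model):

import numpy as np

rng = np.random.default_rng(0)
d = 2048
vec = lambda: rng.choice([-1.0, 1.0], size=d)    # random bipolar patterns
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

dog, bites, man = vec(), vec(), vec()

# 1. Bare superposition: word order and roles vanish entirely.
print(cos(dog + bites + man, man + bites + dog))   # exactly 1.0

# 2. Role-filler binding (elementwise product, as in some
#    vector-symbolic architectures) recovers role information:
agent, patient = vec(), vec()
s1 = agent * dog + bites + patient * man           # "dog bites man"
s2 = agent * man + bites + patient * dog           # "man bites dog"
print(cos(s1, s2))                                 # ~0.33: now distinguishable

# 3. But if the name "Cook" and the noun "cook" share one distributed
#    pattern, the two readings of "Cook praised the cook" bind the same
#    filler to both roles and collapse into the identical composite:
cook = vec()
print(cos(agent * cook + patient * cook,
          agent * cook + patient * cook))          # exactly 1.0
# Distinguishing the tokens requires per-token individuation, which is
# extra machinery beyond "a distributed pattern is the concept".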

The problem is that there has never been a good answer to these questions, so what the community seems to have done instead is write thousands of papers describing neuroscience findings AS IF the representations were pretty much localist, or sparse, or even grandmotherly.  Why is it acceptable to publish papers as if a KNOWN FAILED MODEL were the valid one?  The whole community seems to be suffering from a collective delusion.

Let's try to be absolutely clear, here.  Localist, semi-localist, sparse, and distributed representations do not work as accounts of actual cognition, unless they are used in "toy" models that do nothing more than a fantasy version of what we know the real brain does.  Let's not pretend that just because a lot of people play with toy models, and pretend that the toy models mean something, the reality somehow changes.  This is especially true of "reinforcement learning" models that claim to show that the brain does RL: close examination shows that these claims hinge on a sudden bait-and-switch from real neural wiring to a toy model applied to trivial data, one that could not possibly scale up.

Now, it so happens that there really is a way to imagine a solution to this problem, at the theoretical level.  One simply has to accept that the neural machinery is set up in such a way that the structures corresponding to concepts are neither localist nor distributed, but virtual.  That means that concepts are to the neural hardware what programs in a computer network are to the physical computers: concepts are allowed to have two states (active and dormant), and when active they have most of the properties of an old-school symbol-processing system.  Any version of this virtual-concept idea would solve the problems inherent in assuming that neurons correspond to concepts.
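
Just to make the programs-on-computers analogy concrete, here is a cartoonishly simple Python sketch (again, the details are purely illustrative, not a serious model): concepts persist as stored structure, and a pool of generic processing units hosts whichever concepts happen to be active, so no physical unit has a fixed concept assignment.

from dataclasses import dataclass, field

@dataclass
class Concept:                       # dormant form: just stored structure
    name: str
    relations: dict = field(default_factory=dict)

class ProcessorPool:
    """Generic hardware units; any unit can host any concept."""
    def __init__(self, n_units):
        self.free = list(range(n_units))
        self.hosting = {}            # unit id -> currently active Concept

    def activate(self, concept):
        unit = self.free.pop()       # any free unit will do
        self.hosting[unit] = concept
        return unit

    def deactivate(self, unit):
        self.hosting.pop(unit)
        self.free.append(unit)

pool = ProcessorPool(n_units=8)
cook_person = Concept("Cook", {"type": "person"})
cook_role = Concept("cook", {"type": "occupation"})

# Two tokens of "cook" occupy two different units, and the same concept
# could occupy a different unit tomorrow: which physical unit lights up
# tells you nothing stable about which concept is "there".
u1 = pool.activate(cook_person)
u2 = pool.activate(cook_role)
print(u1, pool.hosting[u1].name, u2, pool.hosting[u2].name)

The point is only the mapping: permanent conceptual structure lives apart from the transient hardware that animates it, which is why no neuron-counting scheme will find 'the' concept neuron.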

Notice that we do not have to produce a fully-functional simulation of such a virtual-concept system, for it to be a valid competitor to localist, semi-localist, sparse, and distributed representations.  The latter are known failures, and "simulations" of them are invariably toy models that cannot scale up.  In that context, it is enough to point out that the basic properties of a virtual-concept type of model are already superior to the idea that representations are localist, semi-localist, sparse, or distributed.  Calling something a "virtual-concept" system is almost the same as saying that it is a variety of symbolic system ... and there were many dozens of symbolic systems in the 1970s, 80s and 90s that were fully implemented, and that clearly did not suffer the problems shown by localist, semi-localist, sparse, and distributed neural representations.

--

I have a personal reason for raising this question.

Trevor Harley and I tried to raise this question in a paper we wrote in 2010 [1].  Long story short, we said "Look, any kind of virtual-concept system is the NEXT SIMPLEST theoretical construct up from the two known failed constructs (localist and distributed), so let's see what would happen if some version of that virtual-concept idea turns out to be the way things are."  In particular, we asked two questions:  (1) Would the arguments and conclusions in a selection of popular brain-imaging/neuroscience papers actually make any sense, if the brain really was using this next-best type of system?, and (2) Would the virtual-concept idea give a better account of any of these published neuroscience results?

We expanded a little on what the virtual-concept idea might actually mean, so our readers would have a concrete feel for the overall implications.  Then we analysed the chosen set of papers.  Our conclusions were that most of the arguments in the papers fell to pieces if virtual concepts were what the brain was doing, and that the virtual-concept idea gave a better account of many of the published results.

And then this happened.  Our paper was a book chapter, and in the same book William Bechtel and Richard C. Richardson decided to go on the offensive and trash every claim we tried to make, with language like:

"Loosemore and Harley [...] evidently do not think of this as a serious model of cognition, but as a kind of toy structure [...]  The model is presented very sketchily, with no empirical backing and no substantive constraints. Loosemore and Harley [...] offer no detailed results to demonstrate that the model could accommodate even the most accepted empirical results about recognition or control. [...] This, however, is a cartoon of serious science [...]  If Loosemore and Harley were right, then evolutionary biologists would need consistently to defend themselves against Creationist contentions"

Words cannot express my feelings, here.  What kind of science is this, when someone is compared to a Creationist for discussing a model that, quite simply, is the only kind of model that has any hope of getting out of the known problems of the toy models that are currently accepted as the norm?


Richard Loosemore.




[1]  Loosemore, R.P.W. & Harley, T.A. (2010). Brains and Minds:  On the Usefulness of Localization Data to Cognitive Psychology. In M. Bunzl & S.J. Hanson (Eds.), Foundational Issues in Human Brain Mapping. Cambridge, MA: MIT Press.