Connectionists: neurons, grandmothers, symbols and pi.

Weng, Juyang weng at msu.edu
Thu Feb 22 17:56:26 EST 2024


    Somebody wrote, "Deeply disappointing that someone would try to inject actual empirical evidence into this discussion."
    I agree.  If we are unclear about the mathematical definition, we waste time here.
    When I said "There is no neuron (or set of neurons) that has a one-to-one correspondence with a symbol", I meant that
without a definition of "one-to-one correspondence", the neuron is NOT the symbol neuron you have in mind.
     From Wikipedia:
      A bijection, bijective function, or one-to-one correspondence between two mathematical sets is a function such that each element of the second set (the codomain) is mapped to from exactly one element of the first set (the domain). Equivalently, a bijection is a relation between two sets such that each element of either set is paired with exactly one element of the other set.
    In other words, one-to-one correspondence is "one-to-one" and "onto".
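For concreteness, the definition above can be checked mechanically for finite sets. The sketch below is my own illustration, not part of the original exchange: it tests whether a finite mapping (e.g., a hypothetical symbol-to-neuron assignment) is a bijection.

```python
def is_bijection(mapping, domain, codomain):
    """Return True iff `mapping` (a dict) is a one-to-one correspondence
    between the finite sets `domain` and `codomain`."""
    # Total function: every domain element must be mapped.
    if set(mapping) != set(domain):
        return False
    images = list(mapping.values())
    # "One-to-one": no two domain elements share an image.
    # "Onto": every codomain element is some element's image.
    return len(images) == len(set(images)) and set(images) == set(codomain)

# A symbol-to-neuron assignment would be a bijection only if each symbol
# maps to exactly one neuron and every neuron is mapped by exactly one symbol.
print(is_bijection({"pi": 1, "e": 2}, {"pi", "e"}, {1, 2}))   # True
print(is_bijection({"pi": 1, "e": 1}, {"pi", "e"}, {1, 2}))   # False: not one-to-one
```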
    If we are not careful, we slip into notions like "grandmother cells" without a definition.  Because there are no symbol neurons,
there can be no grandmother cells in the brain.
    Why?  All neurons in the brain detect a spatiotemporal feature of the sensory space and the motor space.
    If you have read my book, NAI, you understand what this means and why it must be the case: the brain must not have a "government" inside the skull.
    Best regards,
-John


________________________________
From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Gary Marcus <gary.marcus at nyu.edu>
Sent: Thursday, February 22, 2024 12:44 PM
To: Jeffrey Bowers <J.Bowers at bristol.ac.uk>
Cc: connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: Connectionists: neurons, grandmothers, symbols and pi.

Generally agree with Jeff, but would add this to the general discussion:

We need to be careful about what question we are asking here. Mathematicians, for example, represent and manipulate 𝝅, both symbolically and (with some precision) numerically. We don’t know how the brain instantiates either of those representations. But it is absurd to presume that, since we don’t understand how either of those representations is instantiated neurally, there is no symbolic or numeric representation thereof.

Physically, the symbolic representation of pi presumably occupies the labors of multiple neurons, but in some contexts those neurons may work together as a single logical whole, and it is possible that some of the neurons involved reach threshold only when pi is being processed.

Gary

On Feb 22, 2024, at 07:32, Jeffrey Bowers <J.Bowers at bristol.ac.uk> wrote:



Good point, I should not have used simple cells as an example of grandmother cells.  In fact, I agree that some sort of population coding likely supports our perception of orientation.  For example, simple cells are tuned to orientations in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells that drives our perception.



The other reason I should not have used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat).  Orientation is a continuous dimension where distributed coding may be more suitable.  The better example I gave is the word representation DOG in the IA model.  The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG is locally coded; indeed, it was hand-wired to be localist.  In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston.  I agree it is not strong evidence for a grandmother cell – there may be other images that drive the neuron even more; we just don’t know, given the limited number of images presented to the patient.  But it is interesting that there are various demonstrations that artificial networks learn grandmother cells under some conditions – when you can test the model on all the familiar categories it has seen.  So, I would not rule grandmother cells out of hand.



Jeff



From: KENTRIDGE, ROBERT W. <robert.kentridge at durham.ac.uk>
Date: Wednesday, 21 February 2024 at 20:56
To: Jeffrey Bowers <J.Bowers at bristol.ac.uk>, Gary Marcus <gary.marcus at nyu.edu>, Laurent Mertens <laurent.mertens at kuleuven.be>
Cc: connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Again, it is great to be examining the relationship between ‘real’ neural coding and the ins and outs of representation in ANNs. I’m really pleased to be able to make a few contributions to a list which I’ve lurked on since the late 1980s!



I feel I should add an alternative interpretation of orientation coding in primary visual cortex to the one so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation-tuned cells as labelled lines or grandmother cells, where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view is that orientation is represented by a population code: the vector sum of the orientation preferences of cells with many different tunings, weighted by their levels of activity, and it is this population code that is read by areas that consume orientation information. The notion of neural population coding was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people’s confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision, https://jov.arvojournals.org/article.aspx?articleid=2552242), where a person’s judgement is indicative of the state of a high-level consumer of orientation information.



So again, I’d err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it isn’t our ability to interpret things like this that matters; rather, it is the way the rest of the brain interprets the information delivered by primary visual cortex.



cheers,



Bob





Professor of Psychology, University of Durham.

Durham PaleoPsychology Group.

Durham Centre for Vision and Visual Cognition.

Durham Centre for Visual Arts and Culture.



Fellow.

Canadian Institute for Advanced Research,

Brain, Mind & Consciousness Programme.







Department of Psychology,

University of Durham,

Durham DH1 3LE, UK.



p: +44 191 334 3261

f: +44 191 334 3434











From: Jeffrey Bowers <J.Bowers at bristol.ac.uk>
Date: Wednesday, 21 February 2024 at 12:31
To: KENTRIDGE, ROBERT W. <robert.kentridge at durham.ac.uk>, Gary Marcus <gary.marcus at nyu.edu>, Laurent Mertens <laurent.mertens at kuleuven.be>
Cc: connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

It is possible to define grandmother cells in a way that makes them falsifiable.  For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category.  Another, more plausible definition is single neurons that only *represent* one category.  In psychology there are “localist” models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG).  A feature of localist codes is that they are partly activated by similar inputs: a DOG detector is partly activated by the input HOG by virtue of sharing two letters.  But that partial activation of the DOG unit by HOG is no evidence against a localist or grandmother-cell representation of the word DOG in the IA model.  Just as a simple cell tuned to vertical lines is partly activated by a line 5 degrees off vertical – that does not undermine the hypothesis that the simple cell *represents* vertical lines.   I talk about the plausibility of grandmother cells and discuss the Aniston cells in a paper I wrote some time back:
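The partial-activation property of localist codes can be shown with a toy sketch. This is my own hypothetical illustration of the letter-overlap point, not the actual IA model equations:

```python
def localist_activation(word, detectors):
    """Toy localist layer: each word unit's activation is the fraction of
    letters it shares, position by position, with the input word. This
    captures only the partial-activation point, not the IA model's dynamics."""
    return {w: sum(a == b for a, b in zip(word, w)) / len(w) for w in detectors}

acts = localist_activation("HOG", ["DOG", "CAT", "HOG"])
# HOG shares two of three letters with DOG, so the DOG unit is partly
# active (about 0.67) even though it represents only the word DOG;
# CAT gets 0.0 and HOG itself gets 1.0.
print(acts)
```

The partial activation of DOG by HOG is a feature of the code, not evidence against the DOG unit being a localist representation.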



Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychological review, 116(1), 220.





From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of KENTRIDGE, ROBERT W. <robert.kentridge at durham.ac.uk>
Date: Wednesday, 21 February 2024 at 11:48
To: Gary Marcus <gary.marcus at nyu.edu>, Laurent Mertens <laurent.mertens at kuleuven.be>
Cc: connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

I agree – empirical evidence is just what we need in this super-interesting discussion.



I should point out a few things about the Quiroga et al. (2005) ‘Jennifer Aniston cell’ finding (Nature, 435, 1102-1107).



Quiroga et al. themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals, they were not ‘grandmother cells’ as defined by Jerry Lettvin – that is, specific cells that respond to a broad range of depictions of an individual and *only* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active.



The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes (including Halle Berry as Catwoman for the Halle Berry cell), but also names presented as text (e.g. ‘HALLE BERRY’). Quiroga et al. only presented stimuli representing a relatively small range of individuals, and so it is unsafe to conclude that the cells they found respond *only* to the specific individuals reported. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in ‘Friends’ along with Jennifer Aniston.



So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells standing for specific symbols.



cheers



Bob
















From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Gary Marcus <gary.marcus at nyu.edu>
Date: Wednesday, 21 February 2024 at 05:49
To: Laurent Mertens <laurent.mertens at kuleuven.be>
Cc: connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Deeply disappointing that someone would try to inject actual empirical evidence into this discussion. 😂



On Feb 20, 2024, at 08:41, Laurent Mertens <laurent.mertens at kuleuven.be> wrote:



Reacting to your statement:

"However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol."



What about the Grandmother/Jennifer Aniston/Halle Berry neuron?

(See, e.g., https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013)



KR,

Laurent



________________________________

From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Weng, Juyang <weng at msu.edu>
Sent: Monday, February 19, 2024 11:11 PM
To: Michael Arbib <arbib at usc.edu>; connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI



Dear Michael,

    You wrote, "Your brain did not deal with symbols?"

    I have my Conscious Learning (DN-3) model that tells me:
    My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effectors.

     However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol.   In this sense,  the brain does not have any symbol in the skull.

    This is my educated hypothesis.  The DN-3 brain does not need any symbol inside the skull.

    In this sense, almost all neural network models are flawed as models of the brain, as long as they have a block diagram where each block corresponds to a functional concept in the extra-body world.  I am sorry to say this, though it may make me many enemies.

    Best regards,

-John

________________________________

From: Michael Arbib <arbib at usc.edu>
Sent: Monday, February 19, 2024 1:28 PM
To: Weng, Juyang <weng at msu.edu>; connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: RE: Connectionists: Early history of symbolic and neural network approaches to AI



So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols?



From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On Behalf Of Weng, Juyang
Sent: Monday, February 19, 2024 8:07 AM
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Connectionists: Early history of symbolic and neural network approaches to AI



I do not agree with Newell and Simon if they wrote that.   Otherwise, images and video are also symbols.  They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols.

