Connectionists: Early history of symbolic and neural network approaches to AI
Tsvi Achler
achler at gmail.com
Wed Feb 21 13:50:50 EST 2024
I am glad we are circling back to a discussion of grandmother cells.
I don't think we will reach a conclusion here, but here are a few thoughts to
consider:
Classical feedforward methods (and their associated gradient-descent
training paradigm) do not seem flexible enough to easily model how there
can be:
1) grandmother cells and neuron redundancy at the same time;
2) a way to describe input expectations, a "likelihood" if you will;
3) truly one-shot learning of grandmother cells, i.e. learning that does not
require i.i.d. rehearsal to avoid catastrophic forgetting (see the sketch
below).
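To make point 3 concrete, here is a minimal sketch (a toy of my own, with
made-up data and parameters, not taken from any model discussed in this
thread): a softmax classifier trained by gradient descent on two classes and
then updated on a third class alone, with no i.i.d. rehearsal, typically loses
most of its accuracy on the earlier classes.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes learned first, a third "grandmother" class arrives later.
def make_class(center, n=200):
    return rng.normal(center, 0.3, size=(n, 2))

X_old = np.vstack([make_class([0.0, 0.0]), make_class([2.0, 0.0])])
y_old = np.array([0] * 200 + [1] * 200)
X_new, y_new = make_class([1.0, 2.0]), np.array([2] * 200)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(W, b, X, y, steps=500, lr=0.5):
    # Plain gradient descent on the cross-entropy loss.
    for _ in range(steps):
        g = softmax(X @ W + b)
        g[np.arange(len(y)), y] -= 1.0
        W -= lr * X.T @ g / len(y)
        b -= lr * g.mean(axis=0)
    return W, b

def accuracy(W, b, X, y):
    return (softmax(X @ W + b).argmax(axis=1) == y).mean()

W, b = np.zeros((2, 3)), np.zeros(3)
W, b = train(W, b, X_old, y_old)    # phase 1: classes 0 and 1
print("old-class accuracy before:", accuracy(W, b, X_old, y_old))
W, b = train(W, b, X_new, y_new)    # phase 2: class 2 only, no rehearsal
print("old-class accuracy after: ", accuracy(W, b, X_old, y_old))  # typically collapses
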
Sincerely,
-Tsvi
On Wed, Feb 21, 2024 at 4:43 AM Jeffrey Bowers <J.Bowers at bristol.ac.uk>
wrote:
> It is possible to define grandmother cells in a way that falsifies them.
> For instance, defining grandmother cells as single neurons that only
> *respond* to inputs from one category. Another, more plausible, definition
> is single neurons that only *represent* one category. In
> psychology there are “localist” models that have single units that
> represent one category (e.g., there is a unit in the Interactive Activation
> Model that codes for the word DOG). And a feature of localist codes is
> that they are partly activated by similar inputs. So a DOG detector is
> partly activated by the input HOG by virtue of sharing two letters. But
> that partial activation of the DOG unit from HOG is no evidence against a
> localist or grandmother cell representation of the word DOG in the IA
> model. Just as a simple cell tuned to vertical lines is partly activated by
> a line 5 degrees off vertical, that does not undermine the hypothesis that
> the simple cell *represents* vertical lines. I discuss the plausibility of
> grandmother cells, including the Aniston cells, in a paper I wrote some
> time back:
>
>
>
> Bowers, J. S. (2009). On the biological plausibility of grandmother cells:
> Implications for neural network theories in psychology and neuroscience.
> *Psychological Review*, *116*(1), 220.
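To make the partial-activation point above concrete, here is a toy localist
sketch (the words and scoring are mine and purely illustrative, not the
Interactive Activation Model's published parameters or dynamics): a DOG unit
is partly activated by the input HOG through position-specific letter
overlap, while still representing only the word DOG.

# Toy localist word units: each unit's activation is the fraction of letters
# the input shares with that unit's word, position by position (illustrative
# only; not the Interactive Activation Model's actual dynamics).
WORDS = ["DOG", "HOG", "DOT", "CAT"]

def word_activation(word_unit, letters):
    matches = sum(1 for a, b in zip(word_unit, letters) if a == b)
    return matches / len(word_unit)

for word in WORDS:
    print(word, "unit activation for input HOG:", round(word_activation(word, "HOG"), 2))
# DOG is partly activated (0.67) by HOG via the shared O and G, yet the DOG
# unit still represents only the word DOG.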
>
> *From: *Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on
> behalf of KENTRIDGE, ROBERT W. <robert.kentridge at durham.ac.uk>
> *Date: *Wednesday, 21 February 2024 at 11:48
> *To: *Gary Marcus <gary.marcus at nyu.edu>, Laurent Mertens <
> laurent.mertens at kuleuven.be>
> *Cc: *connectionists at mailman.srv.cs.cmu.edu <
> connectionists at mailman.srv.cs.cmu.edu>
> *Subject: *Re: Connectionists: Early history of symbolic and neural
> network approaches to AI
>
> I agree – empirical evidence is just what we need in this
> super-interesting discussion.
>
>
>
> I should point out a few things about the Quiroga et al. (2005) ‘Jennifer
> Aniston cell’ finding (*Nature*, *435*, 1102–1107).
>
>
>
> Quiroga et al. themselves are at pains to point out that, whilst the cells
> they found responded to a wide variety of depictions of specific
> individuals, they were not ‘grandmother cells’ as defined by Jerry Lettvin,
> that is, specific cells that respond to a broad range of depictions of an
> individual and **only** of that individual, meaning that one can infer
> that this individual is being perceived, thought of, etc., whenever that
> cell is active.
>
>
>
> The cells Quiroga et al. found do, indeed, respond to remarkably diverse
> ranges of stimuli depicting individuals, including not just photos in
> different poses, at different ages, and in different costumes (including
> Halle Berry as Catwoman for the Halle Berry cell), but also names presented
> as text (e.g. ‘HALLE BERRY’). Quiroga et al. presented stimuli representing
> only a relatively small range of individuals, so it is unsafe to conclude
> that the cells they found respond **only** to those specific individuals.
> Indeed, they report that the Jennifer Aniston cell also responded strongly
> to an image of a different actress, Lisa Kudrow, who appeared in ‘Friends’
> along with Jennifer Aniston.
>
>
>
> So, the empirical evidence is still on the side of activity in sets of
> neurons as representing specific symbols (including those standing for
> specific individuals) rather than individual cells standing for specific
> symbols.
>
>
>
> cheers
>
>
>
> Bob
>
>
>
>
>
>
> Professor of Psychology, University of Durham.
>
> Durham PaleoPsychology Group.
>
> Durham Centre for Vision and Visual Cognition.
>
> Durham Centre for Visual Arts and Culture.
>
>
>
>
> Fellow.
>
> Canadian Institute for Advanced Research,
>
> Brain, Mind & Consciousness Programme.
>
> Department of Psychology,
>
> University of Durham,
>
> Durham DH1 3LE, UK.
>
>
>
> p: +44 191 334 3261
>
> f: +44 191 334 3434
>
>
> *From: *Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on
> behalf of Gary Marcus <gary.marcus at nyu.edu>
> *Date: *Wednesday, 21 February 2024 at 05:49
> *To: *Laurent Mertens <laurent.mertens at kuleuven.be>
> *Cc: *connectionists at mailman.srv.cs.cmu.edu <
> connectionists at mailman.srv.cs.cmu.edu>
> *Subject: *Re: Connectionists: Early history of symbolic and neural
> network approaches to AI
>
>
> Deeply disappointing that someone would try to inject actual empirical
> evidence into this discussion. 😂
>
>
>
> On Feb 20, 2024, at 08:41, Laurent Mertens <laurent.mertens at kuleuven.be>
> wrote:
>
>
>
> Reacting to your statement:
>
> "However, inside the skull of my brain, there are not any neurons that
> have a one-to-one correspondence to the symbol."
>
>
>
> What about the Grandmother/Jennifer Aniston/Halle Berry neuron?
>
> (See, e.g.,
> https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013
> )
>
>
>
> KR,
>
> Laurent
>
>
> ------------------------------
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on
> behalf of Weng, Juyang <weng at msu.edu>
> *Sent:* Monday, February 19, 2024 11:11 PM
> *To:* Michael Arbib <arbib at usc.edu>; connectionists at mailman.srv.cs.cmu.edu
> <connectionists at mailman.srv.cs.cmu.edu>
> *Subject:* Re: Connectionists: Early history of symbolic and neural
> network approaches to AI
>
>
>
> Dear Michael,
>
> You wrote, "Your brain did not deal with symbols?"
>
> I have my Conscious Learning (DN-3) model that tells me:
> My brain "deals with symbols" that are sensed from the extra-body
> world by the brain's sensors and effectors.
>
> However, inside the skull of my brain, there are not any neurons that
> have a one-to-one correspondence to the symbol. In this sense, the brain
> does not have any symbol in the skull.
>
> This is my educated hypothesis. The DN-3 brain does not need any
> symbol inside the skull.
>
> In this sense, almost all neural network models are flawed as models of
> the brain, as long as they have a block diagram where each block corresponds
> to a function concept in the extra-body world. I am sorry to say this, which
> may make me many enemies.
>
> Best regards,
>
> -John
> ------------------------------
>
> *From:* Michael Arbib <arbib at usc.edu>
> *Sent:* Monday, February 19, 2024 1:28 PM
> *To:* Weng, Juyang <weng at msu.edu>; connectionists at mailman.srv.cs.cmu.edu <
> connectionists at mailman.srv.cs.cmu.edu>
> *Subject:* RE: Connectionists: Early history of symbolic and neural
> network approaches to AI
>
>
>
> So you believe that, as you wrote out these words, the neural networks in
> your brain did not deal with symbols?
>
>
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> *On
> Behalf Of *Weng, Juyang
> *Sent:* Monday, February 19, 2024 8:07 AM
> *To:* connectionists at mailman.srv.cs.cmu.edu
> *Subject:* Connectionists: Early history of symbolic and neural network
> approaches to AI
>
>
>
> I do not agree with Newell and Simon if they wrote that. Otherwise,
> images and video would also be symbols. They probably were not sophisticated
> enough in 1976 to realize why neural networks in the brain should not
> contain or deal with symbols.
>
>
>
>