<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:Aptos;
panose-1:2 11 0 4 2 2 2 2 2 4;}
@font-face
{font-family:"Apple Color Emoji";
panose-1:0 0 0 0 0 0 0 0 0 0;}
@font-face
{font-family:"Lucida Grande";
panose-1:2 11 6 0 4 5 2 2 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:10.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
span.apple-converted-space
{mso-style-name:apple-converted-space;}
span.EmailStyle240
{mso-style-type:personal-reply;
font-family:"Aptos",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;
mso-ligatures:none;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple" style="word-wrap:break-word">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Dear Danny,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Thanks for your kind comments about my recent article on how children learn language meanings.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">I agree that LLMs plus audio and visual information are better than those without them.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">On the other hand, if one’s goal is incrementally achieve anything like biological INTELLIGENCE, then one really needs self-organizing models that can learn incrementally about
changing environments in real time, and can do so quickly without suffering from catastrophic forgetting.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">In other words, such models solve what I call the
<b>stability-plasticity dilemma</b>, as Adaptive Resonance Theory and my other learning models do. Without that, they will not perform well on their own in unexpected environments, which is what one needs in mobile robots that one would like to achieve some
degree of autonomy.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Moreover, my 1980
<b>thought experiment about how ANY system can AUTONOMOUSLY learn to correct predictive errors in a changing environment that is filled with unexpected events
</b>derives ART systems as the unique class of systems that can solve the stability-plasticity dilemma.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">The phrase
<b>Autonomous Adaptive Intelligence</b> summarizes in a simple phrase what my models try to achieve.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">In this regard, there are<b> design principles
</b>from which I derived the language learning part of my 2023 article on learning language meaning. These principles include the proper design of
<b>working memories</b> to temporarily store sequences of experienced items or events, and the proper design of the sequence learning mechanisms that are needed to unitize these sequential contingencies, much as we learn words, sentences, skills, and navigational
routes…and do so without experiencing catastrophic forgetting.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">The working memories
<b>are designed</b> to enable such sequence learning and stable memory to occur!<o:p></o:p></span></p>
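<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">To give a flavor of what such storage and readout involve, here is a deliberately tiny sketch, in Python, of a working memory that stores a sequence as a primacy gradient of activities and performs it by repeatedly reading out and resetting the most active item. It is an illustration only; the decay factor and readout rule are placeholder assumptions, not the mechanics of any particular published model.<o:p></o:p></span></p>
<pre style="font-size:10.0pt">
# Illustrative sketch (not from any published model): a sequence is stored
# as a primacy gradient, with earlier items more active; performance reads
# out and resets the most active item. The 0.9 decay factor is arbitrary.
items = ["A", "B", "C", "D"]
stm = {item: 0.9 ** i for i, item in enumerate(items)}  # earlier items more active

performed = []
while stm:
    nxt = max(stm, key=stm.get)   # read out the most active stored item
    performed.append(nxt)
    del stm[nxt]                  # reset it so the next item can win
print(performed)                  # ['A', 'B', 'C', 'D']: stored order recovered
</pre>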
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">As a result, all linguistic, spatial, and motor sequences are stored using the same kind of neural network design for working memory. It is a shared UNIVERSAL design for working
memories which enables all these kinds of information to be seamlessly combined, as needed, to make decisions, predictions, and actions work well.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">It is also a design that occurs in one form or another in multiple parts of our brains: a
<b>recurrent, shunting, on-center, off-surround network</b>. <o:p></o:p></span></p>
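<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">For concreteness, the following minimal sketch integrates one standard form of such shunting on-center off-surround dynamics with forward Euler. The parameters A and B and the faster-than-linear signal function are placeholder assumptions, not values from any of the models cited here.<o:p></o:p></span></p>
<pre style="font-size:10.0pt">
# Minimal sketch of a recurrent, shunting, on-center, off-surround network:
# dx_i/dt = -A*x_i + (B - x_i)*(I_i + f(x_i)) - x_i * sum_{k != i} (I_k + f(x_k))
# A faster-than-linear f contrast-enhances the input toward a winner.
import numpy as np

def step(x, I, A=1.0, B=1.0, dt=0.01, f=lambda s: s**2):
    sig = I + f(x)
    total = sig.sum()
    dx = -A * x + (B - x) * sig - x * (total - sig)
    return x + dt * dx

x = np.zeros(5)
I = np.array([0.2, 0.5, 1.0, 0.5, 0.2])  # hypothetical input pattern
for _ in range(2000):
    x = step(x, I)
print(np.round(x, 3))  # the largest input comes to dominate the stored pattern
</pre>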
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">What turns any such network into a working memory are the modulatory mechanisms that enable sequences to be stored, performed, and reset.
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Remarkably, evolutionary precursors of these modulatory mechanisms already exist in crustacea!<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">My Magnum Opus<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><a href="https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552">https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552</a><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">explains what these design principles and networks are, and how they work.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Best again,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Steve<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"><o:p> </o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-CA" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-CA" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Danny Silver <danny.silver@acadiau.ca><br>
<b>Date: </b>Sunday, February 25, 2024 at 7:48</span><span lang="EN-CA" style="font-size:12.0pt;font-family:"Arial",sans-serif;color:black"> </span><span lang="EN-CA" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">PM<br>
<b>To: </b>Grossberg, Stephen <steve@bu.edu>, Jeffrey Bowers <J.Bowers@bristol.ac.uk>, KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu>, Grossberg, Stephen <steve@bu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI<o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt">Dear Steve .. Thank you for these wonderful responses. I did start reading your
<span style="color:#212121">Magnum Opus two years ago but was not able to finish it because of other responsibilities at the time. Your words have encouraged me to have another look.
</span></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt">And I will most definitely have a deeper look at your
</span><i><span style="font-size:11.0pt"><a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full">Frontiers in Psychology<span style="font-style:normal">, August 2, 2023 article</span></a></span></i><span style="font-size:11.0pt">.
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">From what I have read so far, the following sentences seem to agree very much with our article
</span><span lang="EN-CA" style="font-size:11.0pt"><a href="https://arxiv.org/abs/2304.13626">https://arxiv.org/abs/2304.13626</a></span><span style="font-size:11.0pt">:
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">“</span><span lang="EN-CA" style="font-size:11.0pt">More generally, all the learning that is important for a child's understanding and survival requires interactions between multiple brain regions.” ;</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt">“.. a child can learn that specific language phrases and sentences strongly correlate with specific visual objects and events that the child is simultaneously watching a teacher use or perform.”
; and </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt"> “Language meaning is thus embodied in the interactions between a language utterance and the perceptual and affective experiences with which it is correlated, ..”</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt">I do not mean to suggest that LLMs are the definitive way forward. In fact, I agree that the early LLMs were/are deficient in terms of their focus on word tokens and the statistical relations between tokens.
I believe this is what leads you to say in the <i>Frontiers in Psychology</i> article the following: “</span><span lang="EN-CA" style="font-size:11.0pt">Perhaps most importantly, and central to the main theme of the current article, ChatGPT does not know
the real-world meaning of its predictions.”</span><span style="font-size:11.0pt"> However, I am intrigued by LLMs’ ability to process multiple modalities of data, and agree with you that this is key to developing an agent that can “know” things; i.e., develop
conceptual representations with semantic relations between such representations that are grounded in interaction with the real world. The most recent LLM-like models, particularly those being embedded in robotics, are starting to also accept and process
audio and video data as well as force and tactile sensory data for aspects of </span>
<span lang="EN-CA" style="font-size:11.0pt">perception, decision-making, control, and interaction [See the survey articles
<a href="https://arxiv.org/pdf/2311.07226.pdf">https://arxiv.org/pdf/2311.07226.pdf</a>, and
<a href="https://arxiv.org/abs/2311.12320">https://arxiv.org/abs/2311.12320</a>]. This does seem like a significant step forward. And it may very well be possible to do the same or better using ART network architectures versus LLM architectures. I suspect
there will be at least a dozen different but significant architectures tried over the next five years as we move more deeply into embodied AI (robotics).
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt;color:black">… Danny
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Grossberg, Stephen <steve@bu.edu><br>
<b>Date: </b>Sunday, February 25, 2024 at 4:39 PM<br>
<b>To: </b>Danny Silver <danny.silver@acadiau.ca>, Jeffrey Bowers <J.Bowers@bristol.ac.uk>, KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu>, Grossberg, Stephen <steve@bu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<table class="MsoNormalTable" border="0" cellpadding="0" style="background:#FDF5A7">
<tbody>
<tr>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal"><strong><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:black">CAUTION:
</span></strong><span style="font-size:11.0pt;color:black">This email comes from outside Acadia. Verify the sender and use caution with any requests, links or attachments.</span><o:p></o:p></p>
</td>
</tr>
</tbody>
</table>
<div>
<div>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Dear Danny,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">I looked at your article to see how you are using the term “concept”. Here’s a quote from it:</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">“Definition 1. A concept is an object, a collection of objects, or an abstract idea that can be learned and represented by an intelligent agent. Concepts may range from specific
physical objects (“that hockey puck”), to a category of objects (“birds”), to very abstract and semantically complex ideas (“blue”, “top”, “justice”, “try”, “meaning”). More complex concepts can be built out of multiple more primitive concepts “girl riding
a bike”, “writing a technical paper”).”</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Concepts like “specific physical objects” and “a category of objects” are regularly modeled by available neural network models like Adaptive Resonance Theory.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">The items that you group together that you call “very abstract and semantically complex ideas” vary greatly in their abstractness, such as “blue” vs. “top”. In all of them, however,
you are tacitly assuming an agent that has learned language and its meanings, at least if you literally want to use words to express these concepts.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">The following competence is within the capability of current models:
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Prime the model to store a representation of the color (not the name) “blue” in short-term memory, and then use eye movements to sequentially shift attention to the objects in
a scene until it resonates on an object that is blue.</span><span lang="EN-CA"><o:p></o:p></span></p>
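<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">As a caricature of that competence, the search loop below primes a feature vector in short-term memory and shifts attention across objects until the match exceeds a vigilance threshold. The feature vectors, match rule, and threshold are my own illustrative assumptions, not the mechanics of the models cited below.<o:p></o:p></span></p>
<pre style="font-size:10.0pt">
# Hypothetical sketch of primed visual search: attention shifts sequentially
# until an attended object's features resonate with the primed color "blue".
import numpy as np

prime = np.array([0.0, 0.0, 1.0])             # "blue" stored in short-term memory
scene = {"ball": np.array([1.0, 0.1, 0.1]),   # hypothetical object features
         "cup":  np.array([0.1, 0.9, 0.2]),
         "book": np.array([0.1, 0.2, 0.95])}
vigilance = 0.8                                # resonance threshold (assumption)

for name, features in scene.items():           # sequential attention shifts
    match = np.minimum(prime, features).sum() / prime.sum()
    if match >= vigilance:                      # resonance: this object fits the prime
        print("resonates on:", name)
        break
</pre>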
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Here are a few articles that model how our brains can do that kind of thing:</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Huang, T.-R., and Grossberg, S. (2010). Cortical dynamics of contextually cued attentive visual learning and search: Spatial and object evidence accumulation. <i>Psychological
Review</i>, <b>117(4)</b>, 1080-1112. </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><a href="https://sites.bu.edu/steveg/files/2016/06/HuangGrossberg2010PR.pdf">https://sites.bu.edu/steveg/files/2016/06/HuangGrossberg2010PR.pdf</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:#555555;background:white">Grossberg, S., & Huang, T.-R. (2009). ARTSCENE: A neural system for natural scene classification. <em><span style="font-family:"Aptos",sans-serif">Journal
of Vision</span></em>, <strong><span style="font-family:"Aptos",sans-serif">9</span></strong>(4):6, 1-19,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><a href="https://jov.arvojournals.org/article.aspx?articleid=2193487">https://jov.arvojournals.org/article.aspx?articleid=2193487</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:#555555;background:white">Foley, N.C., Grossberg, S. and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing,
useful-field-of-view, and crowding. <em><span style="font-family:"Aptos",sans-serif">Cognitive Psychology</span></em>, <strong><span style="font-family:"Aptos",sans-serif">65</span></strong>, 77-117.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:#555555;background:white"><a href="https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf">https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf</a> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">A concept like “justice” is hard for most humans to even define. I do not think that it is a good example of someone one would expect a model to learn.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">A concept like “meaning” needs to be operationalized. I have begun to do that in the following recent article. I welcome others who want to more deeply understand how we learn
language meanings to further develop or revise this model. This model requires the integration of a lot of previously defined and computationally simulated models. I illustrate that by copying its Abstract below.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">The model explains how we learn the meaning of phrases and sentences like the following one, which you noted in your email: “girl riding a bike”.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">I call the model ChatSOME, where SOME abbreviates Self-Organizing MEaning, because the model uses the kinds of processes that are needed to replace Generative AI models like
ChatGPT, which literally do not know what they are talking about.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Grossberg, S. (2023). How children learn to understand language meanings: A neural model of adult–child multimodal interactions in real-time. <i>Frontiers in Psychology</i>,
August 2, 2023. Section on <i>Cognitive Science</i>, Volume 14.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"><a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full">https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">“This article describes a biological neural network model that can be used to explain how children learn to understand language meanings about the perceptual and affective events
that they consciously experience. This kind of learning often occurs when a child interacts with an adult teacher to learn language meanings about events that they experience together. Multiple types of self-organizing brain processes are involved in learning
language meanings, including processes that control conscious visual perception, joint attention, object learning and conscious recognition, cognitive working memory, cognitive planning, emotion, cognitive-emotional interactions, volition, and goal-oriented
actions. The article shows how all of these brain processes interact to enable the learning of language meanings to occur. The article also contrasts these human capabilities with AI models such as ChatGPT. The current model is called the ChatSOME model, where
SOME abbreviates Self-Organizing MEaning.”</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Best,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">Steve</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-CA" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-CA" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Danny Silver <danny.silver@acadiau.ca><br>
<b>Date: </b>Sunday, February 25, 2024 at 1:40</span><span lang="EN-CA" style="font-size:12.0pt;font-family:"Arial",sans-serif;color:black"> </span><span lang="EN-CA" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">PM<br>
<b>To: </b>Jeffrey Bowers <J.Bowers@bristol.ac.uk>, Grossberg, Stephen <steve@bu.edu>, KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt">Jeff … Thanks for this. I can see how a local encoding of a concept into multiple cells can build some redundancy.
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt">But this seems like an inefficient and ineffective use of a finite representational space (and associated energy) for learn the wide variety of concepts in the world. Can you explain the efficacy
of a grandmother cell encoding (regardless if it is a single cell or multiple cell that do not use a distributed encoding) for a variety of related concepts such as cats that may vary small to large in size, vary in colour, and vary in their relation to humans
(house cats, barn cats, mountain lions, cheetas, lions). Would there be a different grandmother cell (or collections of such cells) encoding for each cat type? And if each type of cat is associated with a richer set of modal representation from various
regions of the brain encoding features such as shape, colour, smell, emotion, does a grandmother cell encoding not seem redundant and brittle as compared to a distributed representation that summarizes aspects of these modalities. There is also the issue
of a fluid topology over this set of concepts that allows a house cat to morphe into a barn cat to morphe into a larger wild animal based on changes in the modal features. And we know this topology changes over time as humans experience more of the world.
Initially, when we are children, such topologies are incorrect but of no consequent because our parents know better, later after learning more about the family of concepts we fill in the details changing the topology of the concepts to allow us to survive
on our own. </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt">But perhaps the discussion here hinges on the difference between the representation of concepts and the representation of symbols. As we describe in <a href="https://arxiv.org/abs/2304.13626">https://arxiv.org/abs/2304.13626</a>
there is the difference between concepts and symbols that refer to concepts. Concepts are complex and messy, but all animals are able to learn concepts – most importantly, the things they like to eat and the things that can eat them and the relations between
such are also concepts. Symbols on the other hand seem to be learned and used by only a few species on the planet. Symbols capture crude but important aspects of concepts and provide tools by which intelligent agents can communicate, with some difficulty.
</span><span lang="EN-CA" style="font-family:"Lucida Grande",sans-serif;color:black;background:white">Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems NOT because they are the fundamental
building blocks of thought, but because they are characterizations of subsymbolic processes that constitute thought.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt;color:black">… Danny
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-CA" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Jeffrey Bowers <J.Bowers@bristol.ac.uk><br>
<b>Date: </b>Sunday, February 25, 2024 at 1:26 PM<br>
<b>To: </b>Danny Silver <danny.silver@acadiau.ca>, Grossberg, Stephen <steve@bu.edu>, KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<table class="MsoNormalTable" border="0" cellpadding="0" style="background:#FDF5A7">
<tbody>
<tr>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal"><strong><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:black">CAUTION:
</span></strong><span style="font-size:11.0pt;color:black">This email comes from outside Acadia. Verify the sender and use caution with any requests, links or attachments.</span><o:p></o:p></p>
</td>
</tr>
</tbody>
</table>
<div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Hi Danny, again, this is defining grandmother cells in a narrow way that they are easily dismissed, and the objections you cite have been discussed in detail in many papers in the past. Grossberg
has already addressed some of your points, but let me just briefly comment on the first – the worry that damage to neurons is problematic for grandmother cells as there needs to be redundancy. This leads you to conclude distributed representations are necessary.
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">But there is nothing about redundancy that is inconsistent with grandmother cells. I consider this in detail in Bowers (2009) Psychological Review paper I referred to before, and here is just
one brief quote from the paper:</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">“But more important, even if it is granted that individual neurons are not sufficiently reliable to code for high-level perceptual tasks, it does not follow that some form of population code is
required. Instead, all that is required is (again) redundant grandmother cells that code for the same stimulus. If one neuron fails to respond to the stimulus on a given trial due to noise, another one (or many) equivalent ones will, in what Barlow (1995)
called “probability summation.” </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Indeed, ART can learn redundant grandmother cells, based on the vigilance parameter. If it set to the limit, the model effectively learns a localist grandmother cell each time a word or a face
is encoded (and instance theory).</span><span lang="EN-CA"><o:p></o:p></span></p>
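<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">To illustrate the role of vigilance (this is my own drastic simplification of a fuzzy-ART-style category learner, not code from any ART paper), note how raising vigilance toward its limit yields one category node per stored exemplar, i.e., a localist code:<o:p></o:p></span></p>
<pre style="font-size:10.0pt">
# Drastically simplified fuzzy-ART-style sketch: match = |min(I, w)| / |I|.
# A new node is recruited whenever no existing node passes vigilance.
import numpy as np

def learn(inputs, vigilance, beta=1.0):
    categories = []                                  # learned weight vectors
    for I in inputs:
        for j, w in enumerate(categories):
            if np.minimum(I, w).sum() / I.sum() >= vigilance:
                categories[j] = beta * np.minimum(I, w) + (1 - beta) * w
                break                                 # resonance: update this node
        else:
            categories.append(I.copy())               # mismatch everywhere: new node
    return categories

X = [np.array(v) for v in ([1., 0., 0.], [.9, .1, 0.], [0., 1., 0.], [0., .9, .1])]
print(len(learn(X, vigilance=0.7)))    # low vigilance: 2 coarse categories
print(len(learn(X, vigilance=0.99)))   # vigilance near the limit: 4 nodes, one per exemplar
</pre>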
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">The problem with so quickly dismissing grandmother cells is that researchers then reject out of hand important models like ART. I first got interested in the topic as researchers would just reject
all sorts of models in psychology because they did not include distributed representations like those learned in the PDP models of the time. And researchers are so sure of themselves that they do not even consider entire classes of models, or read critiques
that address all the standard points people make regarding grandmother cells.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Jeff </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Danny Silver <danny.silver@acadiau.ca><br>
<b>Date: </b>Sunday, 25 February 2024 at 03:13<br>
<b>To: </b>Jeffrey Bowers <J.Bowers@bristol.ac.uk>, Grossberg, Stephen <steve@bu.edu>, KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Dear Jeff, Stephen and others … The encoding of a concept or a symbol associated with a concept using a single neuron (grandmother cell) would be a poor choice both from a representational perspective
as well as from a functional perspective for a lifelong learning and reasoning agent. </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">First and foremost, representational redundancy make sense for an agent that can suffer physical damage. Steve’s position in the email below seems to support this. It also makes sense to encode
representation in a distributed fashion for the purposes of new concept consolidation and fine tuning of existing concepts and its variants. This would seem fundamental for a lifelong agent that must learn, unlearn and relearn many concepts over time using
a finite amount of representation (memory).</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">From a functional perspective an intelligent agent “knows” concepts through the integration of several sensory and motor modalities that provide primary inputs as well as secondary contextual
information. When an intelligent agent thinks of a “cat” it does so in the context of hearing, seeing, chasing, touching, smelling the animal over a variety of experiences. I suspect this is related to Steve’s clarification of the complexity of what we see
happening in the human nervous system when representing a concept.</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Also note that, when you ask a child if the animal in front of her is a “cat” her response verbally or in writing is a complex sequence of motor signals that are more like a song than a single
representation. This is quite different from the simple one-hot encodings output by current ANNs. Such a complex output sequence could be activated by a signal neuron, but that is certainly not a requirement, nor does a grandmother cell seem likely if the
encoding of a concept is based on several sensory modalities that must deal with perceptual variations over time and space. </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">My question, to those who believe that symbols and the concepts to which they refer are represented in a complex distributed manner, is the following: Are such representations likely to be static
in nature (e.g. a single activation within a small region of an embedding space), or are they likely to be dynamic in nature (e.g. a series of activations within a more complex temporal-spatial manifold of an emedding space).</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Danny Silver</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div id="ms-outlook-mobile-signature">
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Get <a href="https://aka.ms/o0ukef">
Outlook for iOS</a></span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div id="mail-editor-reference-message-container">
<div class="MsoNormal" align="center" style="text-align:center"><span lang="EN-GB" style="font-size:11.0pt">
<hr size="0" width="100%" align="center">
</span></div>
<div id="divRplyFwdMsg">
<p class="MsoNormal"><b><span lang="EN-GB" style="font-size:11.0pt">From:</span></b><span lang="EN-GB" style="font-size:11.0pt"> Connectionists <connectionists-bounces@mailman.srv.cs.cmu.edu> on behalf of Jeffrey Bowers <J.Bowers@bristol.ac.uk><br>
<b>Sent:</b> Saturday, February 24, 2024 5:06 PM<br>
<b>To:</b> Grossberg, Stephen <steve@bu.edu>; KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>; Gary Marcus <gary.marcus@nyu.edu>; Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc:</b> connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject:</b> Re: Connectionists: Early history of symbolic and neural network approaches to AI
</span><span lang="EN-CA"><o:p></o:p></span></p>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
</div>
<table class="MsoNormalTable" border="0" cellpadding="0" style="background:#FDF5A7">
<tbody>
<tr>
<td style="padding:.75pt .75pt .75pt .75pt">
<p class="MsoNormal"><strong><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:black">CAUTION:
</span></strong><span style="font-size:11.0pt;color:black">This email comes from outside Acadia. Verify the sender and use caution with any requests, links or attachments.</span><o:p></o:p></p>
</td>
</tr>
</tbody>
</table>
<div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">I think this is where terminology is confusing things. I agree that ART (and all
</span><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">other neural architectures) is “far from being a ‘grandmother cell’”. The question is whether a neural architecture includes grandmother cells – that is, a unit high in a hierarchy of units
that is used to classify objects. On distributed systems there is no such unit at any level of a hierarchy – it is patterns of activation all the way up. By contrast, on grandmother cell theories, there is an architecture that does include units that code
for an (abstract) category. Indeed, even all current fashionable DNNs include grandmother cells whenever they use “one hot encoding” of categories (which they almost always do).
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">So, just as grandmother cells can easy be falsified if you define a grandmother cell that only
<b>responds</b> to one category of input, you can falsify a grandmother cells by claiming that it requires only one cell to be active in a network. The classic question was whether simple cells mapped onto complex cells, that mapped onto more complex cells,
that eventually mapped onto singe neurons that code for one category. I’m a big fan of ART models, and in my way of thinking, your models include grandmother cells (other than perhaps your distributed ART model, that I’m not so familiar with – but I’m thinking
that does not include a winner-take-all dynamic).</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Jeff</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Grossberg, Stephen <steve@bu.edu><br>
<b>Date: </b>Saturday, 24 February 2024 at 16:46<br>
<b>To: </b>Jeffrey Bowers <J.Bowers@bristol.ac.uk>, KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu>, Grossberg, Stephen <steve@bu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Dear Jeff,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Thanks for your supportive remark.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">One thing to keep in mind is that, even if a recognition category has a compressed representation using a small, compact population of cells, a much larger population of cells
is needed for that category to work. </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">For starters, even a compact category representation is activated by a distributed pattern of activation across the network of feature-selective cells with which the category
resonates via excitatory feedback signals when it is chosen.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">In the case of invariant object categories, a widespread neural architecture is needed to learn it, including modulatory signals from the dorsal, or Where, cortical stream to
the ventral, or What, cortical stream where the category is being learned.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">These modulatory signals are needed to ensure that the invariant object category binds together only views that belong to that object, and not irrelevant features that may be
distributed across the scene.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">These modulatory signals also maintain spatial attention on the invariant category as it is being learned. I call the resonance that accomplishes this a surface-shroud resonance.
I propose that it occurs between cortical areas V4 and PPC and triggers a system-wide resonance at earlier and later cortical areas.
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Acting in space on the object that is recognized by the invariant category requires reciprocal What-to-Where stream interactions. These interactions embody a proposed solution
of the Where’s Waldo Problem.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">I have attached a couple of the figures that summarize the ARTSCAN Search architecture that tries to explain and simulate these interactions.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">This neural architecture is far from being a “grandmother cell”!</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">My Magnum Opus provides a lot more modeling explanations and data about these issues:</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"><a href="https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552">https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Best again,
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Steve</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Jeffrey Bowers <J.Bowers@bristol.ac.uk><br>
<b>Date: </b>Saturday, February 24, 2024 at 4:38</span><span lang="EN-GB" style="font-size:12.0pt;font-family:"Arial",sans-serif;color:black"> </span><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">AM<br>
<b>To: </b>Grossberg, Stephen <steve@bu.edu>, KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu>, Grossberg, Stephen <steve@bu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Dear Steve, I agree, the grandmother cell theory is ill defined, and it is often defined in such a way that it is false. But then people conclude from that that the brain encodes information
in a distributed manner, with each unit (neuron) coding for multiple different things. That conclusion is unjustified. I think your ART models provide an excellent example of one way to implement grandmother cell theories. ART can learn localist codes where
a single unit encodes an object in an abstract way. The Jennifer Aniston neuron results are entirely consistent with your models, even though a given neuron might respond above baseline to other inputs (at least prior to settling into a resonance). Jeff</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Grossberg, Stephen <steve@bu.edu><br>
<b>Date: </b>Friday, 23 February 2024 at 18:12<br>
<b>To: </b>Jeffrey Bowers <J.Bowers@bristol.ac.uk>, KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu>, Grossberg, Stephen <steve@bu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Dear Jeff et al.,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">The term Grandmother Cell was a good heuristic but, as has been noted in this email thread, is also ill-defined.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">It is known that there are cells in anterior Inferotemporal Cortex (ITa) that may be called invariant object recognition categories because they respond to a visually perceived
object from multiple views, sizes, and positions.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">There are also view-specific categories in posterior Inferotemporal Cortex (ITp) that do not have such broad invariance.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">I list below several of our articles that model how invariant object categories and view-specific categories may be learned. We also use the modeling results to explain a lot
of data.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Just a scan of the article titles illustrates that there has been a lot of work on this topic.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Fazl, A., Grossberg, S., and Mingolla, E. (2009). View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based
attentional shrouds. <i>Cognitive Psychology</i>, <b>58</b>, 1-48. </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"><a href="https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf">https://sites.bu.edu/steveg/files/2016/06/FazGroMin2008.pdf</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Cao, Y., Grossberg, S., and Markowitz, J. (2011). How does the brain rapidly learn and reorganize view- and positionally-invariant object representations in inferior temporal
cortex? <i>Neural Networks</i>, <b>24</b>, 1050-1061.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"><a href="https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf">https://sites.bu.edu/steveg/files/2016/06/NN2853.pdf</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Grossberg, S., Markowitz, J., and Cao, Y. (2011). On the road to invariant recognition: Explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale
task-sensitive attentive learning. <i>Neural Networks</i>, <b>24</b>, 1036-1049.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"><a href="https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf">https://sites.bu.edu/steveg/files/2016/06/GroMarCao2011TR.pdf</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Grossberg, S., Srinivasan, K., and Yazdabakhsh, A. (2011). On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during
3D vision. <i>Neural Networks</i>, <b>24</b>, 686-692. </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"><a href="https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf">https://sites.bu.edu/steveg/files/2016/06/GroSriYaz2011TR.pdf</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Foley, N.C., Grossberg, S. and Mingolla, E. (2012). Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and
crowding. <i>Cognitive Psychology</i>, <b>65</b>, 77-117.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"><a href="https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf">https://sites.bu.edu/steveg/files/2016/06/FolGroMin2012.pdf</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Grossberg, S., Srinivasan, K., and Yazdanbakhsh, A. (2014). Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with
eye movements. <i>Frontiers in Psychology: Perception Science, </i>doi: 10.3389/fpsyg.2014.01457</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"><a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full">https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.01457/full</a></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">More articles on related topics can be found on my web page sites.bu.edu/steveg, including how humans can search for an object at an expected position in space, even though its
invariant object category representation cannot be used to do so.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Best,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Aptos",sans-serif">Steve</span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Connectionists <connectionists-bounces@mailman.srv.cs.cmu.edu> on behalf of Jeffrey Bowers <J.Bowers@bristol.ac.uk><br>
<b>Date: </b>Thursday, February 22, 2024 at 11:11 AM<br>
<b>To: </b>KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Good point, I should not have used simple cells as an example of grandmother cells. In fact, I agree that some sort of population coding is likely supporting our perception of orientation. For
example, simple cells are oriented in steps of about 5 degrees, but we can perceive orientations at a much finer granularity, so it must be a combination of cells driving our perception.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">The other reason I should have not used simple cells is that grandmother cells are a theory about how we identify familiar categories of objects (my grandmother, or a dog or a cat). Orientation
is a continuous dimension where distributed coding may be more suitable. The better example I gave is the word representation DOG in the IA model. The fact that the DOG detector is partly activated by the input CAT does not falsify the hypothesis that DOG
is locally coded. Indeed, it has hand-wired to be localist. In the same way, the fact that a Jennifer Aniston neuron might be weakly activated by another face does not rule out the hypothesis that the neuron selectively codes for Jennifer Aniston. I agree
it is not strong evidence for a grandmother cell – there may be other images that drive the neuron even more, we just don’t know given the limited number of images presented to the patient. But it is interesting that there are various demonstrations that
artificial networks learn grandmother cells under some conditions – when you can test the model on all the familiar categories it has seen. So, I would not rule out grandmother cells out of hand.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Jeff </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk><br>
<b>Date: </b>Wednesday, 21 February 2024 at 20:56<br>
<b>To: </b>Jeffrey Bowers <J.Bowers@bristol.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Again, it is great to be examining the relationship between ‘real’ neural coding and the ins and outs of representation in ANNs. I’m really pleased to be able to make a few contributions to a
list which I’ve lurked on since the late 1980s!</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">I feel I should add an alternative interpretation of orientation coding in primary visual cortex to that so clearly explained by Jeffrey. It is, indeed, tempting to think of orientation tuned
cells as labelled lines or grandmother cells where we read off activity in individual cells as conveying the presence of a line segment with a specific orientation at a particular location in the visual field. As neuroscientists we can certainly do this. The
key question is whether brain areas outside primary visual cortex, which are consumers of information coded in primary visual cortex, also do this. The alternative view of orientation coding is that orientation is represented by a population code where orientation
is represented as the vector sum of orientation preferences in cells with many different orientation tunings, weighted by their levels of activity, and that it is this population code that is read by areas that are consumers of orientation information. The
notion of neural population coding of orientation was first tested electrophysiologically by Georgopoulos in 1982, examining population coding of the direction of arm movements in primary motor cortex. There is more recent psychophysical evidence that people’s
confidence in their judgements of the orientation of a visual stimulus can be predicted on the basis of a population coding scheme (Bays, 2016, A signature of neural coding at human perceptual limits. Journal of Vision,
</span><span lang="EN-GB"><a href="https://jov.arvojournals.org/article.aspx?articleid=2552242"><span style="font-size:11.0pt">https://jov.arvojournals.org/article.aspx?articleid=2552242</span></a></span><span lang="EN-GB" style="font-size:11.0pt">), where
a person’s judgment is indicative of the state of a high level consumer of orientation information.</span><span lang="EN-CA"><o:p></o:p></span></p>
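<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">For concreteness, a minimal sketch of that vector-sum read-out is given below, under simplified assumptions (noise-free, von Mises-like tuning in 5-degree steps, with orientation treated as 180-degree periodic via angle doubling). It is just the arithmetic of the population vector, not a model of V1.</span><span lang="EN-CA"><o:p></o:p></span></p>
<pre>
# A minimal sketch of a population-vector read-out of orientation, under
# simplified assumptions (noise-free tuning, uniform 5-degree grid of
# preferred orientations). Not a model of V1; numbers are illustrative.
import numpy as np

prefs = np.deg2rad(np.arange(0, 180, 5))   # 36 preferred orientations

def responses(theta_deg, kappa=4.0):
    """Noise-free activity of each cell; orientation is 180-deg periodic,
    so all angles are doubled inside the tuning function."""
    theta = np.deg2rad(theta_deg)
    return np.exp(kappa * (np.cos(2.0 * (theta - prefs)) - 1.0))

def population_vector(acts):
    """Vector sum of preferred orientations, weighted by activity,
    computed in doubled-angle space and mapped back to [0, 180)."""
    x = np.sum(acts * np.cos(2.0 * prefs))
    y = np.sum(acts * np.sin(2.0 * prefs))
    return np.rad2deg(np.arctan2(y, x)) / 2.0 % 180.0

# The read-out recovers orientations far finer than the 5-degree grid.
for stim in (37.3, 92.1, 178.6):
    print(stim, "->", round(population_vector(responses(stim)), 2))
</pre>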
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">So again, I’d err on the side of suggesting that although we can conceive of single neurons in primary visual cortex as encoding information (maybe not really symbols in this case anyway), it
isn’t our ability to interpret things like this that matters, rather, it is the way the rest of the brain interprets information delivered by primary visual cortex.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">cheers,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Bob</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"><img border="0" width="57" height="57" style="width:.5937in;height:.5937in" id="Picture_x0020_12" src="cid:image001.jpg@01DA64AF.5BBBA710" alt="Image result for university of durham logo">
<img border="0" width="118" height="55" style="width:1.2291in;height:.5729in" id="Picture_x0020_11" src="cid:image002.png@01DA64AF.5BBBA710" alt="signature_2025328812"> <img border="0" width="94" height="55" style="width:.9791in;height:.5729in" id="Picture_x0020_10" src="cid:image003.png@01DA64AF.5BBBA710" alt="signature_824875734"> <img border="0" width="46" height="56" style="width:.4791in;height:.5833in" id="Picture_x0020_9" src="cid:image004.jpg@01DA64AF.5BBBA710" alt="Image result for durham cvac"></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Professor of Psychology, University of Durham.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Durham PaleoPsychology Group.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Durham Centre for Vision and Visual Cognition.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Durham Centre for Visual Arts and Culture.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"><img border="0" width="49" height="49" style="width:.5104in;height:.5104in" id="Picture_x0020_8" src="cid:image005.jpg@01DA64AF.5BBBA710" alt="9k="></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Fellow. </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Canadian Institute for Advanced Research,
</span><span lang="EN-CA"><o:p></o:p></span></p>
<div style="border:none;border-bottom:solid windowtext 1.0pt;padding:0in 0in 1.0pt 0in">
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Brain, Mind & Consciousness Programme.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Department of Psychology,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">University of Durham,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Durham DH1 3LE, UK.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">p: +44 191 334 3261</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">f: +44 191 334 3434</span><span lang="EN-CA"><o:p></o:p></span></p>
<div style="border:none;border-bottom:solid windowtext 1.0pt;padding:0in 0in 1.0pt 0in">
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Jeffrey Bowers <J.Bowers@bristol.ac.uk><br>
<b>Date: </b>Wednesday, 21 February 2024 at 12:31<br>
<b>To: </b>KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk>, Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<div>
<p class="MsoNormal"><span lang="EN-GB">It is possible to define a grandmother cell in a way that falsifies them. For instance, defining grandmother cells as single neurons that only *respond* to inputs from one category. Another definition that is more plausible
is single neurons that only *represent* one category. In psychology there are “localist” models that have single units that represent one category (e.g., there is a unit in the Interactive Activation Model that codes for the word DOG). And a feature of localist
codes is that they are partly activated by similar inputs. So a DOG detector is partly activated by the input HOG by virtue of sharing two letters. But that partial activation of the DOG unit from HOG is no evidence against a localist or grandmother cell
representation of the word DOG in the IA model. Just as a simple cell of a vertical line is partly activated by a line 5 degrees off vertical – that does not undermine the hypothesis that the simple cell *represents* vertical lines. I talk about the plausibility
of Grandmother cells and discuss the Aniston cells in a paper I wrote sometime back:</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-family:"Arial",sans-serif;color:#222222;background:white">Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience.<span class="apple-converted-space"> </span></span><i><span lang="EN-GB" style="font-family:"Arial",sans-serif;color:#222222">Psychological
Review</span></i><span lang="EN-GB" style="font-family:"Arial",sans-serif;color:#222222;background:white">,<span class="apple-converted-space"> </span></span><i><span lang="EN-GB" style="font-family:"Arial",sans-serif;color:#222222">116</span></i><span lang="EN-GB" style="font-family:"Arial",sans-serif;color:#222222;background:white">(1),
220.</span><span lang="EN-CA"><o:p></o:p></span></p>
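<p class="MsoNormal"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB">A minimal sketch of the localist point is given below. It is not the actual Interactive Activation model (no letter or feature layers, no inhibition, no dynamics); the lexicon and the matching rule are illustrative assumptions.</span><span lang="EN-CA"><o:p></o:p></span></p>
<pre>
# A minimal localist sketch, NOT the actual Interactive Activation model:
# one unit per word, activated in proportion to position-matched letters.
# The lexicon and the matching rule are illustrative assumptions.
WORDS = ["DOG", "HOG", "CAT", "COG"]

def word_unit_activation(word_unit, input_word):
    """Fraction of letters matching in position between unit and input."""
    return sum(a == b for a, b in zip(word_unit, input_word)) / len(word_unit)

for inp in ("DOG", "HOG", "CAT"):
    print(inp, {w: round(word_unit_activation(w, inp), 2) for w in WORDS})
# HOG partially activates the DOG unit (O and G shared in position), yet
# the DOG unit still locally represents the single word DOG.
</pre>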
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Connectionists <connectionists-bounces@mailman.srv.cs.cmu.edu> on behalf of KENTRIDGE, ROBERT W. <robert.kentridge@durham.ac.uk><br>
<b>Date: </b>Wednesday, 21 February 2024 at 11:48<br>
<b>To: </b>Gary Marcus <gary.marcus@nyu.edu>, Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">I agree – empirical evidence is just what we need in this super-interesting discussion.
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">I should point out a few things about the Quiroga et al 2005 ‘Jennifer Aniston cell’ finding (<i>Nature</i>, <b>435</b>. 1102 - 1107 ).
</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Quiroga et al themselves are at pains to point out that whilst the cells they found responded to a wide variety of depictions of specific individuals they were not ‘Grandmother cells’ as defined
by Jerry Lettvin – that is, specific cells that respond to a broad range of depictions of an individual and *<b>only</b>* of that individual, meaning that one can infer that this individual is being perceived, thought of, etc. whenever that cell is active.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">The cells Quiroga found do, indeed, respond to remarkably diverse ranges of stimuli depicting individuals, including not just photos in different poses, at different ages, in different costumes
(including Hale Berry as Catwoman for the Hale Berry cell), but also names presented as text (e.g. ‘HALE BERRY’). Quiroga et al only presented stimuli representing a relatively small range of individuals and so it is unsafe to conclude that the cells they
found respond *<b>only</b>* to the specific individuals they found. Indeed, they report that the Jennifer Aniston cell also responded strongly to an image of a different actress, Lisa Kudrow, who appeared in ‘Friends’ along with Jennifer Aniston.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">So, the empirical evidence is still on the side of activity in sets of neurons as representing specific symbols (including those standing for specific individuals) rather than individual cells
standing for specific symbols.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">cheers</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Bob</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"><img border="0" width="57" height="57" style="width:.5937in;height:.5937in" id="Picture_x0020_5" src="cid:image001.jpg@01DA64AF.5BBBA710" alt="Image result for university of durham logo">
<img border="0" width="118" height="55" style="width:1.2291in;height:.5729in" id="Picture_x0020_4" src="cid:image002.png@01DA64AF.5BBBA710" alt="signature_2975123418"> <img border="0" width="94" height="55" style="width:.9791in;height:.5729in" id="Picture_x0020_3" src="cid:image003.png@01DA64AF.5BBBA710" alt="signature_2364801924"> <img border="0" width="46" height="56" style="width:.4791in;height:.5833in" id="Picture_x0020_2" src="cid:image004.jpg@01DA64AF.5BBBA710" alt="Image result for durham cvac"></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Professor of Psychology, University of Durham.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Durham PaleoPsychology Group.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Durham Centre for Vision and Visual Cognition.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Durham Centre for Visual Arts and Culture.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"><img border="0" width="49" height="49" style="width:.5104in;height:.5104in" id="Picture_x0020_1" src="cid:image005.jpg@01DA64AF.5BBBA710" alt="9k="></span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Fellow. </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Canadian Institute for Advanced Research,
</span><span lang="EN-CA"><o:p></o:p></span></p>
<div style="border:none;border-bottom:solid windowtext 1.0pt;padding:0in 0in 1.0pt 0in">
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Brain, Mind & Consciousness Programme.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Department of Psychology,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">University of Durham,</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Durham DH1 3LE, UK.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">p: +44 191 334 3261</span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">f: +44 191 334 3434</span><span lang="EN-CA"><o:p></o:p></span></p>
<div style="border:none;border-bottom:solid windowtext 1.0pt;padding:0in 0in 1.0pt 0in">
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">From:
</span></b><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Connectionists <connectionists-bounces@mailman.srv.cs.cmu.edu> on behalf of Gary Marcus <gary.marcus@nyu.edu><br>
<b>Date: </b>Wednesday, 21 February 2024 at 05:49<br>
<b>To: </b>Laurent Mertens <laurent.mertens@kuleuven.be><br>
<b>Cc: </b>connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject: </b>Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt">Deeply disappointing that someone would try to inject actual empirical evidence into this discussion.
</span><span lang="EN-GB" style="font-size:11.0pt;font-family:"Apple Color Emoji"">😂</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt"><span lang="EN-GB" style="font-size:11.0pt">On Feb 20, 2024, at 08:41, Laurent Mertens <laurent.mertens@kuleuven.be> wrote:</span><span lang="EN-CA"><o:p></o:p></span></p>
</blockquote>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Reacting to your statement:</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">"However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol."</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">What about the Grandmother/Jennifer Aniston/Halle Berry neuron?</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">(See, e.g.,
</span><span lang="EN-GB"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.caltech.edu_about_news_single-2Dcell-2Drecognition-2Dhalle-2Dberry-2Dbrain-2Dcell-2D1013&d=DwMFAw&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=it3XOFrc2yBru1bmF9dud4UoT60mjmur8mR3zGu365JPKmtWSuFnJTxRJOV4WSpa&s=kh-rqxQw6qcxbM8bhUYTHNaJHN5jtc3SLI5RXC5XgWA&e="><span style="font-size:12.0pt;font-family:"Aptos",sans-serif">https://www.caltech.edu/about/news/single-cell-recognition-halle-berry-brain-cell-1013</span></a></span><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">)</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">KR,</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Laurent</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div class="MsoNormal" align="center" style="text-align:center"><span lang="EN-GB" style="font-size:11.0pt">
<hr size="0" width="96%" align="center">
</span></div>
<div id="divRplyFwdMsg">
<p class="MsoNormal"><b><span lang="EN-GB" style="font-size:11.0pt;color:black">From:</span></b><span lang="EN-GB" style="font-size:11.0pt;color:black"> Connectionists <connectionists-bounces@mailman.srv.cs.cmu.edu> on behalf of Weng, Juyang <weng@msu.edu><br>
<b>Sent:</b> Monday, February 19, 2024 11:11 PM<br>
<b>To:</b> Michael Arbib <arbib@usc.edu>; connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject:</b> Re: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-GB" style="font-size:11.0pt">
</span><span lang="EN-CA"><o:p></o:p></span></p>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">Dear Michael,</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black"> You wrote, "Your brain did not deal with symbols?"</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black"> I have my Conscious Learning (DN-3) model that tells me:<br>
My brain "deals with symbols" that are sensed from the extra-body world by the brain's sensors and effecters.</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black"> However, inside the skull of my brain, there are not any neurons that have a one-to-one correspondence to the symbol. In this sense, the brain
does not have any symbol in the skull.</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black"> This is my educated hypothesis. The DN-3 brain does not need any symbol inside the skull.</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black"> In this sense, almost all neural network models are flawed about the brain, as long as they have a block diagram where each block corresponds to
a function concept in the extra-body world. I am sorry to say that, which may make many enemies. </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black"> Best regards,</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:12.0pt;font-family:"Aptos",sans-serif;color:black">-John </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<div class="MsoNormal" align="center" style="text-align:center"><span lang="EN-GB" style="font-size:11.0pt">
<hr size="0" width="96%" align="center">
</span></div>
<div id="x_divRplyFwdMsg">
<p class="MsoNormal"><b><span lang="EN-GB" style="font-size:11.0pt;color:black">From:</span></b><span lang="EN-GB" style="font-size:11.0pt;color:black"> Michael Arbib <arbib@usc.edu><br>
<b>Sent:</b> Monday, February 19, 2024 1:28 PM<br>
<b>To:</b> Weng, Juyang <weng@msu.edu>; connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject:</b> RE: Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-GB" style="font-size:11.0pt">
</span><span lang="EN-CA"><o:p></o:p></span></p>
<div>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
</div>
<p style="margin:0in"><span lang="EN-GB">So you believe that, as you wrote out these words, the neural networks in your brain did not deal with symbols?</span><span lang="EN-CA"><o:p></o:p></span></p>
<p style="margin:0in"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in">
<p style="margin:0in"><b><span lang="EN-GB">From:</span></b><span lang="EN-GB"> Connectionists <connectionists-bounces@mailman.srv.cs.cmu.edu>
<b>On Behalf Of </b>Weng, Juyang<br>
<b>Sent:</b> Monday, February 19, 2024 8:07 AM<br>
<b>To:</b> connectionists@mailman.srv.cs.cmu.edu<br>
<b>Subject:</b> Connectionists: Early history of symbolic and neural network approaches to AI</span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
<p style="margin:0in"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
<p style="margin:0in"><span lang="EN-GB" style="color:black">I do not agree with <span style="background:white">
Newell and Simon</span> if they wrote that. Otherwise, images and video are also symbols. They probably were not sophisticated enough in 1976 to realize why neural networks in the brain should not contain or deal with symbols.</span><span lang="EN-CA"><o:p></o:p></span></p>
<p style="margin-bottom:12.0pt"><span lang="EN-GB"> </span><span lang="EN-CA"><o:p></o:p></span></p>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</body>
</html>