<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Aptos;
panose-1:2 11 0 4 2 2 2 2 2 4;}
@font-face
{font-family:"Times New Roman \(Body CS\)";
panose-1:2 11 6 4 2 2 2 2 2 4;}
@font-face
{font-family:Times;
panose-1:2 11 6 4 2 2 2 2 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:10.0pt;
font-family:"Aptos",sans-serif;}
h4
{mso-style-priority:9;
mso-style-link:"Heading 4 Char";
margin:0in;
text-align:justify;
page-break-after:avoid;
font-size:12.0pt;
font-family:Times;
letter-spacing:1.0pt;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
span.Heading4Char
{mso-style-name:"Heading 4 Char";
mso-style-priority:9;
mso-style-link:"Heading 4";
font-family:"Times New Roman",serif;
color:#0F4761;
mso-ligatures:none;
font-style:italic;}
span.EmailStyle22
{mso-style-type:personal-reply;
font-family:"Aptos",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;
mso-ligatures:none;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style>
</head>
<body lang="EN-US" link="blue" vlink="purple" style="word-wrap:break-word">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">Dear Connectionists colleagues,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">I am writing to announce the publication of my new article, which reviews psychological and neurobiological properties of neural network models composed of spiking neurons. The article is:<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">Grossberg, S. (2025). Spiking neural network models of neurons and networks for perception, learning, cognition, and navigation: A review.
<i>Brain Sciences</i>, 15(8), 870.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"> <a href="https://www.mdpi.com/2076-3425/15/8/870" title="https://www.mdpi.com/2076-3425/15/8/870">https://www.mdpi.com/2076-3425/15/8/870</a><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">The Abstract of the article summarizes some of its high points:<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">This article reviews and synthesizes highlights of the history of rate-based and spiking neural network models. It presents theoretical and experimental results showing how <i>all</i> rate-based neural network models, whose cells obey the membrane equations of neurophysiology, also called shunting laws, can be converted into spiking neural network models without any loss of explanatory power, and often with gains in
explanatory power. These results are relevant to all the main brain processes, including individual neurons and networks for perception, learning, cognition, and navigation. The results build upon the hypothesis that the functional units of brain processes
are spatial patterns of cell activities, or short-term-memory (STM) traces, and spatial patterns of learned adaptive weights, or long-term-memory (LTM) patterns. It is also shown how spatial patterns that are learned by spiking neurons during childhood can
be preserved even as the child’s brain grows and deforms while it develops towards adulthood. Indeed, this property of spatiotemporal self-similarity may be one of the most powerful properties that individual spiking neurons contribute to the development of
large-scale neural networks and architectures throughout life.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt"><o:p> </o:p></span></p>
<div id="mail-editor-reference-message-container">
<div>
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span style="font-size:12.0pt;color:black">From:
</span></b><span style="font-size:12.0pt;color:black">Grossberg, Stephen &lt;steve@bu.edu&gt;<br>
<b>Date: </b>Wednesday, July 30, 2025 at 9:42</span><span style="font-size:12.0pt;font-family:"Arial",sans-serif;color:black"> </span><span style="font-size:12.0pt;color:black">AM<br>
<b>To: </b>connectionists@cs.cmu.edu &lt;connectionists@cs.cmu.edu&gt;<br>
<b>Cc: </b>Stephen Grossberg &lt;steve@cns.bu.edu&gt;<br>
<b>Subject: </b>Re: From ChatGPT to Artificial General Intelligence: A Neural Network Model of How our Brains Learn Large Language Models and their Meanings
<o:p></o:p></span></p>
</div>
<div>
<div id="mail-editor-reference-message-container">
<div>
<div>
<div>
<h4><span style="font-size:14.0pt;font-family:"Arial",sans-serif;font-weight:normal">Dear Connectionists Colleagues,</span><o:p></o:p></h4>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:Times"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif">I am happy to announce the publication of my article, which describes a neural network model of how humans learn large language models and their meanings, while providing a blueprint for achieving Artificial General Intelligence.</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif">These results show how to solve foundational problems of, and go beyond, models like Deep Learning and the LLMs promoted by OpenAI and other companies.</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif">The article is:</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif">Neural Network Models of Autonomous Adaptive Intelligence and Artificial General Intelligence: How our Brains Learn Large Language Models and their Meanings.
</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"><i>Frontiers in Systems Neuroscience</i>, 29 July 2025, Volume 19</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"><a href="https://www.frontiersin.org/journals/systems-neuroscience/articles/10.3389/fnsys.2025.1630151/full">https://www.frontiersin.org/journals/systems-neuroscience/articles/10.3389/fnsys.2025.1630151/full</a></span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif">The article Abstract illustrates its scope:</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif">“This article describes a biological neural network model that explains how humans learn to understand large language models and their meanings. This kind of learning typically
occurs when a student learns from a teacher about events that they experience together. Multiple types of self-organizing brain processes are involved, including content-addressable memory; conscious visual perception; joint attention; object learning, categorization,
and cognition; conscious recognition; cognitive working memory; cognitive planning; neural-symbolic computing; emotion; cognitive-emotional interactions and reinforcement learning; volition; and goal-oriented actions. The article advances earlier results showing
how small language models are learned that have perceptual and affective meanings. The current article explains how humans, and neural network models thereof, learn to consciously see and recognize an unlimited number of visual scenes. Then, bi-directional
associative links can be learned and stably remembered between these scenes, the emotions that they evoke, and the descriptive language utterances associated with them. Adaptive resonance theory circuits control model learning and self-stabilizing memory.
These human capabilities are not found in AI models such as ChatGPT. The current model is called ChatSOME, where SOME abbreviates Self-Organizing MEaning. The article summarizes neural network highlights since the 1950s and leading models, including adaptive
resonance, deep learning, LLMs, and transformers.”</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif">Best to all,</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif">Stephen Grossberg</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif">sites.bu.edu/steveg</span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="font-size:14.0pt;font-family:"Arial",sans-serif"> </span><o:p></o:p></p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</body>
</html>