<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">Dear Connectionists colleagues,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">Here are some short summaries of the history of neural network discoveries, as I experienced it, that are relevant to the recent Nobel Prizes to Hopfield and Hinton:<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><img width="822" height="612" style="width:8.5625in;height:6.375in" id="Picture_x0020_1" src="cid:image001.png@01DB22B7.38248A80" alt="A diagram of a scientific experiment
Description automatically generated with medium confidence"></span><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">THE NOBEL PRIZES IN PHYSICS TO HOPFIELD AND HINTON<br>
FOR MODELS THEY DID NOT DISCOVER: THE CASE OF HOPFIELD<br>
<br>
Here I will summarize my concerns about the Hopfield award.<br>
<br>
I published articles in 1967 – 1972 in the Proceedings of the National Academy of Sciences that introduced the Additive Model that Hopfield used in 1984. My articles proved global theorems about the limits and oscillations of my Generalized Additive Models.
See <a href="http://sites.bu.edu/steveg" target="_self">sites.bu.edu/steveg</a> for these articles.
For example:

Grossberg, S. (1971). Pavlovian pattern learning by nonlinear neural networks. Proceedings of the National Academy of Sciences, 68, 828-831.
https://lnkd.in/emzwx4Tw

This article illustrates that my mathematical results were part of a research program to develop biological neural networks that provide principled mechanistic explanations of psychological and neurobiological data.
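For readers who have not seen it, the Additive Model in question is a system of ordinary differential equations; in generic notation (a representative statement, since individual papers vary the constants and signal functions):

\frac{dx_i}{dt} = -A_i x_i + \sum_{j=1}^{n} f_j(x_j)\, z_{ji} + I_i

Here x_i is the activity, or STM trace, of cell (population) i; A_i is its decay rate; f_j is a signal function; z_{ji} is the adaptive weight, or LTM trace, in the pathway from cell j to cell i; and I_i is an external input. Hopfield (1984) analyzed equations of this additive form with symmetric connections.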
Later, Michael Cohen and I published a Liapunov function that included the Additive Model and generalizations thereof in 1982 and 1983, before Hopfield (1984) appeared.

For example:

Cohen, M.A. and Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 815-826.
https://lnkd.in/eAFAdvbu
I was told that Hopfield knew about my work before he published his 1984 article, yet he cited none of it.

Recall that I started my neural networks research in 1957 as a freshman at Dartmouth College.

That year, I introduced the biological neural network paradigm, as well as the short-term memory (STM), medium-term memory (MTM), and long-term memory (LTM) laws that are used to this day, including in the Additive Model, to explain data about how brains make minds.

See the review in https://lnkd.in/gJZJtP_W .
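For concreteness, here are representative forms of these laws in generic notation (a sketch; individual models vary the constants and signal functions). STM is the fast activity equation itself, as in the Additive Model above. MTM is a habituative transmitter gate

\frac{dy_i}{dt} = A\,(B - y_i) - C\, f(x_i)\, y_i,

in which the gating variable y_i accumulates toward B and is inactivated by signaling. LTM is a gated learning law such as

\frac{dz_{ij}}{dt} = f(x_i)\,\big[ -z_{ij} + g(x_j) \big],

in which the signal f(x_i) turns learning on and off, so that a weight tracks activity only when its pathway is active.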
When I started in 1957, I knew of no one else who was doing neural networks. That is why my colleagues call me the Father of AI.

I then worked hard to create a neural networks community, notably a research center, an academic department, the International Neural Network Society, the journal Neural Networks, multiple international conferences on neural networks, and Boston-area research centers, while training over 100 gifted PhD students, postdocs, and faculty to do neural network research. See the Wikipedia page.

That is why I did not have the time or strength to fight for priority for my models.

Recently, I was able to provide a self-contained and non-technical overview and synthesis of some of my scientific discoveries since 1957, as well as explanations of the work of many other scientists, in my 2021 magnum opus:

Conscious Mind, Resonant Brain: How Each Brain Makes a Mind

https://lnkd.in/eiJh4Ti

++++++++++++++++++++++++++++++++++++++++++++++++++++
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">THE NOBEL PRIZES IN PHYSICS TO HOPFIELD AND HINTON<br>
FOR MODELS THEY DID NOT DISCOVER: THE CASE OF HINTON<br>
<br>
Here I summarize my concerns about the Hinton award.<br>
<br>
Many authors developed Back Propagation (BP) before Hinton; e.g., Amari (1967), Werbos (1974), Parker (1982), all before Rumelhart, Hinton, & Williams (1986).<br>
<br>
BP has serious computational weaknesses:<br>
<br>
It is UNTRUSTWORTHY (because it is UNEXPLAINABLE).<br>
<br>
It is UNRELIABLE (because it can experience CATASTROPHIC FORGETTING.<br>
<br>
It should thus never be used in financial or medical applications.<br>
<br>
BP learning is also SLOW and uses non-biological NONLOCAL WEIGHT TRANSPORT.<br>
<br>
See Figure, right column, top.<br>
<br>
In 1988, I published 17 computational problems of BP:<br>
<a href="https://lnkd.in/erKJvXFA" target="_self">https://lnkd.in/erKJvXFA</a><br>
BP gradually fell out of favor because other models were better.

Later, huge online databases and supercomputers enabled Deep Learning to use BP to learn.

My 1988 article contrasted BP with Adaptive Resonance Theory (ART), which I first published in 1976:
https://lnkd.in/evkfq22G

See Figure, right column, bottom.

ART never had BP's problems; a minimal sketch of why appears below.

ART is now the most advanced cognitive and neural theory that explains HOW HUMANS LEARN TO ATTEND, RECOGNIZE, and PREDICT events in a changing world.

ART also explains and simulates data from hundreds of psychological and neurobiological experiments.
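To make the contrast with BP concrete, here is a minimal Python sketch in the spirit of ART 1 for binary inputs with fast learning (my own illustration; the class name, simplifications, and parameter values are mine). The key design feature is the vigilance test: a poorly matching category is reset rather than retrained, and a novel input recruits a new category, so old memories are refined rather than overwritten.

import numpy as np

class MiniART:
    """ART 1-style unsupervised categorization of binary vectors (fast learning)."""

    def __init__(self, rho=0.7, alpha=0.001):
        self.rho = rho        # vigilance: how well a category must match the input
        self.alpha = alpha    # choice parameter: small value favors specific categories
        self.w = []           # one binary prototype per committed category

    def present(self, I):
        I = np.asarray(I, dtype=float)
        # Search committed categories in order of the choice function T_j.
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(I, self.w[j]).sum()
                                      / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(I, self.w[j])          # feature intersection of I and w_j
            if match.sum() / I.sum() >= self.rho:     # vigilance test
                self.w[j] = match                     # fast learning: refine the prototype
                return j
            # Otherwise: reset this category and continue the search.
        self.w.append(I.copy())                       # no match: commit a new category
        return len(self.w) - 1

net = MiniART(rho=0.7)
for p in ([1,1,1,0,0,0], [1,1,0,0,0,0], [0,0,0,1,1,1], [0,0,0,1,1,0]):
    print(p, "-> category", net.present(p))

Note that learning an entirely new input class (the last two patterns) leaves the first category's prototype untouched, which is the sense in which this design avoids catastrophic forgetting.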
In 1980, I derived ART from a THOUGHT EXPERIMENT about how ANY system can AUTONOMOUSLY learn to correct predictive errors in a changing world:
https://lnkd.in/eGWE8kJg

The thought experiment derives ART from a few facts of life that do not mention mind or brain.

ART is thus a UNIVERSAL solution of the problem of autonomous error correction in a changing world.

That is why ART models can be used in designs for AUTONOMOUS ADAPTIVE INTELLIGENCE in engineering, technology, and AI.

ART also proposes a solution of the classical MIND-BODY PROBLEM:

HOW, WHERE in our brains, and WHY, from a deep computational perspective, we CONSCIOUSLY SEE, HEAR, FEEL, and KNOW about the world, and use our conscious states to PLAN and ACT to realize VALUED GOALS.

For details, see:

Conscious Mind, Resonant Brain: How Each Brain Makes a Mind

https://lnkd.in/eiJh4Ti
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif">+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:18.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>