Connectionists: 2024 Nobel Prize in Physics goes to Hopfield and Hinton: History of relevant discoveries before Hopfield
Grossberg, Stephen
steve at bu.edu
Tue Oct 8 21:14:24 EDT 2024
Dear Steve,
Thanks for your detailed reply.
I believe that what I wrote about the 1974 work of Paul Werbos, which also developed computational examples, gives him modern priority for back propagation.
Mentioning lots of other applications does not diminish Paul’s priority.
What Paul did not do well was to market his work.
But good marketing does not imply priority, just as Voltaire’s “marketing” of the discoveries of Newton on the Continent did not diminish Newton’s priority. Just Google to find the following quote:
“Yes, Voltaire played a significant role in spreading Isaac Newton's scientific ideas across the European continent, particularly through his writings like "Letters from England" which highlighted Newton's work and helped popularize Newtonian physics in France and beyond.”
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I also am concerned about the Hopfield award because, again, the Nobel Prize committee does not seem to know the scientific history behind that.
I published several articles in 1967 – 1972 in the Proceedings of the National Academy of Sciences that introduced the Additive Model that Hopfield used in 1984. This work proved global theorems about the limits and oscillations of my Generalized Additive Models. See sites.bu.edu/steveg for downloadable versions of these articles. For example:
Grossberg, S. (1971). Pavlovian pattern learning by nonlinear neural networks. Proceedings of the National Academy of Sciences, 68, 828-831.
https://sites.bu.edu/steveg/files/2016/06/Gro1971ProNatAcaSci.pdf
You will note from the title of this article that the model arose from my discovery and development of biological neural networks to provide principled mechanistic explanations of increasingly large psychological and neurobiological databases. One of my early loves was animal conditioning and associative learning, to whose clarification the above article, one in a series, contributed.
Building on these discoveries, Michael Cohen and I published a Liapunov function that included the Additive Model, Shunting Model, and major generalizations thereof in 1982 and 1983 before the 1984 Hopfield article appeared. For example,
Cohen, M.A. and Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 815-826.
https://sites.bu.edu/steveg/files/2016/06/CohGro1983IEEE.pdf
Significantly, we proved mathematically that our generalized Liapunov function worked in all cases.
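For readers who have not seen that article, here is a minimal sketch, in LaTeX notation of my own choosing rather than the paper's exact symbols, of the kind of system and Liapunov function involved. The networks treated there have the form

  \dot{x}_i = a_i(x_i) \Big[ b_i(x_i) - \sum_{k=1}^{n} c_{ik}\, d_k(x_k) \Big], \qquad c_{ik} = c_{ki},

and, under suitable positivity and monotonicity hypotheses on the a_i and d_k, the function

  V(x) = -\sum_{i=1}^{n} \int_{0}^{x_i} b_i(\xi)\, d_i'(\xi)\, d\xi + \frac{1}{2} \sum_{j,k=1}^{n} c_{jk}\, d_j(x_j)\, d_k(x_k)

is nonincreasing along trajectories, so it serves as a global Liapunov function. The Additive Model, and with it Hopfield's 1984 continuous network, is recovered as the special case a_i(x_i) = 1 and b_i(x_i) = -A_i x_i + I_i, with sigmoidal signal functions d_k.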
This 1983 article capped a series of earlier articles in which I proved theorems about the limiting and oscillatory behavior of increasingly general neural networks that arose in multiple kinds of applications.
One does not pull such a result out of a hat.
In this regard, I was told that Hopfield knew about my work before he published his 1984 article.
I was also alerted before his publication by colleagues of mine that Hopfield had just presented a public lecture about my model, declaring it a major breakthrough.
At that time in my life, I was trying to manage a storm of creative ideas that thrust me into multiple topics in neural networks. One example was my 1978 Theory of Human Memory, which laid out a major research program that took several decades, and multiple collaborators, to fully develop.
Please keep in mind that I started my research in neural networks in 1957 as a Freshman at Dartmouth College when I was taking Introductory Psychology.
That year, I introduced the paradigm of using nonlinear systems of neural networks, as well as the short-term memory (STM), medium-term memory (MTM), and long-term memory (LTM) laws that are used in one specialization or another to this day, including in the Additive Model, to computationally explain data about how our brains make our minds. See the historical summary in https://en.wikipedia.org/wiki/Stephen_Grossberg .
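To make these terms concrete, here is a rough sketch, in illustrative LaTeX notation rather than a quotation of the original equations, of the STM and LTM laws referred to above. A shunting STM law has the form

  \dot{x}_i = -A x_i + (B - x_i)\, E_i - (C + x_i)\, J_i,

where E_i and J_i denote the total excitatory and inhibitory inputs to cell i; the Additive Model arises when the shunting factors (B - x_i) and (C + x_i) are replaced by constants. A gated steepest descent LTM law has the form

  \dot{z}_{ji} = f(x_j)\, \big[ -z_{ji} + h(x_i) \big],

in which learning is gated on and off by the presynaptic signal f(x_j) and tracks a function h of postsynaptic activity. The MTM laws describe habituative transmitter gates that multiply such signals.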
I was not thinking about priority. I was trying to develop the latest in a steady stream of new ideas.
When I started in 1957, I knew no one else who was doing neural networks. That is why several of my colleagues call me the Father of AI.
Because I was intellectually lonely, and wanted others to enjoy and be able to contribute to what I viewed as a paradigm-shifting breakthrough, I felt strongly the need to help create a neural networks community.
That challenge thrust me into founding infrastructure for the field of neural networks, including a research center, academic department, the International Neural Network Society, the journal Neural Networks, multiple international conferences on neural networks, and Boston-area research centers, while I began training over 100 gifted PhD students, postdocs, and faculty to do neural network research. See the Wikipedia page for a summary.
I did not feel that I had the strength to fight priority battles on top of all my educational, administrative, and leadership responsibilities.
As some of you know, I was recently able to provide a self-contained and non-technical overview and synthesis of some of my scientific discoveries since 1957, as well as explanations of the work of hundreds of other scientists, in my 2021 Magnum Opus
Conscious Mind, Resonant Brain: How Each Brain Makes a Mind
https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552
I was grateful to learn that the book won the 2022 PROSE book award in Neuroscience of the Association of American Publishers.
Best to all,
Steve
From: Brad Wyble <bwyble at gmail.com>
Date: Tuesday, October 8, 2024 at 7:56 PM
To: Stephen José Hanson <jose at rubic.rutgers.edu>
Cc: Grossberg, Stephen <steve at bu.edu>, Jonathan D. Cohen <jdc at princeton.edu>, Connectionists <connectionists at cs.cmu.edu>
Subject: Re: Connectionists: 2024 Nobel Prize in Physics goes to Hopfield and Hinton
We really can trace the current AI boom back to John Carmack who wrote Doom, which ushered in the era of GPU-hungry computing. Credit where it's due please.
On Tue, Oct 8, 2024 at 4:10 PM Stephen José Hanson <jose at rubic.rutgers.edu> wrote:
Hi Steve,
The problem every writer encounters is what can be counted as resolved knowledge rather than new/novel knowledge. In the law this is of course “legal precedent”: does a reference point to a recent precedent, or does one from the 17th century hold precedence? In the present case, I agree that calculating gradients of functions using the chain rule was invented (Legendre -- least squares) far before Rumelhart and Hinton applied it to error gradients in acyclic/cyclic networks, and of course there were others, as you say, in the 20th century who also applied error gradients to networks (Parker, LeCun, et al.).

Schmidhuber says all that matters is the “math,” not the applied context. However, I seriously doubt that Legendre could have imagined that applying gradients of function error successively in an acyclic network would produce a hierarchical kinship representation (distinguishing the mothers, fathers, sons, aunts, grandparents, etc. of an Italian and an English family) in the hidden units of a network, simply by observing individuals with fixed feature relations. I think any reasonable person would maintain that this application is completely novel and could not be predicted, in or out of context, from the “math,” and certainly not from the 18th century. Hidden units were new in this context, and their representational nature was novel in this context.

Scope of reference is also based on logical or causal proximity to the reference. In this case, referencing Darwin or Newton in all biological or physics papers should be based on the outcome of the metaphorical test of whether the recent results tie back to the original source in some direct line; for example, was Oswald's grandfather responsible for the death of President John F. Kennedy? Failing this test suggests that the older reference may not have scope. But of course this can be subjective.
Steve
On 10/8/24 2:38 PM, Grossberg, Stephen wrote:
Actually, Paul Werbos developed back propagation into its modern form, and worked out computational examples, for his 1974 Harvard PhD thesis.
Then David Parker rediscovered it in 1982, etc.
Schmidhuber provides an excellent and wide-ranging history of many contributors to Deep Learning and its antecedents:
https://www.sciencedirect.com/science/article/pii/S0893608014002135?casa_token=k47YCzFwcFEAAAAA:me_ZGF5brDqjRihq5kHyeQBzyUMYBypJ3neSinZ-cPn1pnyi69DGyM9eKSyLsdiRf759I77c7w
This article has been cited over 23,000 times.
From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Stephen José Hanson <jose at rubic.rutgers.edu>
Date: Tuesday, October 8, 2024 at 2:25 PM
To: Jonathan D. Cohen <jdc at princeton.edu>, Connectionists <connectionists at cs.cmu.edu>
Subject: Re: Connectionists: 2024 Nobel Prize in Physics goes to Hopfield and Hinton
Yes, Jon, good point here, and although there is a through line from Hopfield to Hinton and Sejnowski (i.e., Boltzmann machines, and on to DL and LLMs),
Dave of course invented BP. Geoff would always say that his own contribution was to try to talk Dave out of it, since it had so many computational problems and could in no way be considered biologically plausible.
Steve
On 10/8/24 8:47 AM, Jonathan D. Cohen wrote:
I’d like to add, in this context, a note in memoriam of David Rumelhart, who was an integral contributor to the work honored by today’s Nobel Prize.
jdc
--
Stephen José Hanson
Professor, Psychology Department
Director, RUBIC (Rutgers University Brain Imaging Center)
Member, Executive Committee, RUCCS
--
Brad Wyble (he/him)