Connectionists: 2024 Nobel Prize in Physics goes to Hopfield and Hinton
Juergen Schmidhuber
juergen.schmidhuber at kaust.edu.sa
Mon Oct 14 10:03:25 EDT 2024
To reach a broader audience, I made a popular tweet (see below) on the 2024 Physics Nobel Prize, which unfortunately isn't really about physics, but rewards plagiarism and misattribution in Artificial Intelligence (AI). In fact, AI has a major problem that's now even invading other fields (such as physics): the wrong people are credited for inventions of others. Some people have lost their titles or jobs due to plagiarism, e.g., Harvard's former president, but how can advisors now continue to tell their students that they should avoid plagiarism at all costs?
Of course, it is well known that plagiarism can be either "unintentional" or "intentional or reckless" [PLAG1-6], and the more innocent of the two may very well be partially the case here. But science has a well-established way of dealing with "multiple discovery" and plagiarism, be it unintentional [PLAG1-6][CONN21] or not [FAKE2], based on facts such as time stamps of publications and patents. The deontology of science requires that unintentional plagiarists correct their publications through errata and then credit the original sources properly in the future. The awardees didn't; instead they kept collecting citations for inventions of other researchers [DLP].
Original Nobel Foundation tweet:
https://x.com/NobelPrize/status/1843589140455272810
My response (now 1/7th as popular as the original):
https://x.com/SchmidhuberAI/status/1844022724328394780
Here is the text of the tweet:
The Nobel Prize in Physics 2024 for Hopfield & Hinton rewards plagiarism and incorrect attribution in computer science. It's mostly about Amari's "Hopfield network" and the "Boltzmann Machine."
1. The Lenz-Ising recurrent architecture with neuron-like elements was published in 1925 [L20][I24][I25]. In 1972, Shun-Ichi Amari made it adaptive such that it could learn to associate input patterns with output patterns by changing its connection weights [AMH1] (see sketch 1 after the tweet). However, Amari is only briefly cited in the "Scientific Background to the Nobel Prize in Physics 2024." Unfortunately, Amari's net was later called the "Hopfield network." Hopfield republished it 10 years later [AMH2] without citing Amari, and he did not cite Amari in later papers either.
2. The related Boltzmann Machine paper by Ackley, Hinton, and Sejnowski (1985) [BM] was about learning internal representations in hidden units of neural networks (NNs) [S20] (see sketch 2 after the tweet). It didn't cite the first working algorithm for deep learning of internal representations by Ivakhnenko & Lapa (Ukraine, 1965) [DEEP1-2][HIN]. It didn't cite Amari's separate work (1967-68) [GD1-2] on learning internal representations in deep NNs end-to-end through stochastic gradient descent (SGD). Neither the authors' later surveys [S20][DL3][DLP] nor the "Scientific Background to the Nobel Prize in Physics 2024" mention these origins of deep learning. ([BM] also did not cite relevant prior work by Sherrington & Kirkpatrick [SK75] and Glauber [G63].)
3. The Nobel Committee also lauds Hinton et al.'s method for layer-wise pretraining of deep NNs (2006) [UN4] (see sketch 3 after the tweet). However, this work cited neither the original layer-wise training of deep NNs by Ivakhnenko & Lapa (1965) [DEEP1-2] nor the original work on unsupervised pretraining of deep NNs (1991) [UN0-1][DLP].
4. The "Popular information" says: “At the end of the 1960s, some discouraging theoretical results caused many researchers to suspect that these neural networks would never be of any real use." However, deep learning research was obviously alive and kicking in the 1960s-70s, especially outside of the Anglosphere [DEEP1-2][GD1-3][CNN1][DL1-2][DLP][DLH].
5. Many additional cases of plagiarism and incorrect attribution can be found in the following reference [DLP], which also contains the other references above. One can start with Sec. 3:
[DLP] J. Schmidhuber (2023). How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23, Swiss AI Lab IDSIA, 14 Dec 2023. https://people.idsia.ch/~juergen/ai-priority-disputes.html
See also the following reference [DLH] for a history of the field:
[DLH] J. Schmidhuber (2022). Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, IDSIA, Lugano, Switzerland, 2022. Preprint arXiv:2212.11279. https://people.idsia.ch/~juergen/deep-learning-history.html (This extends the 2015 award-winning survey https://people.idsia.ch/~juergen/deep-learning-overview.html)
End of tweet.
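Sketch 1 (re point 1): a minimal Python/NumPy illustration of the kind of recurrent associative memory described in point 1: binary threshold units, Hebbian outer-product weights, and a recurrent retrieval dynamics. All names and numbers are my own illustrative choices, not taken from [AMH1] or [AMH2].

import numpy as np

def store(patterns):
    # Hebbian learning: superimpose outer products of +/-1 patterns.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def recall(W, state, steps=10):
    # Recurrent dynamics: update every unit by the sign of its input.
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Store two 8-unit patterns, then retrieve one from a corrupted cue.
P = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
              [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
W = store(P)
cue = P[0].copy()
cue[:2] *= -1                  # flip two bits
print(recall(W, cue))          # recovers the first stored pattern

The weight changes during storage are what makes the 1972 net adaptive; retrieval then settles into the stored pattern closest to the cue.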
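Sketch 2 (re point 2): a minimal sketch of the energy function and Gibbs sampling step underlying a Boltzmann Machine with +/-1 units, with the learning rule indicated in a comment. Again Python/NumPy with illustrative names; this is a didactic toy, not the implementation of [BM].

import numpy as np

rng = np.random.default_rng(0)

def energy(s, W, b):
    # Energy of a +/-1 state vector s under symmetric weights W, biases b.
    return -0.5 * s @ W @ s - b @ s

def gibbs_sweep(s, W, b, T=1.0):
    # One sweep of Gibbs sampling: each unit turns on with a
    # sigmoid probability determined by its energy gap.
    for i in range(len(s)):
        gap = 2.0 * (W[i] @ s + b[i])   # E(s_i=-1) - E(s_i=+1); diag(W)=0
        p_on = 1.0 / (1.0 + np.exp(-gap / T))
        s[i] = 1.0 if rng.random() < p_on else -1.0
    return s

# Learning (not shown in full) adjusts weights according to
#   dw_ij ~ <s_i s_j>_clamped - <s_i s_j>_free,
# i.e., correlations with data clamped on the visible units minus
# correlations at free-running equilibrium; the hidden units thereby
# learn internal representations.

# Tiny demo: 4 units, random symmetric weights, a few sweeps.
n = 4
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
b = np.zeros(n)
s = rng.choice([-1.0, 1.0], size=n)
for _ in range(5):
    s = gibbs_sweep(s, W, b)
print(s, energy(s, W, b))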
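Sketch 3 (re point 3): a minimal sketch of greedy layer-wise pretraining, here with tied-weight autoencoders trained by SGD; each layer learns to reconstruct the code of the layer below, and the resulting stack could then be fine-tuned. Purely illustrative; neither [UN4] nor the earlier methods [DEEP1-2][UN0-1] used exactly this variant.

import numpy as np

rng = np.random.default_rng(0)

def train_ae_layer(X, n_hidden, lr=0.02, epochs=100):
    # Train one tied-weight autoencoder layer with plain SGD;
    # returns the encoder weights and the encoded data.
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            h = np.tanh(W @ x)              # code
            e = W.T @ h - x                 # reconstruction error
            g = (W @ e) * (1.0 - h ** 2)    # chain rule through encoder
            W -= lr * (np.outer(g, x) + np.outer(h, e))
    return W, np.tanh(X @ W.T)

# Greedy stacking: each layer is pretrained on the previous layer's code.
X0 = rng.normal(size=(32, 8))
W1, X1 = train_ae_layer(X0, 6)
W2, X2 = train_ae_layer(X1, 4)   # deeper code; stack ready for fine-tuning
print(X2.shape)                   # (32, 4)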
PS:
6. The "Scientific Background to the Nobel Prize in Physics 2024" briefly cites the important work of Nakano (1972)[NAK72][DLH], a former collaborator of Amari. Nakano also had a recurrent associative memory, but it wasn't the "Hopfield network" first published by Amari (1972)[AMH1][DLH]. Remarkably, Hopfield [AMH2] was aware of Amari: he cites Amari's _later_ papers on the separate topic of self-organisation in NNs (1977, 1978), but ignores his crucial 1972 paper [AMH1].
7. The well-known backpropagation technique (Linnainmaa, 1970) [BP1-5][BPA-C][DLP] is an efficient way of applying the chain rule (Leibniz, 1676) to big networks with differentiable nodes [BP4] (a toy sketch follows at the end of this point). It is much more important for modern AI than the so-called "Hopfield network" and the "Boltzmann Machine." Backpropagation was also mentioned in the recent debate, although the Nobel Prize focuses on other things (otherwise the subsequent outcry would have been even greater). By 1985, compute was about 1,000 times cheaper than in 1970 [BP1], and the first desktop computers had become accessible in wealthier academic labs. Rumelhart et al. then published an experimental analysis of this known method [BP1-2], demonstrating that backpropagation can yield useful internal representations in hidden layers of NNs [RUM].
Hinton claimed that his co-author Rumelhart invented backpropagation. The "Scientific Background to the Nobel Prize in Physics 2024," however, correctly cites Linnainmaa (1970), who first published it, and Werbos (1982), who first applied it to NNs [DLP][DLH][DL1]. It does NOT mention, however, that even later surveys by Hinton failed to cite Linnainmaa's original work [DLP]. Reference [BP4] offers a compact history of backpropagation [DLH][DLP]: https://people.idsia.ch/~juergen/who-invented-backpropagation.html
Note that Google Scholar (by Hinton's former employer) hallucinates 55k citations for a 1986 backpropagation paper by Rumelhart et al., simply adding 28k citations for the book in which it appeared.
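To make point 7 concrete: below, the chain rule is written out by hand for a two-layer toy net in Python/NumPy and checked against a numerical derivative. The network and all names are my own illustrative choices, not taken from any of the cited papers.

import numpy as np

rng = np.random.default_rng(0)

# Toy data and a 2-layer net: x -> tanh(W1 x) -> W2 h -> squared error.
x  = rng.normal(size=(4,))           # input
y  = rng.normal(size=(2,))           # target
W1 = rng.normal(size=(3, 4)) * 0.5
W2 = rng.normal(size=(2, 3)) * 0.5

# Forward pass, storing the intermediates needed by the chain rule.
a = W1 @ x
h = np.tanh(a)
o = W2 @ h
loss = 0.5 * np.sum((o - y) ** 2)

# Reverse pass: apply the chain rule from the loss back to each weight.
dL_do  = o - y                       # d loss / d output
dL_dW2 = np.outer(dL_do, h)          # gradient for the output weights
dL_dh  = W2.T @ dL_do                # propagate the error through W2
dL_da  = dL_dh * (1.0 - h ** 2)      # through the tanh nonlinearity
dL_dW1 = np.outer(dL_da, x)          # gradient for the hidden weights

# Check one entry against a numerical derivative.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
lp = 0.5 * np.sum((W2 @ np.tanh(W1p @ x) - y) ** 2)
print(dL_dW1[0, 0], (lp - loss) / eps)   # the two should agree closely

The reverse sweep reuses the forward intermediates, which is exactly why this way of applying the chain rule scales to big networks.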
8. Hinton's Boltzmann Machine co-author Sejnowski claims [S20]: "In 1969, Minsky & Papert [M69] showed that shallow NNs without hidden layers are very limited and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s." In an interview, Sejnowski also claimed: "Our goal was to try to take a network with multiple layers—an input layer, an output layer and layers in between—and make it learn. It was generally thought, because of early work that was done in AI in the 60s, that no one would ever find such a learning algorithm because it was just too mathematically difficult." However, the 1969 book [M69] addressed a "problem" of Gauss & Legendre's shallow linear NNs (circa 1800) [DL1-2][DLH] that had already been solved 4 years prior by Ivakhnenko & Lapa's popular deep learning method (1965) [DEEP1-2][DL2], and then also by Amari's SGD for MLPs (1967) [GD1-2][DLH]. Minsky (who is cited by the "Scientific Background to the Nobel Prize in Physics 2024") was apparently unaware of this and failed to correct it later [DLH][HIN][DLP]. (A sketch at the end of this point illustrates the textbook limitation in question, XOR, and how a single hidden layer removes it.)
Sejnowski and Hinton have never cited the origins of deep learning in Ukraine and Japan in the 1960s and 1970s. None of the important algorithms for modern AI were invented by them.
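As for the limitation behind point 8: no single linear threshold unit can compute XOR, while one hidden layer suffices. A minimal Python/NumPy sketch with hand-picked weights (purely illustrative, not from any cited paper):

import numpy as np

# XOR truth table: not computable by one linear threshold unit,
# but computable by a one-hidden-layer threshold net.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 1, 1, 0])

step = lambda z: (z > 0).astype(int)

# Hidden units: h1 = OR(x1, x2), h2 = AND(x1, x2); output = h1 AND NOT h2.
h1 = step(X @ [1, 1] - 0.5)     # fires unless both inputs are 0
h2 = step(X @ [1, 1] - 1.5)     # fires only when both inputs are 1
out = step(h1 - h2 - 0.5)       # XOR
print(out, (out == t).all())    # [0 1 1 0] True

The point of the 1965 and 1967 methods cited above is precisely that such hidden representations can be learned rather than hand-picked.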
Prof. Juergen Schmidhuber
Co-Chair, KAUST Center of Generative AI
Scientific Director, Swiss AI Lab IDSIA
Co-Founder & ex-President, NNAISENSE
Board Member, Delvitech
CV: http://www.idsia.ch/~juergen/cv.html
> On 8. Oct 2024, at 12:19, Barak A. Pearlmutter <barak at pearlmutter.net> wrote:
>
> The Nobel Prize in Physics 2024 was awarded to John J. Hopfield and
> Geoffrey E. Hinton "for foundational discoveries and inventions that
> enable machine learning with artificial neural networks"
>
> https://www.nobelprize.org/prizes/physics/2024/summary/
>
> Congratulations, and well deserved!