Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Asim Roy ASIM.ROY at asu.edu
Tue Nov 16 02:17:20 EST 2021


For some perspective, the history of awards and prizes is replete with similar stories. George Dantzig, the godfather of linear programming, did not share the 1975 Nobel Prize in Economics with Koopmans and Kantorovich. Even Koopmans and Kantorovich were surprised that Dantzig was not included. Here's a quote from Dantzig's profile: https://www.informs.org/Explore/History-of-O.R.-Excellence/Biographical-Profiles/Dantzig-George-B



"In 1975 Tjalling Koopmans and Leonid Kantorovich were awarded the Nobel Prize in Economics for their contribution in resource allocation and linear programming. Many professionals, Koopmans and Kantorovich included, were surprised at Dantzig's exclusion as an honoree. Most individuals familiar with the situation considered him to be just as worthy of the prize."

I read somewhere that Kantorovich was hesitant about accepting the prize and called Kenneth Arrow, who had won the Nobel in 1972. If I remember correctly, Arrow's advice to Kantorovich was something like this: "Just take it. You can't do anything about Dantzig not getting it." And, by the way, both Dantzig and Arrow were at Stanford at that time.



Here's a footnote from the same bio:



"(Unbeknownst to Dantzig and most other operations researchers in the West, a similar method was derived eight years prior by Soviet mathematician Leonid V. Kantorovich)"



Asim Roy

Arizona State University





-----Original Message-----
From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On Behalf Of Schmidhuber Juergen
Sent: Sunday, November 14, 2021 9:48 AM
To: connectionists at cs.cmu.edu
Subject: Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.



Dear all, thanks for your public comments, and many additional private ones!



So far nobody has challenged the accuracy of any of the statements in the draft report currently under massive open peer review:



https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html



Nevertheless, some of the recent comments will trigger a few minor revisions in the near future.



Here are a few answers to some of the public comments:



Randall O'Reilly wrote: "I vaguely remember someone making an interesting case a while back that it is the *last* person to invent something that gets all the credit." Indeed, as I wrote in Science (2011, reference [NASC3] in the report): "As they say: Columbus did not become famous because he was the first to discover America, but because he was the last." Sure, some people sometimes assign the "inventor" title to the person who should really be called the "popularizer," often precisely because the popularizer packaged the work of others in an easily digestible form. But that does not make the attribution correct, nor does it mean we shouldn't do our utmost to correct it; crediting popularizers over those who actually deserve the title is one of the most enduring problems in scientific history.



As Stephen José Hanson wrote: "Well, to popularize is not to invent. Many of Juergen's concerns could be solved with some scholarship, such that authors look some time before 2006 for other relevant references."



Randy also wrote: "Sometimes, it is not the basic equations etc that matter: it is the big picture vision." However, much the same vision was almost always present in the earlier work on neural nets; that work was simply ahead of its time. Only in recent years have we had the datasets and the computational power to realize those big-picture visions. I think you would agree that simply scaling something up isn't the same as inventing it. If it were, the name "Newton" would have little meaning to people nowadays.



Jonathan D. Cohen wrote: "...it is also worth noting that science is an *intrinsically social* endeavor, and therefore communication is a fundamental factor." Sure, but let's make sure that this cannot be used as a justification of plagiarism! See Sec. 5 of the report.



Generally speaking, if B plagiarizes A but inspires C, whom should C cite? The answer is clear.



Ponnuthurai Nagaratnam Suganthan wrote: "The name `deep learning' came about recently." Not so. See references in Sec. X of the report: the ancient term "deep learning" (explicitly mentioned by ACM) was actually first introduced to Machine Learning by Dechter (1986), and to NNs by Aizenberg et al (2000).



Tsvi Achler wrote: "Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation but there is plenty of evidence these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks in universities, no featuring in `premier' journals and no funding. [...] Lastly Feedforward methods are predominant in a large part because they have financial backing from large companies with advertising and clout like Google and the self-driving craze that never fully materialized." This is very misleading - see Sec. A, B, and C of the report, which are about recurrent nets with feedback, especially LSTM, heavily used by Google and others, on your smartphone since 2015. Recurrent NNs are general computers that can compute anything your laptop can compute, including any computable model with feedback "back to the inputs." My favorite proof from over 30 years ago: a little subnetwork can be used to build a NAND gate, and a big recurrent network of NAND gates can emulate the CPU of your laptop. (See also answers by Dan Levine, Gary Cottrell, and Juyang Weng.) However, as Asim Roy pointed out, this discussion deviates from the original topic of improper credit assignment. Please use another thread for this.
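The NAND-gate claim above is easy to verify for oneself. A minimal sketch (a single threshold unit with hand-picked weights; the particular weight and bias values are illustrative, not from any cited paper):

```python
# A single threshold neuron computing NAND.
# Weights and bias chosen by hand for illustration: output is
# step(1.5 - x1 - x2), which is 1 unless both inputs are 1.
def nand_unit(x1, x2, w1=-1.0, w2=-1.0, bias=1.5):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# NAND is functionally complete, so networks of such units can in
# principle emulate any Boolean circuit, e.g. the logic of a CPU.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand_unit(a, b))
```

Since NAND alone suffices to build all Boolean logic, a large enough network of such units (with recurrence supplying memory) can emulate conventional hardware.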



Randy also wrote: "Should Newton be cited instead of Rumelhart et al, for backprop, as Steve suggested? Seriously, most of the math powering today's models is just calculus and the chain rule." This is so misleading in several ways - see Sec. XII of the report: "Some claim that `backpropagation is just the chain rule of Leibniz (1676) & L'Hopital (1696).' No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970" by Seppo Linnainmaa. Of course, the person to cite is Linnainmaa.
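To make the distinction concrete, here is a toy reverse-mode sketch (my own illustration, not Linnainmaa's original formulation): one forward pass stores intermediates, then one backward pass reuses them to obtain the derivative, instead of re-deriving the chain rule symbolically or running one pass per parameter.

```python
import math

# Toy reverse-mode differentiation of f(x) = x * sin(x^2),
# illustrating the "efficient way of applying the chain rule":
# forward pass records intermediates, backward pass propagates
# sensitivities from the output back to the input.
def f_and_grad(x):
    # Forward pass: record intermediate values.
    a = x * x          # a = x^2
    b = math.sin(a)    # b = sin(a)
    y = b * x          # y = b * x
    # Backward pass: accumulate dy/d(node), output to input.
    dy_db = x                        # y = b * x
    dy_da = dy_db * math.cos(a)      # chain rule through sin
    dy_dx = dy_da * 2 * x + b        # via a = x^2, plus direct path
    return y, dy_dx
```

For a network with many parameters and one scalar loss, this backward sweep yields all partial derivatives in a single pass, which is what makes it more than "just the chain rule."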



Randy also wrote: "how little Einstein added to what was already established by Lorentz and others". Juyang already respectfully objected to this misleading statement.



I agree with what Anand Ramamoorthy wrote: "Setting aside broader aspects of the social quality of the scientific enterprise, let's take a look at a simpler thing; individual duty. Each scientist has a duty to science (as an intellectual discipline) and the scientific community, to uphold fundamental principles informing the conduct of science. Credit should be given wherever it is due - it is a matter of duty, not preference or `strategic value' or boosting someone because they're a great populariser. ... Crediting those who disseminate is fine and dandy, but should be for those precise contributions, AND the originators of an idea/method/body of work ought to be recognised - this is perhaps a bit difficult when the work is obscured by history, but not impossible. At any rate, if one has novel information of pertinence w.r.t original work, then the right action is crystal clear."



See also Sec. 5 of the report: "As emphasized earlier:[DLC][HIN] `The inventor of an important method should get credit for inventing it. They may not always be the one who popularizes it. Then the popularizer should get credit for popularizing it - but not for inventing it.' If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later, and correctly give credit in follow-up papers and presentations."



I also agree with what Zhaoping Li wrote: "I would find it hard to enter a scientific community if it is not scholarly. Each of us can do our bit to be scholarly, to set an example, if not a warning, to the next generation."



Randy also wrote: "Outside of a paper specifically on the history of a field, does it really make sense to "require" everyone to cite obscure old papers that you can't even get a PDF of on google scholar?" This sounds almost like a defense of plagiarism. That's what time stamps of patents and papers are for. A recurring point of the report is: the awardees did not cite the prior art - not even in later surveys written when the true origins of this work were well-known.



Here I fully agree with what Marina Meila wrote: "Since credit is a form of currency in academia, let's look at the `hard currency' rewards of invention. Who gets them? The first company to create a new product usually fails. However, the interesting thing is that society (by this I mean the society most of us work in) has found it necessary to counteract this, and we have patent laws to protect the rights of the inventors. The point is not whether patent laws are effective or not, it's the social norm they implement. That to protect invention one should pay attention to rewarding the original inventors, whether we get the `product' directly from them or not."



Jürgen









*************************



On 27 Oct 2021, at 10:52, Schmidhuber Juergen <juergen at idsia.ch> wrote:



Hi, fellow artificial neural network enthusiasts!



The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into the history of the field.



Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at juergen at idsia.ch:



https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html



The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning, at least as far as ACM's errors and the Turing Lecture are concerned.



I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history of our field is preserved for posterity.



Thank you all in advance for your help!



Jürgen Schmidhuber










