Connectionists: Annotated History of Modern AI and Deep Learning

Schmidhuber Juergen juergen at idsia.ch
Sun Feb 5 09:14:00 EST 2023


Dear Andreas, 

you wrote: "At this time internet did not exist and many discoveries were done in parallel without people knowing what other …"

The problem is: even much later, when the true history was well-known, the book [M69] was not corrected, and others promulgated a rather self-serving revisionist history of deep learning [S20][DL3][DL3a][T22] that simply ignored the deep learning of the 1960s. 

The deontology of science requires that if one "re-invents" something that was already published, and only becomes aware of this later, one must at least clarify the situation and correctly assign credit in all follow-up papers and presentations [DLC][T22]. It is nothing short of fraud to keep claiming to be the first to have invented something once you know this is not the case.

Juergen



> On 3. Feb 2023, at 13:51, Andrzej Wichert <andreas.wichert at tecnico.ulisboa.pt> wrote:
> 
> Dear Jurgen,
> 
> At this time internet did not exist and many discoveries were done in parallel without people knowing what other dis. 
> There was also some research published in Russian which, it seems, is now lost. But the truth is that we only see what
> defines our own time. Many researchers think that DL is a breakthrough, as people thought before about symbolic AI…
> Quite surely, in some years there will be a new wave, when something else becomes "in".
> "The definition of artificial intelligence leads to the paradox of a discipline whose principal purpose is its own definition.” from my book Principles of Quantum Artificial Intelligence...
> Like Serge Gainsbourg sang:
> Jusqu'à neuf, c'est OK, tu es "in" (Up to nine, it's OK, you're "in")
> Après quoi, tu es KO, tu es "out" (After that, you're KO, you're "out")
> And when something is out, we do not see it (symbolic AI) any more...
> 
> Best,
> 
> Andreas
> 
> --------------------------------------------------------------------------------------------------
> Prof. Auxiliar Andreas Wichert   
> 
> http://web.tecnico.ulisboa.pt/andreas.wichert/
> -
> https://www.amazon.com/author/andreaswichert
> 
> Instituto Superior Técnico - Universidade de Lisboa
> Campus IST-Taguspark 
> Avenida Professor Cavaco Silva                 Phone: +351  214233231
> 2744-016 Porto Salvo, Portugal
> 
>> On 3 Feb 2023, at 02:41, Schmidhuber Juergen <juergen at idsia.ch> wrote:
>> 
>> PS: the weirdest thing is that later Minsky & Papert published a famous book (1969) [M69] that cited neither Amari’s SGD-based deep learning (1967-68) nor the original layer-by-layer deep learning (1965) by Ivakhnenko & Lapa [DEEP1-2][DL2]. 
>> 
>> Minsky & Papert's book [M69] showed that shallow NNs without hidden layers are very limited. Duh! That’s exactly why people like Ivakhnenko & Lapa and Amari had earlier overcome this problem through _deep_ learning with many learning layers. 
>> 
>> Minsky & Papert apparently were unaware of this. Unfortunately, even later they failed to correct their book [T22]. 
>> 
>> Much later, others took this as an opportunity to promulgate a rather self-serving revisionist history of deep learning [S20][DL3][DL3a][T22] that simply ignored pre-Minsky deep learning.
>> 
>> However, as Elvis Presley put it, "Truth is like the sun. You can shut it out for a time, but it ain't goin' away." [T22]
>> 
>> Juergen
>> 
>> 
>> 
>>> On 26. Jan 2023, at 16:29, Schmidhuber Juergen <juergen at idsia.ch> wrote:
>>> 
>>> And in 1967-68, the same Shun-Ichi Amari trained multilayer perceptrons (MLPs) with many layers by stochastic gradient descent (SGD) in end-to-end fashion. See Sec. 7 of the survey: https://people.idsia.ch/~juergen/deep-learning-history.html#2nddl
>>> 
>>> Amari's implementation [GD2,GD2a] (with his student Saito) learned internal representations in a five-layer MLP with two modifiable layers, which was trained to classify non-linearly separable pattern classes. 
>>> 
>>> Back then compute was billions of times more expensive than today.   
>>> 
>>> To my knowledge, this was the first implementation of learning internal representations through SGD-based deep learning. 
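For readers who want a concrete picture of what such a setup looks like, here is a minimal modern sketch in Python/NumPy. It is not Amari's original implementation: the architecture (one hidden layer of 8 units), the learning rate, the loss, and the XOR task are all illustrative choices of mine. It only demonstrates the general technique the message describes: an MLP trained end-to-end by stochastic gradient descent, learning internal representations in order to classify a non-linearly separable problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic non-linearly separable pattern classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two modifiable (learned) weight layers, loosely echoing the
# "two modifiable layers" in the description above.
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(2000):
    # "Stochastic": update on one randomly chosen pattern at a time.
    for i in rng.permutation(len(X)):
        x, t = X[i:i + 1], y[i:i + 1]
        h = sigmoid(x @ W1 + b1)        # hidden (internal) representation
        out = sigmoid(h @ W2 + b2)
        # Gradients of the squared error via the chain rule.
        d_out = (out - t) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out[0]
        W1 -= lr * (x.T @ d_h)
        b1 -= lr * d_h[0]

# Binary class predictions for the four XOR patterns.
preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

A linear model (no hidden layer) cannot solve this task at all, which is exactly the limitation of shallow nets discussed elsewhere in this thread; the hidden layer's learned representation is what makes the problem separable.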
>>> 
>>> If anyone knows of an earlier one then please let me know :)
>>> 
>>> Jürgen 
>>> 
>>> 
>>>> On 25. Jan 2023, at 16:44, Schmidhuber Juergen <juergen at idsia.ch> wrote:
>>>> 
>>>> Some are not aware of this historic tidbit in Sec. 4 of the survey: half a century ago, Shun-Ichi Amari published a learning recurrent neural network (1972), which was later called the Hopfield network.
>>>> 
>>>> https://people.idsia.ch/~juergen/deep-learning-history.html#rnn
>>>> 
>>>> Jürgen
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> On 13. Jan 2023, at 11:13, Schmidhuber Juergen <juergen at idsia.ch> wrote:
>>>>> 
>>>>> Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): 
>>>>> 
>>>>> https://arxiv.org/abs/2212.11279
>>>>> 
>>>>> https://people.idsia.ch/~juergen/deep-learning-history.html
>>>>> 
>>>>> This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know at juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements.
>>>>> 
>>>>> Happy New Year!
>>>>> 
>>>>> Jürgen
>>>>> 
>> 
>> 
> 



