Connectionists: Annotated History of Modern AI and Deep Learning

Schmidhuber Juergen juergen at idsia.ch
Wed Feb 1 05:21:19 EST 2023


Thanks, Thomas, but one correction:

In 1967-68, Amari trained deep MLPs by stochastic gradient descent (SGD) [GD1], the method proposed in 1951 by Robbins & Monro [STO51-52], which also works for neurons with non-differentiable activation functions (https://people.idsia.ch/~juergen/deep-learning-history.html#2nddl).

This was not yet backpropagation, i.e., the "reverse mode of automatic differentiation," which was published a few years later, in 1970, by Seppo Linnainmaa [BP1,4,5]. In 1982, Paul Werbos proposed using the method to train NNs [BP2], extending ideas in his 1974 thesis. In 1960, Henry J. Kelley already had a precursor of backpropagation in the field of control theory [BPA]. (https://people.idsia.ch/~juergen/deep-learning-history.html#backprop)
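For anyone who wants the distinction spelled out concretely, here is a rough sketch in Python/NumPy (the toy data and all names below are made up for illustration, not taken from any of the cited papers): SGD is the per-example update rule w <- w - lr * grad, while backpropagation (reverse-mode differentiation) is one particular way of obtaining that gradient in a multi-layer net by applying the chain rule backwards through the layers.

# Rough illustrative sketch only (NumPy, toy XOR data); not code from any cited paper.
# The forward/backward pass below is reverse-mode differentiation (backprop);
# the last two lines of sgd_step are the stochastic gradient descent update itself.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=8)        # hidden -> output weights
lr = 0.1                                  # learning rate

def sgd_step(x, y):
    """One SGD step on a single example (x, y), gradients via the chain rule."""
    global W1, W2
    # forward pass
    h = np.tanh(x @ W1)                   # hidden activations
    y_hat = h @ W2                        # scalar prediction
    err = y_hat - y                       # loss L = 0.5 * err**2
    # backward pass (reverse-mode chain rule, output layer first)
    dW2 = err * h                         # dL/dW2
    dh = err * W2 * (1.0 - h ** 2)        # dL/dh, through tanh
    dW1 = np.outer(x, dh)                 # dL/dW1
    # the SGD update proper
    W2 -= lr * dW2
    W1 -= lr * dW1
    return 0.5 * err ** 2

for _ in range(5000):                     # toy run: learn XOR
    x = rng.integers(0, 2, size=2).astype(float)
    sgd_step(x, float(int(x[0]) != int(x[1])))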

Juergen

PS: Richard, I cannot reasonably reply to your earlier comments (citations needed). Have you found anything in the survey (https://arxiv.org/abs/2212.11279) that is factually incorrect? As always, I'll be happy to correct it.


> On 29. Jan 2023, at 16:26, Thomas Trappenberg <tt at cs.dal.ca> wrote:
> 
> Dear All,
> 
> I know the discussions are sometimes getting heated, but I want to thank everyone for them. I meant to contribute earlier by pointing to an early paper by Amari-sensei where he used backprop without even a detailed explanation. I always thought that for him it was trivial, as it is just the chain rule. While Amari-sensei is so inspiring and has given us so many more insights through information geometry, there is also a huge role for the people who popularize ideas and bring the rest of us commoners along.
> 
> I specifically enjoyed comments on deep learning versus neurosymbolic causal learning. I am so excited to see more awareness of possible relations that might bring these fields closer together in the future. What is your favorite venue for such discussions?
> 
> Respectfully, Thomas Trappenberg 
> 
> On Sun, Jan 29, 2023, 8:49 a.m. Richard Loosemore <rloosemore at susaro.com> wrote:
> 
> Dear Imad,
> 
> Fair comment, although I heard Juergen say much the same thing 14 years ago, at the AGI conference in 2009, so perhaps you can forgive me for being a little weary of this tune...?
> 
> More *substantively* let me say that this field is such that many ideas/algorithms/theories can be SEEN as variations on other ideas/algorithms/theories, if you look at them from just the right angle.
> 
> If I may add a tongue-in-cheek comment: I got into this field in 1981 (my first supervisor was John G. Taylor). By the time the big explosion happened in 1985-87, I was already thinking far beyond that paradigm. When deciding what thesis to do, to satisfy my Warwick Psych Dept overseers in 1989, I invented, on paper, many of the ideas that later became Deep Learning. But those struck me as tedious and ultimately irrelevant, because I wanted to understand the whole system, not make pattern association machines. This is NOT a claim that I invented anything first, but it IS meant to convey that to people like me, who come up with novel ideas all the time but try to stay focussed on what they consider the genuine prize, all this fighting for a place in the history books seems pathetic.
> 
> There, that's my arrogant thought-for-the day.  You can now safely ignore me again.
> 
> Richard Loosemore
> 
> On 1/27/23 3:29 AM, Imad Khan wrote:
>> Dear Richard,
>> I find your comment a bit unwarranted. You could, however, follow Gary Marcus's way of putting forward critical thoughts. I do not necessarily agree with Gary, but I agree with his style. I am reproducing Gary's text below for your convenience. Juergen is an elder of AI and deserves respect (like all of us do). I did go to your website, and you are correct that AI systems are complex systems and that an integrated approach is needed to save another 20 years!
>> 
>> Gary's excerpt:
>> 
>>  
>> Regards,
>> Dr. M. Imad Khan
>> 
>> 
>> On Thu, 26 Jan 2023 at 04:41, Richard Loosemore <rloosemore at susaro.com> wrote:
>> 
>> Please, somebody reassure me that this isn't just another attempt to 
>> rewrite history so that Schmidhuber's lab invented almost everything.
>> 
>> Because at first glance, that's what it looks like.
>> 
>> Richard
>> 
> 
> <image.png>



