Connectionists: Annotated History of Modern AI and Deep Learning

Sean Manion stmanion at gmail.com
Wed Feb 1 11:08:36 EST 2023


Thank you all for this wonderful discussion and (annotated!) history
lesson. While my current project is focused primarily on the pre-AI history
timeframe of 1936(?)-56, I anticipate the next phase will benefit from
Juergen's work and many of the additional items mentioned here.

I do have a request for the group:

I am currently going through Alcibiades Malapi-Nelson's insightful *The
Nature of the Machine and the Collapse of Cybernetics* (2017). He
references a Macy meeting on May 13-14, 1942, focused on Hypnosis and
Conditioned Reflexes, where Arturo Rosenblueth was able to present
declassified aspects of Norbert Wiener and Julian Bigelow's anti-aircraft
work on their behalf (a precursor to the trio's 1943 paper).

The book's author notes that no written record exists of this 1942 Macy
meeting, which preceded the Macy meetings on Cybernetics by a few years. He
references Steve Heims's 1991 work on the topic and highlights some
details, e.g., that Warren McCulloch was at this earlier meeting, but no
original material about it seems to exist otherwise.

I am planning to go through Heims's 1991 book (his 1980 book on Wiener and
von Neumann is phenomenal). My question is whether anyone here has any
knowledge of, or suggestions for, where to find information on the May 1942
Macy meeting on Hypnosis and Conditioned Reflexes beyond that source.

If so, please let me know on the thread or by reaching out separately.

Thank you!

Sean

On Wed, Feb 1, 2023, 10:43 AM Schmidhuber Juergen <juergen at idsia.ch> wrote:

> Thanks, Thomas, but one correction:
>
> In 1967-68, Amari trained deep MLPs by stochastic gradient descent (SGD)
> [GD1], the method proposed in 1951 by Robbins & Monro [STO51-52], which
> also works for neurons with non-differentiable activation functions (
> https://people.idsia.ch/~juergen/deep-learning-history.html#2nddl).
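>
> For concreteness, a minimal Robbins-Monro-style SGD sketch in Python. It
> is an illustration only, not Amari's 1967 formulation; the toy linear
> model, target weights, and step-size schedule are all assumptions:
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     w = rng.normal(size=3)  # parameters of a toy linear model (assumed)
>
>     def grad(w, x, y):
>         # gradient of the squared error (y - w.x)^2 for a single sample
>         return -2.0 * (y - w @ x) * x
>
>     for t in range(1000):
>         x = rng.normal(size=3)               # draw one training example
>         y = x @ np.array([1.0, -2.0, 0.5])   # hypothetical target function
>         lr = 0.1 / (1.0 + 0.01 * t)          # decaying step size (Robbins-Monro)
>         w = w - lr * grad(w, x, y)           # step along the noisy gradient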
>
> This was not yet backpropagation, i.e., "reverse mode of automatic
> differentiation," which was published a few years later in 1970 by Seppo
> Linnainmaa [BP1,4,5]. In 1982, Paul Werbos proposed to use the method to
> train NNs [BP2], extending ideas in his 1974 thesis. In 1960, Henry J.
> Kelley already had a precursor of backpropagation in the field of control
> theory [BPA]. (
> https://people.idsia.ch/~juergen/deep-learning-history.html#backprop)
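>
> To make "reverse mode of automatic differentiation" concrete, a minimal
> scalar sketch in Python. It is illustrative only, not Linnainmaa's
> notation; the Var class and function names are assumptions:
>
>     class Var:
>         """A node in the computation graph: a value plus local gradients."""
>         def __init__(self, value, parents=()):
>             self.value = value
>             self.parents = parents  # pairs of (parent node, d(self)/d(parent))
>             self.grad = 0.0
>
>     def add(a, b):
>         return Var(a.value + b.value, ((a, 1.0), (b, 1.0)))
>
>     def mul(a, b):
>         return Var(a.value * b.value, ((a, b.value), (b, a.value)))
>
>     def backward(out):
>         # topologically order the graph, then sweep it once in reverse,
>         # accumulating adjoints via the chain rule
>         order, seen = [], set()
>         def visit(v):
>             if id(v) not in seen:
>                 seen.add(id(v))
>                 for p, _ in v.parents:
>                     visit(p)
>                 order.append(v)
>         visit(out)
>         out.grad = 1.0
>         for v in reversed(order):
>             for parent, local in v.parents:
>                 parent.grad += v.grad * local
>
>     x, y = Var(2.0), Var(3.0)
>     z = add(mul(x, x), mul(x, y))  # z = x^2 + x*y
>     backward(z)
>     print(x.grad, y.grad)          # 7.0 (= 2x + y) and 2.0 (= x)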
>
> Juergen
>
> PS: Richard, I cannot reasonably reply to your earlier comments (citations
> needed). Have you found anything in the survey (
> https://arxiv.org/abs/2212.11279) that is factually incorrect? As always,
> I'll be happy to correct it.
>
>
> > On 29. Jan 2023, at 16:26, Thomas Trappenberg <tt at cs.dal.ca> wrote:
> >
> > Dear All,
> >
> > I know the discussions sometimes get heated, but I want to thank
> > everyone for them. I meant to contribute earlier by pointing to an
> > early paper by Amari-sensei in which he used backprop without even a
> > detailed explanation. I always thought that for him it was trivial, as
> > it is just the chain rule. While Amari-sensei is so inspiring and has
> > given us so many more insights through information geometry, there is
> > also a huge role for people who popularize some ideas and bring the
> > rest of us commoners along.
> >
> > I specifically enjoyed comments on deep learning versus neurosymbolic
> > causal learning. I am so excited to see more awareness of possible
> > relations that might bring these fields closer together in the future.
> > What is your favorite venue for such discussions?
> >
> > Respectfully, Thomas Trappenberg
> >
> > On Sun, Jan 29, 2023, 8:49 a.m. Richard Loosemore <rloosemore at susaro.com>
> > wrote:
> >
> > Dear Imad,
> >
> > Fair comment, although I heard Juergen say much the same thing 14
> > years ago, at the AGI conference in 2009, so perhaps you can forgive
> > me for being a little weary of this tune...?
> >
> > More *substantively*, let me say that this field is such that many
> > ideas/algorithms/theories can be SEEN as variations on other
> > ideas/algorithms/theories, if you look at them from just the right
> > angle.
> >
> > If I may add a tongue-in-cheek comment.  I got into this field in 1981
> > (my first supervisor was John G. Taylor).  By the time the big
> > explosion happened in 1985-7, I was already thinking far beyond that
> > paradigm.  When thinking about what thesis to do, to satisfy my Warwick
> > Psych Dept overseers in 1989, I invented, on paper, many of the ideas
> > that later became Deep Learning.  But those struck me as tedious and
> > ultimately irrelevant, because I wanted to understand the whole system,
> > not make pattern association machines.  This is NOT a claim that I
> > invented anything first, but it IS meant to convey the idea that to
> > people like me who come up with novel ideas all the time, but try to
> > stay focussed on what they consider the genuine prize, all this
> > fighting for a place in the history books seems pathetic.
> >
> > There, that's my arrogant thought-for-the-day.  You can now safely
> > ignore me again.
> >
> > Richard Loosemore
> >
> > On 1/27/23 3:29 AM, Imad Khan wrote:
> >> Dear Richard,
> >> I find your comment a bit unwarranted. You could, however, follow
> >> Gary Marcus's way of putting forward critical thoughts. I do not
> >> necessarily agree with Gary, but I agree with his style. I am
> >> reproducing Gary's text below for your convenience. Juergen is an
> >> elder of AI and deserves respect (as we all do). I did go to your
> >> website, and you're correct to say that AI systems are complex
> >> systems and that an integrated approach is needed to avoid losing
> >> another 20 years!
> >>
> >> Gary's excerpt:
> >>
> >>
> >> Regards,
> >> Dr. M. Imad Khan
> >>
> >>
> >> On Thu, 26 Jan 2023 at 04:41, Richard Loosemore <rloosemore at susaro.com>
> >> wrote:
> >>
> >> Please, somebody reassure me that this isn't just another attempt to
> >> rewrite history so that Schmidhuber's lab invented almost everything.
> >>
> >> Because at first glance, that's what it looks like.
> >>
> >> Richard
> >>
> >
>
>
>

