Connectionists: Annotated History of Modern AI and Deep Learning: Early binary, linear, and continuous-nonlinear neural networks, some of which included learning

Grossberg, Stephen steve at bu.edu
Wed Jan 25 13:13:38 EST 2023


Dear Juergen,

Thanks for mentioning the Ising model!

As you know, it is a binary model, with just two states, and it does not learn.
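
To make that concrete, here is a minimal sketch in Python of such a network (the couplings and external field below are illustrative values, not from any historical paper): each unit takes just two states, and asynchronous updates can only lower the network energy, so the state settles into an equilibrium. The weights J stay fixed throughout; nothing is learned.

import numpy as np

rng = np.random.default_rng(0)

n = 8                                # number of two-state units (spins)
J = rng.normal(size=(n, n))          # illustrative couplings
J = (J + J.T) / 2                    # symmetric weights
np.fill_diagonal(J, 0.0)             # no self-coupling
h = rng.normal(size=n)               # external field ("input conditions")
s = rng.choice([-1, 1], size=n)      # random initial binary state

def energy(s):
    return -0.5 * s @ J @ s - h @ s

# Asynchronous deterministic updates: each accepted flip lowers the
# energy, so the network settles into a local energy minimum.
for sweep in range(20):
    changed = False
    for i in range(n):
        s_new = 1 if J[i] @ s + h[i] >= 0 else -1
        if s_new != s[i]:
            s[i] = s_new
            changed = True
    if not changed:                  # equilibrium reached
        break

print("equilibrium state:", s, " energy:", energy(s))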


My Magnum Opus
https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552

reviews some of the early binary neural network models, such as the McCulloch-Pitts, Caianiello, and Rosenblatt models, starting on p. 64. It then reviews early linear models that included learning, such as the Adaline and Madaline models of Bernie Widrow and the Brain-State-in-a-Box model of Jim Anderson, before turning to continuous and nonlinear models of various kinds, including models that are still used today.
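
As a concrete illustration of what "learning" meant in those early linear models, here is a minimal sketch of a least-mean-squares (delta) rule of the kind Adaline used; the data, dimensions, and learning rate below are illustrative, not taken from Widrow's papers.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: bipolar inputs, targets from a hidden linear rule.
X = rng.choice([-1.0, 1.0], size=(100, 4))
w_true = np.array([1.0, -2.0, 0.4, 1.7])
y = np.sign(X @ w_true)              # targets in {-1, +1}

w = np.zeros(4)                      # weights to be learned
eta = 0.01                           # illustrative learning rate

# LMS / delta rule: adjust the weights in proportion to the error of
# the *linear* output (before thresholding), as Adaline did.
for epoch in range(50):
    for x, t in zip(X, y):
        err = t - x @ w
        w += eta * err * x

print("learned weights:", w)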

Best,

Steve

________________________________
From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Schmidhuber Juergen <juergen at idsia.ch>
Sent: Wednesday, January 25, 2023 11:40 AM
To: connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning: Early recurrent neural networks for serial verbal learning and associative pattern learning

Dear Steve,

Thanks - I hope you noticed that the survey mentions your 1969 work!

And of course it also mentions the origin of this whole recurrent network business: the Ising model or Lenz-Ising model, introduced a century ago. See Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture

https://people.idsia.ch/~juergen/deep-learning-history.html#rnn

"The first non-learning RNN architecture (the Ising model or Lenz-Ising model) was introduced and analyzed by physicists Ernst Ising and Wilhelm Lenz in the 1920s [L20][I24,I25][K41][W45][T22]. It settles into an equilibrium state in response to input conditions, and is the foundation of the first learning RNNs ...”

Jürgen


> On 25. Jan 2023, at 18:42, Grossberg, Stephen <steve at bu.edu> wrote:
>
> Dear Juergen and Connectionists colleagues,
>
> In his attached email below, Juergen mentioned a 1972 article of my friend and colleague, Shun-Ichi Amari, about recurrent neural networks that learn.
>
> Here are a couple of my own early articles from 1969 and 1971 about such networks. I introduced them to explain paradoxical data about serial verbal learning, notably the bowed serial position effect (items in the middle of a list are learned more slowly, and with more errors, than items at either end):
>
> Grossberg, S. (1969). On the serial learning of lists. Mathematical Biosciences, 4, 201-253.
> https://sites.bu.edu/steveg/files/2016/06/Gro1969MBLists.pdf
>
> Grossberg, S. and Pepe, J. (1971). Spiking threshold and overarousal effects in serial learning. Journal of Statistical Physics, 3, 95-125.
> https://sites.bu.edu/steveg/files/2016/06/GroPepe1971JoSP.pdf
>
> Juergen also mentioned that Shun-Ichi's work was a precursor of what some people call the Hopfield model, whose most cited articles were published in 1982 and 1984.
>
> I actually began publishing articles on this topic in the 1960s. Here are two of them:
>
> Grossberg, S. (1969). On learning and energy-entropy dependence in recurrent and nonrecurrent signed networks. Journal of Statistical Physics, 1, 319-350.
> https://sites.bu.edu/steveg/files/2016/06/Gro1969JourStatPhy.pdf
>
> Grossberg, S. (1971). Pavlovian pattern learning by nonlinear neural networks. Proceedings of the National Academy of Sciences, 68, 828-831.
> https://sites.bu.edu/steveg/files/2016/06/Gro1971ProNatAcaSci.pdf
>
> An early use of Lyapunov functions to prove global limit theorems in associative recurrent neural networks is found in the following 1980 PNAS article:
>
> Grossberg, S. (1980). Biological competition: Decision rules, pattern formation, and oscillations. Proceedings of the National Academy of Sciences, 77, 2338-2342.
> https://sites.bu.edu/steveg/files/2016/06/Gro1980PNAS.pdf
>
> Subsequent results culminated in my 1983 article with Michael Cohen, which was in press when the Hopfield (1982) article was published:
>
> Cohen, M.A. and Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 815-826.
> https://sites.bu.edu/steveg/files/2016/06/CohGro1983IEEE.pdf
>
> Our article introduced a general class of neural networks for associative spatial pattern learning, which included the Additive and Shunting neural networks that I had earlier introduced. It also introduced a Lyapunov function for all of them.
>
> This article proved global limit theorems about all these systems using that Lyapunov function.
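>
> In the notation of that article (a sketch, with the technical hypotheses omitted; see the paper for the precise conditions), the general class has the form
>
> dx_i/dt = a_i(x_i) [ b_i(x_i) - \sum_j c_{ij} d_j(x_j) ],   with symmetric coefficients c_{ij} = c_{ji},
>
> and admits the Lyapunov function
>
> V(x) = - \sum_i \int_0^{x_i} b_i(s) d_i'(s) ds + (1/2) \sum_{j,k} c_{jk} d_j(x_j) d_k(x_k),
>
> which satisfies dV/dt <= 0 along trajectories whenever each a_i is nonnegative and each signal function d_i is monotone nondecreasing. Choosing a_i constant and b_i(x_i) linear in x_i gives the Additive model.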
>
> The Hopfield article describes the special case of the Additive model.
>
> His article proved no theorems.
>
> Best to all,
>
> Steve
>
> Stephen Grossberg
> http://en.wikipedia.org/wiki/Stephen_Grossberg
> http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en
> https://youtu.be/9n5AnvFur7I
> https://www.youtube.com/watch?v=_hBye6JQCh4
> https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552
>
> Wang Professor of Cognitive and Neural Systems
> Director, Center for Adaptive Systems
> Professor Emeritus of Mathematics & Statistics,
>        Psychological & Brain Sciences, and Biomedical Engineering
> Boston University
> sites.bu.edu/steveg
> steve at bu.edu
>
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Schmidhuber Juergen <juergen at idsia.ch>
> Sent: Wednesday, January 25, 2023 8:44 AM
> To: connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
> Subject: Re: Connectionists: Annotated History of Modern AI and Deep Learning
>
> Some are not aware of this historic tidbit in Sec. 4 of the survey: half a century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) which was later called the Hopfield network.
>
> https://people.idsia.ch/~juergen/deep-learning-history.html#rnn
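>
> In modern terms, such a network learns patterns with a Hebbian outer-product rule and recalls them by settling, as in this minimal Python sketch (the sizes and corruption level are illustrative, and the notation is not Amari's):
>
> import numpy as np
>
> rng = np.random.default_rng(2)
>
> n = 64
> patterns = rng.choice([-1, 1], size=(3, n))   # patterns to be learned
>
> # Hebbian outer-product learning of the recurrent weights.
> W = sum(np.outer(p, p) for p in patterns) / n
> np.fill_diagonal(W, 0.0)
>
> # Recall: start from a corrupted pattern and settle asynchronously.
> s = patterns[0].copy()
> flip = rng.choice(n, size=10, replace=False)
> s[flip] *= -1                                 # corrupt 10 of 64 bits
>
> for sweep in range(10):
>     for i in rng.permutation(n):
>         s[i] = 1 if W[i] @ s >= 0 else -1
>
> print("bits recovered:", int(np.sum(s == patterns[0])), "of", n)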
>
> Jürgen
>
> > On 13. Jan 2023, at 11:13, Schmidhuber Juergen <juergen at idsia.ch> wrote:
> >
> > Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey):
> >
> > https://arxiv.org/abs/2212.11279
> >
> > https://people.idsia.ch/~juergen/deep-learning-history.html
> >
> > The survey has already been reviewed by several deep learning pioneers and other experts. Nevertheless, let me know at juergen at idsia.ch if you can spot any remaining errors or have suggestions for improvements.
> >
> > Happy New Year!
> >
> > Jürgen
> >

