<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Tsvi,</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
While deep learning and feedforward networks enjoy outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning.  A treatment of a range of neural network approaches, going from simpler to more complex cognitive functions, can be found in my textbook <i>Introduction to Neural and Cognitive Modeling</i> (3rd edition, Routledge, 2019).  Also, Steve Grossberg's book
<i>Conscious Mind, Resonant Brain</i> (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis.</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Best,</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Dan Levine</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Connectionists <connectionists-bounces@mailman.srv.cs.cmu.edu> on behalf of Tsvi Achler <achler@gmail.com><br>
<b>Sent:</b> Saturday, October 30, 2021 3:13 AM<br>
<b>To:</b> Schmidhuber Juergen <juergen@idsia.ch><br>
<b>Cc:</b> connectionists@cs.cmu.edu <connectionists@cs.cmu.edu><br>
<b>Subject:</b> Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.</font>
<div> </div>
</div>
<div>
<div dir="ltr">Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia and then especially focusing on the connectionist community.
<div><br>
<div>In general, analyses of impact factors and related metrics show that the most important progress gets silenced until the mainstream picks it up (<a href="https://www.nber.org/system/files/working_papers/w22180/w22180.pdf" target="_blank">Impact Factors in novel research, www.nber.org/system/files/working_papers/w22180/w22180.pdf</a>), and often this can take a generation (<a href="https://www.nber.org/digest/mar16/does-science-advance-one-funeral-time" target="_blank">https://www.nber.org/digest/mar16/does-science-advance-one-funeral-time</a>).</div>
<div><br>
</div>
<div>The connectionist field is stuck on feedforward networks and their variants, such as networks with inhibition of competitors (e.g. lateral inhibition), or variants sometimes labeled recurrent networks for learning over time, where the feedforward network is simply rewound (unrolled) in time, as the sketch below illustrates.</div>
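<div><br>
</div>
<div>To make the "rewound in time" point concrete, here is a minimal sketch (my own illustrative toy, with arbitrary sizes and random weights, not anyone's published model): a one-layer recurrent network applied for three time steps computes exactly the same function as a three-layer feedforward network whose layers share weights, which is why backpropagation through time can treat it as feedforward.</div>
<pre>
import numpy as np

# Illustrative toy: a recurrent net "rewound" (unrolled) in time.
# Sizes and weights are arbitrary assumptions for the example.
rng = np.random.default_rng(0)
W_in, W_rec = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))

def recurrent(xs):
    h = np.zeros(4)
    for x in xs:                      # loop over time steps
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

def unrolled(xs):
    # The same computation written as a feedforward stack of three
    # layers that share the weights (W_in, W_rec); backpropagation
    # through time differentiates exactly this feedforward graph.
    h0 = np.zeros(4)
    h1 = np.tanh(W_in @ xs[0] + W_rec @ h0)  # "layer 1"
    h2 = np.tanh(W_in @ xs[1] + W_rec @ h1)  # "layer 2"
    h3 = np.tanh(W_in @ xs[2] + W_rec @ h2)  # "layer 3"
    return h3

xs = rng.normal(size=(3, 3))                 # three time steps of input
assert np.allclose(recurrent(xs), unrolled(xs))
</pre>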
<div><br>
</div>
<div>This stasis has specifically accompanied the popularity of deep learning.  Deep learning is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal, and it is not connectionist if that rehearsal is not itself implemented with neurons (see the video link below for further clarification; a sketch of what rehearsal means follows this paragraph).</div>
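<div><br>
</div>
<div>By "rehearsal" I mean the replay of stored past examples during training: to teach a backpropagation network a new class without erasing the old ones (catastrophic forgetting), old examples must be interleaved with the new.  A minimal sketch, where model, train_step, old_data, and new_data are hypothetical placeholders, not any particular library's API:</div>
<pre>
import random

# Minimal sketch of rehearsal (replay). The arguments are hypothetical
# placeholders; any gradient-trained classifier fits the pattern.
def train_with_rehearsal(model, new_data, old_data, steps, train_step):
    replay_buffer = list(old_data)           # stored past examples
    for _ in range(steps):
        x, y = random.choice(new_data)       # one new-class example...
        train_step(model, x, y)
        x, y = random.choice(replay_buffer)  # ...interleaved with an old
        train_step(model, x, y)              # one, or old classes degrade
</pre>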
<div><br>
</div>
<div>Models with true feedback (e.g. connections back to their own inputs) cannot learn by backpropagation, yet there is plenty of evidence that these types of connections exist in the brain and are used during recognition.  Thus they get ignored: no talks at universities, no features in "premier" journals, and no funding.  A generic sketch of such feedback appears below.</div>
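<div><br>
</div>
<div>As a generic illustration of recognition with feedback to the inputs (my own toy scheme with an arbitrarily chosen binary connectivity matrix, in the spirit of such models but not any specific published one): the current output estimate is fed back to measure how well it explains the inputs, and recognition is the fixed point of that loop rather than a single feedforward pass.</div>
<pre>
import numpy as np

# Toy connectivity, chosen arbitrarily for the example:
# output 0 uses input features {0, 1}; output 1 uses features {1, 2}.
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

def recognize(x, n_iter=50, eps=1e-9):
    y = np.ones(W.shape[0]) / W.shape[0]     # uniform initial guess
    for _ in range(n_iter):
        expectation = W.T @ y                # feedback onto the inputs
        error = x / (expectation + eps)      # unexplained input activity
        y = y * (W @ error) / W.sum(axis=1)  # reweight the outputs
    return y

print(recognize(np.array([1.0, 1.0, 0.0])))  # converges near [1, 0]
</pre>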
<div><br>
</div>
<div>But they are important and may eliminate the need for the rehearsal that feedforward methods require.  Thus they may be essential for moving connectionism forward.</div>
<div><br>
</div>
<div>If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks.</div>
<div><br>
</div>
<div>Video: <a href="https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2" target="_blank">https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2</a></div>
<div><br>
</div>
<div>Sincerely,</div>
<div>Tsvi Achler</div>
<div><br>
</div>
</div>
</div>
<br>
<div class="x_gmail_quote">
<div dir="ltr" class="x_gmail_attr">On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen <<a href="mailto:juergen@idsia.ch">juergen@idsia.ch</a>> wrote:<br>
</div>
<blockquote class="x_gmail_quote" style="margin:0px 0px 0px 0.8ex; border-left:1px solid rgb(204,204,204); padding-left:1ex">
Hi, fellow artificial neural network enthusiasts!<br>
<br>
The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into
 the history of the field.<br>
<br>
Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many
 experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at
<a href="mailto:juergen@idsia.ch" target="_blank">juergen@idsia.ch</a>:<br>
<br>
<a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpeople.idsia.ch%2F~juergen%2Fscientific-integrity-turing-award-deep-learning.html&data=04%7C01%7Clevine%40uta.edu%7Cb1a267e3b6a64ada666208d99ca37f6d%7C5cdc5b43d7be4caa8173729e3b0a62d9%7C1%7C0%7C637713048300142030%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=mW3lH7SqKg4EuJfDwKcC2VhwEloC3ndh6kI5gfQ2Ofw%3D&reserved=0" originalsrc="https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html" shash="Whn7LxjGOne6e30d5PTkxT9jZLa1Z/zq5C1FGT8tQtKc3oW1EvP3F1boASerACQgFdYqrqlQtTL17wq8yb+MYK/RvZZwIT/aVQp47WdSlgacZHaKbKk5V3UqO/AfSA7FOTCEXT8XErjQ3fILBmwi2cox2+BbkiWwr84bBezxL3s=" rel="noreferrer" target="_blank">https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html</a><br>
<br>
The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning,
 at least as far as ACM's errors and the Turing Lecture are concerned.<br>
<br>
I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history
 of our field is preserved for posterity.<br>
<br>
Thank you all in advance for your help! <br>
<br>
Jürgen Schmidhuber<br>
<br>
</blockquote>
</div>
</div>
</body>
</html>