<div dir="ltr"><br><div>I received several messages along the lines "you must not know what you are talking about but this is X and you should read book Y", without the commenters reading the original work on Regulatory Feedback. </div><div>More specifically the X & Y's of the responses are:</div><div>Steve: X= Adaptive Resonance, Y= Steve's book</div><div>Gary: X= Trainable via Backprop, Y= Randy's book</div><div><br></div><div><div>First I want to point out the more novel and counterintuitive an idea, less people that synergize with it, less support from advisor, less support from academic pedigree, despite academic departments and grants stating exactly the opposite. So how does this happen?</div><div>Everyone in self selected committees promoting themselves, dismissive of others and decisions and advice becomes political. The more counterintuitive the less support.</div></div><div><br></div><div>This is a counterintuitive model, where during recognition input information goes to output neurons then feed-back and partially modifies the same inputs that are then reprocessed by the same outputs continuously until neuron activations settle.</div><div>This mechanism does not describe learning or learning through time, it occurs during recognition and does not change weights.</div><div><br></div><div>I really urge reading the original article and demonstration videos before making comments: Achler 2014 "Symbolic Networks for Cognitive Capacities" BICA, <a href="https://www.academia.edu/8357758/Symbolic_neural_networks_for_cognitive_capacities">https://www.academia.edu/8357758/Symbolic_neural_networks_for_cognitive_capacities</a> </div><div>In-depth updated video: <a href="https://www.youtube.com/watch?v=9gTJorBeLi8&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=3">https://www.youtube.com/watch?v=9gTJorBeLi8&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=3</a></div><div>It is not that I am against healthy criticisms and discourse, but I am against dismissiveness without 
looking into the details.</div><div>Moreover, I would be happy to be invited to give a talk at your institutions and go over the details within your communities.<br></div><div><br></div><div>I am disappointed with both Gary and Steve, because I met both of you in the past and discussed the model. <br></div><div><br></div><div>In fact, at a paid conference led by Steve, I was relegated to a few minutes' introduction for this counterintuitive model, because it was assumed to be "Adaptive Resonance" (just like the last message) and thus not to need more time. This paucity of opportunity to dive into the details, and this quick dismissiveness, are a huge part of the problem and contribute to the inhibition of novel ideas, as indicated by the two articles about academia I cited.</div><div><br></div><div>Since I am no longer funded and do not have an academic budget, I am no longer presenting at paid conferences where this work will be dismissed, relegated to a dark corner, and told to defer to the invited speaker, or publishing in paid journals with low impact factors. Nor will I pay for books by those who promote paid books before reading my work.</div><div><br></div><div>No matter how successfully one side or another pushes its narrative, this does not change how the brain works. </div><div><br></div><div>I hope the community can realize these problems. I am happy to come give invited talks, go into a deep dive, and have the conversations that academics like to project outwardly that they have.
</div><div><br></div><div>Sincerely,</div><div>-Tsvi Achler MD/PhD (I put my degrees here in hopes I won't be pointed to any more beginners' books)</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Nov 1, 2021 at 10:59 AM <a href="mailto:gary@ucsd.edu">gary@ucsd.edu</a> <<a href="mailto:gary@eng.ucsd.edu">gary@eng.ucsd.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_default" style="font-family:"times new roman",serif;font-size:large">Tsvi - While I think <a href="https://www.amazon.com/dp/0262650541/" target="_blank">Randy and Yuko's book </a>is actually somewhat better than the online version (and buying choices on amazon start at $9.99), there <b>is</b> <a href="https://compcogneuro.org/" target="_blank">an online version.</a> </div><div class="gmail_default" style="font-family:"times new roman",serif;font-size:large">Randy & Yuko's models take into account feedback and inhibition. </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler <<a href="mailto:achler@gmail.com" target="_blank">achler@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Daniel,<div><br></div><div>Does your book include a discussion of Regulatory or Inhibitory Feedback, published in several low-impact journals between 2008 and 2014 (and in videos subsequently)?</div><div>These are networks whose primary computation is inhibition back to the inputs that activated them; they may be very counterintuitive given today's trends.
You can almost think of them as the opposite of Hopfield networks.</div><div><br></div><div>I would love to check inside the book, but I don't have an academic budget that allows me access to it, and that is a huge part of the problem with how information is shared and funding is allocated. I could not get access to any of the text or citations, especially Chapter 4: "Competition, Lateral Inhibition, and Short-Term Memory", to weigh in.</div><div><br></div><div>I wish the best circulation for your book, but even if the Regulatory Feedback Model is in the book, that does not change the fundamental problem if the book is not readily available. </div><div><br></div><div>The same goes for Steve Grossberg's book: I cannot easily look inside. With regard to Adaptive Resonance, I don't subscribe to lateral inhibition as a predominant mechanism, but I do believe a function such as vigilance is very important during recognition, and Adaptive Resonance is one of very few models that have it. The Regulatory Feedback model I have developed (and Michael Spratling studies a similar model as well) is built primarily from the vigilance type of connections; it allows multiple neurons to be evaluated at the same time, and continuously during recognition, in order to determine which neurons (singly or in combination) best match the inputs, without lateral inhibition.</div><div><br></div><div>Unfortunately, within conferences and talks dominated by the Adaptive Resonance crowd, I have experienced the familiar dismissiveness and did not have an opportunity to give a proper talk.
This goes back to the larger issue of academic politics based on small self-selected committees, the same issues that exist with the feedforward crowd and pretty much all of academia.</div><div><br></div><div>Today's information-age algorithms, such as Google's, can determine the relevance of information and ways to display it, but the hegemony of the journal system and the small-committee system of academia, developed in the Middle Ages (and their mutual synergies), blocks the use of more modern methods in research. Thus we are stuck with this problem, which especially affects those who are trying to introduce something new and counterintuitive; hence the results described in the two National Bureau of Economic Research articles I cited in my previous message.</div><div><br></div><div><span style="color:rgb(0,0,0)">Thomas, I am happy to have more discussions and/or start a different thread.</span><br></div><div><br></div><div>Sincerely,</div><div>Tsvi Achler MD/PhD</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S <<a href="mailto:levine@uta.edu" target="_blank">levine@uta.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Tsvi,</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
While deep learning and feedforward networks have an outsize popularity, there are plenty of published sources that cover a much wider variety of networks, many of them more biologically based than deep learning. A treatment of a range of neural network approaches,
going from simpler to more complex cognitive functions, is found in my textbook <i>
Introduction to Neural and Cognitive Modeling</i> (3rd edition, Routledge, 2019). Also Steve Grossberg's book
<i>Conscious Mind, Resonant Brain</i> (Oxford, 2021) emphasizes a variety of architectures with a strong biological basis.</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Best,</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Dan Levine</div>
<div id="gmail-m_-7809501330019106925gmail-m_8996447038276730094gmail-m_-2305817410909496922gmail-m_7665975300539281535appendonsend"></div>
<hr style="display:inline-block;width:98%">
<div id="gmail-m_-7809501330019106925gmail-m_8996447038276730094gmail-m_-2305817410909496922gmail-m_7665975300539281535divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>> on behalf of Tsvi Achler <<a href="mailto:achler@gmail.com" target="_blank">achler@gmail.com</a>><br>
<b>Sent:</b> Saturday, October 30, 2021 3:13 AM<br>
<b>To:</b> Schmidhuber Juergen <<a href="mailto:juergen@idsia.ch" target="_blank">juergen@idsia.ch</a>><br>
<b>Cc:</b> <a href="mailto:connectionists@cs.cmu.edu" target="_blank">connectionists@cs.cmu.edu</a> <<a href="mailto:connectionists@cs.cmu.edu" target="_blank">connectionists@cs.cmu.edu</a>><br>
<b>Subject:</b> Re: Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.</font>
<div> </div>
</div>
<div>
<div dir="ltr">Since the title of the thread is Scientific Integrity, I want to point out some issues about trends in academia and then especially focusing on the connectionist community.
<div><br>
<div>In general, analyses of impact factors etc. show that the most important progress gets silenced until the mainstream picks it up <a href="https://www.nber.org/system/files/working_papers/w22180/w22180.pdf" rel="nofollow noopener" role="link" target="_blank">Impact Factors in novel research www.nber.org/.../working_papers/w22180/w22180.pdf</a>, and often this may take a generation <a href="https://www.nber.org/digest/mar16/does-science-advance-one-funeral-time" rel="nofollow noopener" role="link" target="_blank">https://www.nber.org/.../does-science-advance-one-funeral...</a>.</div>
<div><br>
</div>
<div>The connectionist field is stuck on feedforward networks and variants, such as those with inhibition of competitors (e.g. lateral inhibition), or other variants that are sometimes labeled recurrent networks for learning through time, where the feedforward networks can be rewound in time.</div>
<div><br>
</div>
<div>This stasis is occurring specifically with the popularity of deep learning. Deep learning is often portrayed as neurally plausible connectionism, but it requires an implausible amount of rehearsal and is not connectionist if this rehearsal is not implemented with neurons (see the video link for further clarification).</div>
<div><br>
</div>
<div>Models which have true feedback (e.g. back to their own inputs) cannot learn by backpropagation, but there is plenty of evidence that these types of connections exist in the brain and are used during recognition. Thus they get ignored: no talks at universities, no featuring in "premier" journals, and no funding. </div>
<div><br>
</div>
<div>But they are important and may negate the need for the rehearsal that feedforward methods require. Thus they may be essential for moving connectionism forward.</div>
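To make the idea concrete, here is a minimal sketch of this kind of recognition loop, in which output activations feed back and divisively inhibit the very inputs that drive them, iterated until activations settle. This is a schematic illustration under simplifying assumptions (fixed binary weights, a purely divisive feedback rule, placeholder names), not the exact published equations:

```python
import numpy as np

def regulatory_feedback(x, W, steps=50, eps=1e-9):
    """Schematic regulatory-feedback recognition loop (illustrative only).

    x : input activations, shape (n_inputs,)
    W : fixed binary connections, shape (n_outputs, n_inputs);
        W[a, i] = 1 if output a uses input i.
    No weights change here: only activations settle during recognition.
    """
    y = np.ones(W.shape[0])        # start with all outputs equally active
    n = W.sum(axis=1)              # fan-in (number of inputs) per output
    for _ in range(steps):
        f = W.T @ y                # feedback onto each input from its outputs
        q = x / (f + eps)          # inputs divisively inhibited by feedback
        y = (y / n) * (W @ q)      # outputs re-integrate their adjusted inputs
    return y
```

With overlapping patterns this loop settles by explaining away: for example, with W = [[1,0],[1,1]] and x = [1,1], the first output decays toward 0 because the second output alone accounts for both inputs. All of this happens during recognition, with no weight updates, which is the point.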
<div><br>
</div>
<div>If the community is truly dedicated to brain-motivated algorithms, I recommend giving more time to networks other than feedforward networks.</div>
<div><br>
</div>
<div>Video: <a href="https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2" target="_blank">https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2</a></div>
<div><br>
</div>
<div>Sincerely,</div>
<div>Tsvi Achler</div>
<div><br>
</div>
<div><br>
</div>
</div>
</div>
<br>
<div>
<div dir="ltr">On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen <<a href="mailto:juergen@idsia.ch" target="_blank">juergen@idsia.ch</a>> wrote:<br>
</div>
<blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Hi, fellow artificial neural network enthusiasts!<br>
<br>
The connectionists mailing list is perhaps the oldest mailing list on ANNs, and many neural net pioneers are still subscribed to it. I am hoping that some of them - as well as their contemporaries - might be able to provide additional valuable insights into
the history of the field.<br>
<br>
Following the great success of massive open online peer review (MOOR) for my 2015 survey of deep learning (now the most cited article ever published in the journal Neural Networks), I've decided to put forward another piece for MOOR. I want to thank the many
experts who have already provided me with comments on it. Please send additional relevant references and suggestions for improvements for the following draft directly to me at
<a href="mailto:juergen@idsia.ch" target="_blank">juergen@idsia.ch</a>:<br>
<br>
<a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpeople.idsia.ch%2F~juergen%2Fscientific-integrity-turing-award-deep-learning.html&data=04%7C01%7Clevine%40uta.edu%7Cb1a267e3b6a64ada666208d99ca37f6d%7C5cdc5b43d7be4caa8173729e3b0a62d9%7C1%7C0%7C637713048300142030%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=mW3lH7SqKg4EuJfDwKcC2VhwEloC3ndh6kI5gfQ2Ofw%3D&reserved=0" rel="noreferrer" target="_blank">https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html</a><br>
<br>
The above is a point-for-point critique of factual errors in ACM's justification of the ACM A. M. Turing Award for deep learning and a critique of the Turing Lecture published by ACM in July 2021. This work can also be seen as a short history of deep learning,
at least as far as ACM's errors and the Turing Lecture are concerned.<br>
<br>
I know that some view this as a controversial topic. However, it is the very nature of science to resolve controversies through facts. Credit assignment is as core to scientific history as it is to machine learning. My aim is to ensure that the true history
of our field is preserved for posterity.<br>
<br>
Thank you all in advance for your help! <br>
<br>
Jürgen Schmidhuber<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
</blockquote>
</div>
</div>
</div>
</blockquote></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Gary Cottrell 858-534-6640 FAX: 858-534-7029<br></div><div>Computer Science and Engineering 0404<br>IF USING FEDEX INCLUDE THE FOLLOWING LINE: <br>CSE Building, Room 4130<br>University of California San Diego -<br>9500 Gilman Drive # 0404<br>La Jolla, Ca. 92093-0404<br><br></div><div>Email: <a href="mailto:gary@ucsd.edu" target="_blank">gary@ucsd.edu</a><br>Home page: <a href="http://www-cse.ucsd.edu/~gary/" target="_blank">http://www-cse.ucsd.edu/~gary/</a></div><div><span style="font-size:12.8px">Schedule: </span><span style="font-size:12.8px"><a href="http://tinyurl.com/b7gxpwo" style="font-size:12.8px" target="_blank">http://tinyurl.com/b7gxpwo</a></span><br></div><div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px"><p style="font-family:"Book Antiqua";font-style:oblique;font-size:1.2em;color:rgb(0,0,0)">Listen carefully,<br style="font-style:oblique;font-size:1.2em">Neither the Vedas<br style="font-style:oblique;font-size:1.2em">Nor the Qur'an<br style="font-style:oblique;font-size:1.2em">Will teach you this:<br style="font-style:oblique;font-size:1.2em">Put the bit in its mouth,<br style="font-style:oblique;font-size:1.2em">The saddle on its back,<br style="font-style:oblique;font-size:1.2em">Your foot in the stirrup,<br style="font-style:oblique;font-size:1.2em">And ride your wild runaway mind<br style="font-style:oblique;font-size:1.2em">All the way to heaven.</p><p style="font-family:"Book Antiqua";font-style:oblique;font-size:1.2em;color:rgb(0,0,0)"><span style="font-size:1.2em;font-style:oblique">-- Kabir</span></p></div></div>
</div></div></div></div></div></div></div></div></div>
</blockquote></div>