Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.
gary at eng.ucsd.edu
Mon Nov 1 13:59:00 EDT 2021
Tsvi - While I think Randy and Yuko's book
<https://www.amazon.com/dp/0262650541/> is actually somewhat better than the
online version (and buying choices on Amazon start at $9.99), there *is* an
online version: <https://compcogneuro.org/>
Randy & Yuko's models take into account feedback and inhibition.
On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler <achler at gmail.com> wrote:
> Daniel,
>
> Does your book include a discussion of Regulatory or Inhibitory Feedback
> published in several low impact journals between 2008 and 2014 (and in
> videos subsequently)?
> These are networks whose primary computation is inhibition back to the
> inputs that activated them; they may seem very counterintuitive given
> today's trends. You can almost think of them as the opposite of Hopfield
> networks.
>
> I would love to check inside the book, but I don't have an academic budget
> that allows me access to it, and that is a huge part of the problem with how
> information is shared and funding is allocated. I could not get access to
> any of the text or citations, especially Chapter 4, "Competition, Lateral
> Inhibition, and Short-Term Memory," to weigh in.
>
> I wish the best circulation for your book, but even if the Regulatory
> Feedback Model is in the book, that does not change the fundamental problem
> if the book is not readily available.
>
> The same goes for Steve Grossberg's book; I cannot easily look inside it.
> With regard to Adaptive Resonance, I don't subscribe to lateral inhibition
> as a predominant mechanism, but I do believe a function such as vigilance
> is very important during recognition, and Adaptive Resonance is one of
> the very few models that have it. The Regulatory Feedback model I have
> developed (Michael Spratling studies a similar model as well) is built
> primarily from vigilance-type connections; it allows multiple neurons to
> be evaluated simultaneously and continuously during recognition, in order
> to determine which neurons (singly or in combination) best match the
> inputs, without lateral inhibition.
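> [Editor's note: a minimal sketch of the kind of dynamics described above,
> assuming a simple divisive-feedback update in which active outputs
> inhibit the very inputs that drive them. This is an illustration, not
> the exact published Regulatory Feedback equations, and all names and
> shapes in it are invented for the example.]

```python
import numpy as np

def regulatory_feedback(W, x, steps=50, eps=1e-9):
    """Iteratively settle output activations y for an input pattern x.

    W: (n_outputs, n_inputs) nonnegative connection matrix.
    Each output divisively inhibits the inputs that activate it, so
    outputs compete for the 'unexplained' portion of the input rather
    than inhibiting each other laterally. Toy sketch only.
    """
    n_out, n_in = W.shape
    y = np.ones(n_out) / n_out            # start with uniform activations
    fan_in = W.sum(axis=1)                # per-output normalizer
    for _ in range(steps):
        f = W.T @ y                       # feedback each input receives
        r = x / (f + eps)                 # how unexplained each input is
        y = y * (W @ r) / (fan_in + eps)  # reweight outputs by their match
    return y
```

With two overlapping patterns (e.g. output 0 uses inputs {0,1}, output 1 uses
inputs {1,2}) and input [1, 1, 0], the iteration settles with output 0
dominant, since it fully explains the input without lateral inhibition.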
>
> Unfortunately, within conferences and talks dominated by the Adaptive
> Resonance crowd, I have experienced the familiar dismissiveness and did not
> have an opportunity to give a proper talk. This goes back to the larger
> issue of academic politics based on small, self-selected committees: the
> same issue that exists with the feedforward crowd, and with pretty much all
> of academia.
>
> Today's information-age algorithms, such as Google's, can determine the
> relevance of information and ways to display it, but the hegemony of the
> journal system and the small-committee system of academia, developed in
> the Middle Ages (and their mutual synergies), blocks the use of more
> modern methods in research. Thus we are stuck with this problem, which
> especially affects those who are trying to introduce something new and
> counterintuitive; hence the results described in the two National Bureau
> of Economic Research articles I cited in my previous message.
>
> Thomas, I am happy to have more discussions and/or start a different
> thread.
>
> Sincerely,
> Tsvi Achler MD/PhD
>
>
>
> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S <levine at uta.edu> wrote:
>
>> Tsvi,
>>
>> While deep learning and feedforward networks have an outsize popularity,
>> there are plenty of published sources that cover a much wider variety of
>> networks, many of them more biologically based than deep learning. A
>> treatment of a range of neural network approaches, going from simpler to
>> more complex cognitive functions, is found in my textbook *Introduction
>> to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019). Also
>> Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021)
>> emphasizes a variety of architectures with a strong biological basis.
>>
>>
>> Best,
>>
>>
>> Dan Levine
>> ------------------------------
>> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu>
>> on behalf of Tsvi Achler <achler at gmail.com>
>> *Sent:* Saturday, October 30, 2021 3:13 AM
>> *To:* Schmidhuber Juergen <juergen at idsia.ch>
>> *Cc:* connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing
>> Lecture, etc.
>>
>> Since the title of the thread is Scientific Integrity, I want to point
>> out some issues about trends in academia and then especially focusing on
>> the connectionist community.
>>
>> In general, analyses of impact factors and the like show that the most
>> important progress gets silenced until the mainstream picks it up
>> (impact factors in novel research:
>> https://www.nber.org/system/files/working_papers/w22180/w22180.pdf), and
>> often this may take a generation
>> (https://www.nber.org/digest/mar16/does-science-advance-one-funeral-time).
>>
>> The connectionist field is stuck on feedforward networks and their
>> variants, such as networks with inhibition of competitors (e.g., lateral
>> inhibition), or variants that are sometimes labeled recurrent networks
>> for learning over time but can be unrolled ("rewound") in time into
>> feedforward networks.
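>> [Editor's note: for illustration, the "rewinding" means that each time
>> step of such a recurrent net is computed as one more feedforward layer
>> with shared weights; a toy sketch, with invented names and shapes,
>> follows.]

```python
import numpy as np

def rnn_unrolled(W_in, W_rec, xs, h0):
    """Toy recurrent net, computed by unrolling it in time.

    Unrolled this way, each time step is just one more feedforward
    layer (with weights shared across steps), which is why such
    'recurrent' networks can be trained by ordinary backpropagation
    through the unrolled graph. Illustrative sketch only.
    """
    h = h0
    states = []
    for x in xs:                          # one feedforward 'layer' per step
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h)
    return states
```

Running it over a sequence of four 3-dimensional inputs with a 2-unit hidden
state produces one hidden-state vector per step, exactly as a stack of four
weight-shared feedforward layers would.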
>>
>> This stasis is specifically occurring with the popularity of deep
>> learning. Deep learning is often portrayed as neurally plausible
>> connectionism, but it requires an implausible amount of rehearsal and is
>> not connectionist if that rehearsal is not implemented with neurons (see
>> the video link for further clarification).
>>
>> Models which have true feedback (e.g., back to their own inputs) cannot
>> learn by backpropagation, but there is plenty of evidence that these
>> types of connections exist in the brain and are used during recognition.
>> Thus they get ignored: no talks in universities, no features in
>> "premier" journals, and no funding.
>>
>> But they are important, and they may negate the need for the rehearsal
>> required by feedforward methods. Thus they may be essential for moving
>> connectionism forward.
>>
>> If the community is truly dedicated to brain motivated algorithms, I
>> recommend giving more time to networks other than feedforward networks.
>>
>> Video:
>> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2
>>
>> Sincerely,
>> Tsvi Achler
>>
>>
>>
>> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen <juergen at idsia.ch>
>> wrote:
>>
>> Hi, fellow artificial neural network enthusiasts!
>>
>> The connectionists mailing list is perhaps the oldest mailing list on
>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping
>> that some of them - as well as their contemporaries - might be able to
>> provide additional valuable insights into the history of the field.
>>
>> Following the great success of massive open online peer review (MOOR) for
>> my 2015 survey of deep learning (now the most cited article ever published
>> in the journal Neural Networks), I've decided to put forward another piece
>> for MOOR. I want to thank the many experts who have already provided me
>> with comments on it. Please send additional relevant references and
>> suggestions for improvements for the following draft directly to me at
>> juergen at idsia.ch:
>>
>>
>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>>
>> The above is a point-for-point critique of factual errors in ACM's
>> justification of the ACM A. M. Turing Award for deep learning and a
>> critique of the Turing Lecture published by ACM in July 2021. This work can
>> also be seen as a short history of deep learning, at least as far as ACM's
>> errors and the Turing Lecture are concerned.
>>
>> I know that some view this as a controversial topic. However, it is the
>> very nature of science to resolve controversies through facts. Credit
>> assignment is as core to scientific history as it is to machine learning.
>> My aim is to ensure that the true history of our field is preserved for
>> posterity.
>>
>> Thank you all in advance for your help!
>>
>> Jürgen Schmidhuber
>>
--
Gary Cottrell 858-534-6640 FAX: 858-534-7029
Computer Science and Engineering 0404
IF USING FEDEX INCLUDE THE FOLLOWING LINE:
CSE Building, Room 4130
University of California San Diego -
9500 Gilman Drive # 0404
La Jolla, Ca. 92093-0404
Email: gary at ucsd.edu
Home page: http://www-cse.ucsd.edu/~gary/
Schedule: http://tinyurl.com/b7gxpwo
Listen carefully,
Neither the Vedas
Nor the Qur'an
Will teach you this:
Put the bit in its mouth,
The saddle on its back,
Your foot in the stirrup,
And ride your wild runaway mind
All the way to heaven.
-- Kabir