Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Danko Nikolic danko.nikolic at gmail.com
Fri Nov 5 05:35:43 EDT 2021


This entire thread of discussion reminds me of this famous quote:

"Academic Politics Are So Vicious Because the Stakes Are So Small"
(its attribution is disputed; it is most often credited to Henry Kissinger or to Wallace Sayre)

For me personally, the message is: take it easy. None of this is as big a
deal as it may seem in the moment. Have more fun. Worry less.

Danko


Dr. Danko Nikolić
www.danko-nikolic.com
https://www.linkedin.com/in/danko-nikolic/
--- Progress usually starts with an insight ---


On Fri, Nov 5, 2021 at 8:09 AM Tsvi Achler <achler at gmail.com> wrote:

> Lastly, feedforward methods are predominant in large part because they
> have financial backing from large companies with advertising and clout,
> such as Google, and from the self-driving craze that never fully
> materialized.
>
> Feedforward methods are not fully connectionist unless rehearsal for
> learning is implemented with neurons.  That means storing all patterns,
> mixing them randomly, and then presenting them to a network to learn.  As
> far as I know, no one in the community is doing this, so feedforward
> methods are only partially connectionist.  By allowing popularity to
> predominate and choking off funds and presentation of alternatives, we are
> cheating ourselves out of pursuing other, more rigorous brain-like methods.
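>
> For concreteness, here is a minimal sketch of the rehearsal loop I mean (a
> toy one-layer network trained with the delta rule, purely illustrative;
> the point is the outer loop that stores, mixes, and re-presents patterns):
>
>     # Rehearsal: store all patterns, mix them randomly, re-present them.
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     X = rng.random((100, 8))    # all stored input patterns
>     T = rng.random((100, 3))    # their stored targets
>     W = np.zeros((8, 3))        # toy one-layer network
>     lr = 0.1
>
>     for epoch in range(50):
>         for i in rng.permutation(len(X)):       # mix randomly
>             y = X[i] @ W                        # re-present one pattern
>             W += lr * np.outer(X[i], T[i] - y)  # delta-rule update
>
> Note that nothing in this loop is implemented with neurons: the pattern
> store, the shuffling, and the re-presentation are all bookkeeping that
> happens outside the network.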
>
> Sincerely,
> -Tsvi
>
>
> On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler <achler at gmail.com> wrote:
>
>> Gary- Thanks for the accessible online link to the book.
>>
>> I looked especially at the inhibitory feedback section of the book, which
>> describes an air-conditioner (AC) type of feedback.
>> It then describes a general field-like inhibition based on all
>> activations in the layer.  It also describes the role of inhibition in
>> sparsity and in feedforward inhibition.
>>
>> The feedback described in Regulatory Feedback is similar to the AC
>> feedback but occurs for each neuron individually, vis-a-vis its inputs.
>> Thus, for context, regulatory feedback is not a field-like inhibition; it
>> is very directed, based on the neurons that are activated and their
>> inputs.  This sort of regulation is also the foundation of the
>> Homeostatic Plasticity findings (albeit with the changes in homeostatic
>> regulation occurring on a slower time scale in those experiments).  The
>> regulatory feedback model describes the effect and role of those
>> regulated connections in real time during recognition.
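>>
>> For readers who want something concrete, here is a rough sketch of this
>> per-neuron regulation during recognition (a simplification made up for
>> illustration; the weights and shapes are invented, and the published
>> papers give the actual equations):
>>
>>     # Each output divisively regulates exactly the inputs that drive it,
>>     # and all candidate outputs are re-evaluated iteratively.
>>     import numpy as np
>>
>>     W = np.array([[1.0, 1.0, 0.0],   # rows: outputs, columns: inputs
>>                   [0.0, 1.0, 1.0]])
>>     x = np.array([1.0, 1.0, 0.0])    # input pattern to recognize
>>     y = np.ones(W.shape[0])          # start with all candidates active
>>
>>     for _ in range(20):                       # settle in real time
>>         feedback = W.T @ y                    # feedback onto each input
>>         q = x / np.maximum(feedback, 1e-9)    # inputs regulated by users
>>         y = y * (W @ q) / W.sum(axis=1)       # re-weigh own inputs only
>>
>> The regulation is directed: an output never inhibits inputs it does not
>> use, so this is not a field-like inhibition over the whole layer.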
>>
>> I would be happy to discuss further and collaborate on writing about the
>> differences between the approaches for the next book or review.
>>
>> And I want to point out to folks that the system is based on politics,
>> and that is why certain work is not cited as it should be.  Even worse,
>> these politics are here in the group today, and they continue to very
>> strongly influence decisions in the connectionist community and hold us
>> back.
>>
>> Sincerely,
>> -Tsvi
>>
>> On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu <gary at eng.ucsd.edu> wrote:
>>
>>> Tsvi - While I think Randy and Yuko's book
>>> <https://www.amazon.com/dp/0262650541/> is actually somewhat better than
>>> the online version (and buying choices on Amazon start at $9.99), there
>>> *is* an online version: <https://compcogneuro.org/>
>>> Randy & Yuko's models take into account feedback and inhibition.
>>>
>>> On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler <achler at gmail.com> wrote:
>>>
>>>> Daniel,
>>>>
>>>> Does your book include a discussion of Regulatory or Inhibitory
>>>> Feedback, published in several low-impact journals between 2008 and
>>>> 2014 (and in videos subsequently)?
>>>> These are networks where the primary computation is inhibition back to
>>>> the inputs that activated them, and they may be very counterintuitive
>>>> given today's trends.  You can almost think of them as the opposite of
>>>> Hopfield networks.
>>>>
>>>> I would love to check inside the book, but I don't have an academic
>>>> budget that allows me access to it, and that is a huge part of the
>>>> problem with how information is shared and funding is allocated.  I
>>>> could not get access to any of the text or citations, especially
>>>> Chapter 4, "Competition, Lateral Inhibition, and Short-Term Memory", to
>>>> weigh in.
>>>>
>>>> I wish the best circulation for your book, but even if the Regulatory
>>>> Feedback Model is in the book, that does not change the fundamental problem
>>>> if the book is not readily available.
>>>>
>>>> The same goes for Steve Grossberg's book; I cannot easily look inside.
>>>> With regard to Adaptive Resonance, I don't subscribe to lateral
>>>> inhibition as a predominant mechanism, but I do believe a function such
>>>> as vigilance is very important during recognition, and Adaptive
>>>> Resonance is one of very few models that have it.  The Regulatory
>>>> Feedback model I have developed (and Michael Spratling studies a
>>>> similar model as well) is built primarily using the vigilance type of
>>>> connections.  It allows multiple neurons to be evaluated at the same
>>>> time, and continuously during recognition, in order to determine which
>>>> neurons (single or multiple together) match the inputs best without
>>>> lateral inhibition.
>>>>
>>>> Unfortunately, within conferences and talks dominated by the Adaptive
>>>> Resonance crowd, I have experienced the familiar dismissiveness and did
>>>> not have an opportunity to give a proper talk.  This goes back to the
>>>> larger issue of academic politics based on small, self-selected
>>>> committees; the same issues exist with the feedforward crowd, and with
>>>> pretty much all of academia.
>>>>
>>>> Today's information-age algorithms, such as Google's, can determine
>>>> the relevance of information and ways to display it, but the hegemony
>>>> of the journal system and the small-committee system of academia
>>>> developed in the Middle Ages (and their mutual synergies) block the use
>>>> of more modern methods in research.  Thus we are stuck with this
>>>> problem, which especially affects those who are trying to introduce
>>>> something new and counterintuitive; hence the results described in the
>>>> two National Bureau of Economic Research articles I cited in my
>>>> previous message.
>>>>
>>>> Thomas, I am happy to have more discussions and/or start a different
>>>> thread.
>>>>
>>>> Sincerely,
>>>> Tsvi Achler MD/PhD
>>>>
>>>>
>>>>
>>>> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S <levine at uta.edu>
>>>> wrote:
>>>>
>>>>> Tsvi,
>>>>>
>>>>> While deep learning and feedforward networks have an outsize
>>>>> popularity, there are plenty of published sources that cover a much wider
>>>>> variety of networks, many of them more biologically based than deep
>>>>> learning.  A treatment of a range of neural network approaches, going from
>>>>> simpler to more complex cognitive functions, is found in my textbook
>>>>> *Introduction to Neural and Cognitive Modeling* (3rd edition,
>>>>> Routledge, 2019).  Also Steve Grossberg's book *Conscious Mind,
>>>>> Resonant Brain* (Oxford, 2021) emphasizes a variety of architectures
>>>>> with a strong biological basis.
>>>>>
>>>>>
>>>>> Best,
>>>>>
>>>>>
>>>>> Dan Levine
>>>>> ------------------------------
>>>>> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu>
>>>>> on behalf of Tsvi Achler <achler at gmail.com>
>>>>> *Sent:* Saturday, October 30, 2021 3:13 AM
>>>>> *To:* Schmidhuber Juergen <juergen at idsia.ch>
>>>>> *Cc:* connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
>>>>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing
>>>>> Lecture, etc.
>>>>>
>>>>> Since the title of the thread is Scientific Integrity, I want to point
>>>>> out some issues about trends in academia, focusing especially on the
>>>>> connectionist community.
>>>>>
>>>>> In general, analyzing impact factors etc., the most important progress
>>>>> gets silenced until the mainstream picks it up (on impact factors and
>>>>> novel research, see
>>>>> https://www.nber.org/system/files/working_papers/w22180/w22180.pdf),
>>>>> and often this may take a generation:
>>>>> https://www.nber.org/digest/mar16/does-science-advance-one-funeral-time
>>>>>
>>>>> The connectionist field is stuck on feedforward networks and variants,
>>>>> such as those with inhibition of competitors (e.g. lateral inhibition),
>>>>> or variants that are sometimes labeled recurrent networks for learning
>>>>> over time, where the feedforward network can be rewound in time.
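>>>>>
>>>>> To spell out "rewound in time": here is a minimal sketch (my own
>>>>> illustration, not any particular library's code) of unrolling a
>>>>> recurrent net into a weight-shared feedforward stack, which is what
>>>>> lets backpropagation through time be applied to it:
>>>>>
>>>>>     # Unrolled RNN forward pass: one "layer" per time step, all
>>>>>     # sharing Wx and Wh; gradients then flow back through the stack.
>>>>>     import numpy as np
>>>>>
>>>>>     def unrolled_forward(xs, Wx, Wh, h0):
>>>>>         h, states = h0, [h0]
>>>>>         for x in xs:                      # the feedforward stack
>>>>>             h = np.tanh(Wx @ x + Wh @ h)  # same weights every step
>>>>>             states.append(h)
>>>>>         return states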
>>>>>
>>>>> This stasis is specifically occurring with the popularity of deep
>>>>> learning.  Deep learning is often portrayed as neurally plausible
>>>>> connectionism, but it requires an implausible amount of rehearsal and
>>>>> is not connectionist if this rehearsal is not implemented with neurons
>>>>> (see the video link below for further clarification).
>>>>>
>>>>> Models which have true feedback (e.g. back to their own inputs) cannot
>>>>> learn by backpropagation, but there is plenty of evidence that these
>>>>> types of connections exist in the brain and are used during
>>>>> recognition.  Thus they get ignored: no talks in universities, no
>>>>> featuring in "premier" journals, and no funding.
>>>>>
>>>>> But they are important, and they may negate the need for the rehearsal
>>>>> that feedforward methods require.  Thus they may be essential for
>>>>> moving connectionism forward.
>>>>>
>>>>> If the community is truly dedicated to brain motivated algorithms, I
>>>>> recommend giving more time to networks other than feedforward networks.
>>>>>
>>>>> Video:
>>>>> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2
>>>>>
>>>>> Sincerely,
>>>>> Tsvi Achler
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen <juergen at idsia.ch>
>>>>> wrote:
>>>>>
>>>>> Hi, fellow artificial neural network enthusiasts!
>>>>>
>>>>> The connectionists mailing list is perhaps the oldest mailing list on
>>>>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping
>>>>> that some of them - as well as their contemporaries - might be able to
>>>>> provide additional valuable insights into the history of the field.
>>>>>
>>>>> Following the great success of massive open online peer review (MOOR)
>>>>> for my 2015 survey of deep learning (now the most cited article ever
>>>>> published in the journal Neural Networks), I've decided to put forward
>>>>> another piece for MOOR. I want to thank the many experts who have already
>>>>> provided me with comments on it. Please send additional relevant references
>>>>> and suggestions for improvements for the following draft directly to me at
>>>>> juergen at idsia.ch:
>>>>>
>>>>>
>>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>>>>>
>>>>> The above is a point-for-point critique of factual errors in ACM's
>>>>> justification of the ACM A. M. Turing Award for deep learning and a
>>>>> critique of the Turing Lecture published by ACM in July 2021. This work can
>>>>> also be seen as a short history of deep learning, at least as far as ACM's
>>>>> errors and the Turing Lecture are concerned.
>>>>>
>>>>> I know that some view this as a controversial topic. However, it is
>>>>> the very nature of science to resolve controversies through facts. Credit
>>>>> assignment is as core to scientific history as it is to machine learning.
>>>>> My aim is to ensure that the true history of our field is preserved for
>>>>> posterity.
>>>>>
>>>>> Thank you all in advance for your help!
>>>>>
>>>>> Jürgen Schmidhuber
>>>>>
>>>
>>> --
>>> Gary Cottrell 858-534-6640 FAX: 858-534-7029
>>> Computer Science and Engineering 0404
>>> IF USING FEDEX INCLUDE THE FOLLOWING LINE:
>>> CSE Building, Room 4130
>>> University of California San Diego
>>> 9500 Gilman Drive # 0404
>>> La Jolla, Ca. 92093-0404
>>>
>>> Email: gary at ucsd.edu
>>> Home page: http://www-cse.ucsd.edu/~gary/
>>> Schedule: http://tinyurl.com/b7gxpwo
>>>
>>> Listen carefully,
>>> Neither the Vedas
>>> Nor the Qur'an
>>> Will teach you this:
>>> Put the bit in its mouth,
>>> The saddle on its back,
>>> Your foot in the stirrup,
>>> And ride your wild runaway mind
>>> All the way to heaven.
>>>
>>> -- Kabir
>>>
>>