Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Thomas Trappenberg tt at cs.dal.ca
Sun Oct 31 16:30:48 EDT 2021


Tsvi et al., may I add a bit? (Maybe we need a new thread, but I found all
the contributions quite stimulating.)

Seeing so many real-world applications of connectionist neural networks is
great. It is amazing what we can finally do in computer vision and NLP
with complete training (that is, I view deep networks as overtrained
associative databases that are practically exhaustively trained, and this
is good for engineering solutions).

However, I think many of us are also really interested in understanding how
the brain thinks, and I think what Tsvi is saying is that having these
scientific discussions seems to be difficult with the overwhelming
Google/NeurIPS crowd. There are some good computational neuroscience
venues. However, my problem is that many of these have a primarily
cellular focus and much less emphasis on systems-level or biological
information-processing principles. Besides Cosyne, does anyone have
suggestions for meetings in this area?

Cheers, Thomas

PS: I am on sabbatical and wonder if anyone has suggestions for furthering
these discussions or for collaborations in this area.

On Sun, Oct 31, 2021, 4:16 PM Tsvi Achler <achler at gmail.com> wrote:

> Since the title of the thread is Scientific Integrity, I want to point out
> some issues about trends in academia in general and then focus especially
> on the connectionist community.
>
> In general, analyses of impact factors and the like show that the most
> important progress gets silenced until the mainstream picks it up (impact
> factors in novel research:
> https://www.nber.org/system/files/working_papers/w22180/w22180.pdf),
> and often this may take a generation
> (https://www.nber.org/digest/mar16/does-science-advance-one-funeral-time).
>
> The connectionist field is stuck on feedforward networks and variants such
> as networks with inhibition of competitors (e.g. lateral inhibition), or
> variants that are sometimes labeled recurrent networks for learning over
> time but where the feedforward network is simply unrolled and rewound in
> time.
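>
> To make the unrolling point concrete, here is a minimal sketch (an
> illustration with made-up names, not anyone's published code): a
> "recurrent" network trained by backpropagation through time is just a
> feedforward stack of weight-tied layers, one layer per time step, so it
> inherits all the training assumptions of feedforward networks.
>
>   import numpy as np
>
>   def unrolled_rnn(x_seq, W_in, W_rec):
>       # "Recurrent" net unrolled in time: each step is one feedforward
>       # layer with tied weights (W_in, W_rec), so backprop through time
>       # is ordinary backprop on a deep feedforward stack.
>       h = np.zeros(W_rec.shape[0])
>       for x_t in x_seq:                      # one unrolled layer per step
>           h = np.tanh(W_in @ x_t + W_rec @ h)
>       return h
>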
>
> This stasis is specifically occurring with the popularity of deep
> learning.  Deep learning is often portrayed as neurally plausible
> connectionism, but it requires an implausible amount of rehearsal and is
> not connectionist if this rehearsal is not implemented with neurons (see
> the video link below for further clarification).
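>
> (To be concrete about what rehearsal means here, a toy sketch follows; it
> is an assumption-laden illustration, not any specific published system:
> to learn new data without catastrophic forgetting, a feedforward network
> must keep replaying stored old examples alongside the new ones.)
>
>   import numpy as np
>
>   def rehearsal_batches(old_data, new_data, batch_size=32, mix=0.5):
>       # Illustrative rehearsal: every batch of new-task training data
>       # is padded with replayed old examples; without this replay,
>       # gradient updates on the new data overwrite earlier learning.
>       rng = np.random.default_rng(0)
>       n_old = int(batch_size * mix)
>       while True:
>           old = rng.choice(len(old_data), size=n_old)
>           new = rng.choice(len(new_data), size=batch_size - n_old)
>           yield [old_data[i] for i in old] + [new_data[i] for i in new]
>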
>
> Models which have true feedback (e.g. back to their own inputs) cannot
> learn by backpropagation, but there is plenty of evidence that these types
> of connections exist in the brain and are used during recognition. Thus
> they get ignored: no talks at universities, no featuring in "premier"
> journals, and no funding.
>
> But they are important and may negate the need for the rehearsal that
> feedforward methods require. Thus they may be essential for moving
> connectionism forward. A toy sketch of such feedback follows.
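>
> For intuition only, here is what feedback to the inputs can look like (an
> illustrative settling loop with made-up names, not a full model): during
> recognition, output activity feeds back to reconstruct the input, and the
> mismatch between input and reconstruction regulates the outputs, so the
> weights need no rehearsal-heavy retraining.
>
>   import numpy as np
>
>   def recognize_with_feedback(x, W, steps=50, eps=1e-9):
>       # W[i, j] >= 0 connects input i to output j. Outputs y feed back
>       # to reconstruct the input; each output is rescaled by how well
>       # the inputs it claims are actually matched. If x_hat equals x,
>       # the ratio is 1 and y is a fixed point of the dynamics.
>       y = np.ones(W.shape[1])                # start all outputs equal
>       for _ in range(steps):
>           x_hat = W @ y                      # feedback reconstruction
>           ratio = x / (x_hat + eps)          # input-wise mismatch
>           y = y * (W.T @ ratio) / (W.sum(axis=0) + eps)
>       return y
>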
>
> If the community is truly dedicated to brain-motivated algorithms, I
> recommend giving more time to networks other than feedforward networks.
>
> Video:
> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2
>
> Sincerely,
> Tsvi Achler
>
>
>
> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen <juergen at idsia.ch>
> wrote:
>
>> Hi, fellow artificial neural network enthusiasts!
>>
>> The connectionists mailing list is perhaps the oldest mailing list on
>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping
>> that some of them - as well as their contemporaries - might be able to
>> provide additional valuable insights into the history of the field.
>>
>> Following the great success of massive open online peer review (MOOR) for
>> my 2015 survey of deep learning (now the most cited article ever published
>> in the journal Neural Networks), I've decided to put forward another piece
>> for MOOR. I want to thank the many experts who have already provided me
>> with comments on it. Please send additional relevant references and
>> suggestions for improvements for the following draft directly to me at
>> juergen at idsia.ch:
>>
>>
>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>>
>> The above is a point-for-point critique of factual errors in ACM's
>> justification of the ACM A. M. Turing Award for deep learning and a
>> critique of the Turing Lecture published by ACM in July 2021. This work can
>> also be seen as a short history of deep learning, at least as far as ACM's
>> errors and the Turing Lecture are concerned.
>>
>> I know that some view this as a controversial topic. However, it is the
>> very nature of science to resolve controversies through facts. Credit
>> assignment is as core to scientific history as it is to machine learning.
>> My aim is to ensure that the true history of our field is preserved for
>> posterity.
>>
>> Thank you all in advance for your help!
>>
>> Jürgen Schmidhuber
>>

