Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Serafim Rodrigues srodrigues at bcamath.org
Sun Nov 7 03:44:51 EST 2021


The points made by Richard, Danko, and Tsvi seem rock solid to me! I fully
second them!
By the way Tsvi, thank you for proposing a novel idea to the community and
in fact I went through your paper and video. I was not aware of your work
and I found it very stimulating!

Science needs these debates and more open-minded scientists...

With many thanks
Serafim



On Sun, 7 Nov 2021 at 00:28, Richard Loosemore <rloosemore at susaro.com>
wrote:

>
> Adam,
>
> 1) Tsvi Achler has already done the things you ask, many times over, so it
> behooves you to check for that before telling him to do it. Instructing
> someone to "clearly communicate the novel contribution of your approach"
> when they have already done so is an insult.
>
> 2) The whole point of this discussion is that when someone "makes an
> argument clearly" the community is NOT "incredibly open to that."  Quite
> the opposite: the community's attention is fickle, tribal, fad-driven, and
> fundamentally broken.
>
> 3) When you say that you "have trouble believing that Google or anyone
> else will be dismissive of a computational approach that actually works,"
> that truly boggles the mind.
>
>     a) There is no precise definition for "actually works" -- there is no
> global measure of goodness in the space of approaches.
>
>     b) Getting the attention of someone at e.g. Google is a non-trivial
> feat in itself: just ignoring outsiders is, for Google, a perfectly
> acceptable option.
>
>     c) What do you suppose would be the reaction of an engineer at Google
> who gets handed a paper by their boss, and is asked "What do you think of
> this?"  Suppose the paper describes an approach that is inimicable to what
> that engineer has been doing their whole career. So much so, that if Google
> goes all-in on this new thing, the engineer's skillset will be devalued to
> junk status.  What would the engineer do? They would say "I read it. It's
> just garbage."
>
> Best
>
> Richard Loosemore
>
>
>
> On 11/5/21 1:01 PM, Adam Krawitz wrote:
>
> Tsvi,
>
>
>
> I’m just a lurker on this list, with no skin in the game, but perhaps that
> gives me a more neutral perspective. In the spirit of progress:
>
>
>
>    1. If you have a neural network approach that you feel provides a new
>    and important perspective on cognitive processes, then write up a paper
>    making that argument clearly, and I think you will find that the community
>    is incredibly open to that. Yes, if they see holes in the approach, those
>    will be pointed out, but that is all part of the scientific exchange.
>    Examples of this approach include: Elman (1990) Finding Structure in Time,
>    Kohonen (1990) The Self-Organizing Map, Tenenbaum et al. (2011) How to Grow
>    a Mind: Statistics, Structure, and Abstraction (not neural nets, but a
>    “new” approach to modelling cognition). I’m sure others can provide more
>    examples.
>    2. I’m much less familiar with how things work on the applied side,
>    but I have trouble believing that Google or anyone else will be dismissive
>    of a computational approach that actually works. Why would they? They just
>    want to solve problems efficiently. Demonstrate that your approach can
>    solve a problem more effectively than (or at least as effectively as) the
>    existing approaches, and they will come running. Examples of this include:
>    Tesauro’s TD-Gammon, which was influential in demonstrating the power of
>    RL, and LeCun et al.’s convolutional NN for the MNIST digits.
>
>
>
> Clearly communicate the novel contribution of your approach and I think
> you will find a receptive audience.
>
>
>
> Thanks,
>
> Adam
>
>
>
>
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu>
> *On Behalf Of* Tsvi Achler
> *Sent:* November 4, 2021 9:46 AM
> *To:* gary at ucsd.edu
> *Cc:* connectionists at cs.cmu.edu
> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing
> Lecture, etc.
>
>
>
> Lastly, feedforward methods are predominant in large part because they
> have financial backing from large companies with advertising and clout,
> like Google, and from the self-driving craze that never fully materialized.
>
>
>
> Feedforward methods are not fully connectionist unless rehearsal for
> learning is implemented with neurons.  That means storing all patterns,
> mixing them randomly, and then presenting them to the network to learn.  As
> far as I know, no one is doing this in the community, so feedforward
> methods are only partially connectionist.  By allowing popularity to
> predominate and choking off funds and presentation of alternatives, we are
> cheating ourselves out of pursuing other, more rigorous brain-like methods.
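>
> To make the rehearsal requirement concrete, here is a minimal sketch of
> the kind of training loop meant above (in NumPy, with a toy linear
> classifier standing in for the network; the data, sizes, and learning
> rate are all hypothetical).  The point is the storage step: every pattern
> must be kept around, reshuffled, and re-presented, and it is that step
> which has no obvious implementation with neurons.
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>     X = rng.normal(size=(1000, 32))        # all 1000 patterns must be stored
>     Y = rng.integers(0, 10, size=1000)     # their labels
>     W = np.zeros((32, 10))                 # a toy linear "network"
>
>     for epoch in range(10):
>         order = rng.permutation(len(X))    # mix the stored patterns randomly
>         for i in order:                    # re-present each one for learning
>             logits = X[i] @ W
>             p = np.exp(logits - logits.max())
>             p /= p.sum()                   # softmax
>             p[Y[i]] -= 1.0                 # cross-entropy gradient w.r.t. logits
>             W -= 0.01 * np.outer(X[i], p)  # gradient step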
>
>
>
> Sincerely,
>
> -Tsvi
>
>
>
>
>
> On Tue, Nov 2, 2021 at 7:08 PM Tsvi Achler <achler at gmail.com> wrote:
>
> Gary- Thanks for the accessible online link to the book.
>
>
>
> I looked especially at the inhibitory feedback section of the book, which
> describes an air-conditioner (AC) type of feedback.
>
> It then describes a general field-like inhibition based on all activations
> in the layer.  It also describes the role of inhibition in sparsity and in
> feedforward inhibition.
>
>
>
> The feedback described in Regulatory Feedback is similar to the AC
> feedback but occurs for each neuron individually, vis-à-vis its inputs.
>
> Thus, for context, regulatory feedback is not a field-like inhibition; it
> is very directed, based on the neurons that are activated and their inputs.
> This sort of regulation is also the foundation of homeostatic plasticity
> findings (albeit with changes in homeostatic regulation occurring on a
> slower time scale in experiments).  The regulatory feedback model
> describes the effect and role of those regulated connections in real time
> during recognition.
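>
> To illustrate, here is a minimal sketch of such per-neuron regulatory
> dynamics in NumPy.  The update rule below is my own paraphrase of the
> idea (each active output neuron feeds inhibition back onto its own
> inputs, so inputs shared by several active neurons are divided among
> them), not necessarily the published equations, and the toy weights and
> iteration count are hypothetical.
>
>     import numpy as np
>
>     # Binary connection matrix: W[i, j] = 1 if output i uses input j.
>     W = np.array([[1, 1, 0, 0],
>                   [0, 1, 1, 0],
>                   [0, 0, 1, 1]], dtype=float)
>
>     x = np.array([1.0, 1.0, 0.0, 0.0])  # input matching the first pattern
>     y = np.ones(3)                       # every output candidate starts active
>     eps = 1e-9                           # avoids division by zero
>
>     for _ in range(30):
>         F = W.T @ y + eps                # feedback load on each input
>         # Each neuron re-weighs its own inputs, normalized by its fan-in.
>         y = (y / W.sum(axis=1)) * (W @ (x / F))
>
>     print(y)  # activity concentrates on neuron 0
>
> Note that neurons 0 and 1 share input 1, yet the ambiguity is resolved
> through each neuron's regulation of its own inputs, not through a
> field-like inhibition over the whole layer.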
>
>
>
> I would be happy to discuss further and collaborate on writing about the
> differences between the approaches for the next book or review.
>
>
>
> And I want to point out to folks that the system is based on politics, and
> that is why certain work is not cited as it should be.  Even worse, these
> politics are here in the group today; they continue to very strongly
> influence decisions in the connectionist community and hold us back.
>
>
>
> Sincerely,
>
> -Tsvi
>
>
>
> On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu <gary at eng.ucsd.edu> wrote:
>
> Tsvi - While I think Randy and Yuko's book
> <https://www.amazon.com/dp/0262650541/> is actually somewhat better than
> the online version (and buying choices on Amazon start at $9.99), there
> *is* an online version: <https://compcogneuro.org/>
>
> Randy & Yuko's models take into account feedback and inhibition.
>
>
>
> On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler <achler at gmail.com> wrote:
>
> Daniel,
>
>
>
> Does your book include a discussion of Regulatory or Inhibitory Feedback
> published in several low impact journals between 2008 and 2014 (and in
> videos subsequently)?
>
> These are networks where the primary computation is inhibition back to the
> inputs that activated them; they may be very counterintuitive given today's
> trends.  You can almost think of them as the opposite of Hopfield networks.
>
>
>
> I would love to check inside the book, but I don't have an academic budget
> that allows me access to it, and that is a huge part of the problem with how
> information is shared and funding is allocated. I could not get access to
> any of the text or citations, especially Chapter 4: "Competition, Lateral
> Inhibition, and Short-Term Memory", to weigh in.
>
>
>
> I wish the best circulation for your book, but even if the Regulatory
> Feedback Model is in the book, that does not change the fundamental problem
> if the book is not readily available.
>
>
>
> The same goes for Steve Grossberg's book: I cannot easily look inside.
> With regard to Adaptive Resonance, I don't subscribe to lateral inhibition
> as a predominant mechanism, but I do believe a function such as vigilance
> is very important during recognition, and Adaptive Resonance is one of
> very few models that have it.  The Regulatory Feedback model I have
> developed (and Michael Spratling studies a similar model as well) is built
> primarily using the vigilance type of connections and allows multiple
> neurons to be evaluated simultaneously and continuously during
> recognition, in order to determine which neurons (single or multiple
> together) best match the inputs, without lateral inhibition.
>
>
>
> Unfortunately, within conferences and talks dominated by the Adaptive
> Resonance crowd, I have experienced the familiar dismissiveness and did not
> have an opportunity to give a proper talk. This goes back to the larger
> issue of academic politics based on small self-selected committees, the
> same issues that exist with the feedforward crowd, and pretty much all of
> academia.
>
>
>
> Today's information-age algorithms, such as Google's, can determine the
> relevance of information and ways to display it, but the hegemony of the
> journal system and the small-committee system of academia, developed in the
> Middle Ages (and their mutual synergies), blocks the use of more modern
> methods in research.  Thus we are stuck with this problem, which especially
> affects those who are trying to introduce something new and
> counterintuitive; hence the results described in the two National
> Bureau of Economic Research articles I cited in my previous message.
>
>
>
> Thomas, I am happy to have more discussions and/or start a different
> thread.
>
>
>
> Sincerely,
>
> Tsvi Achler MD/PhD
>
>
>
>
>
>
>
> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S <levine at uta.edu> wrote:
>
> Tsvi,
>
>
>
> While deep learning and feedforward networks have an outsize popularity,
> there are plenty of published sources that cover a much wider variety of
> networks, many of them more biologically based than deep learning.  A
> treatment of a range of neural network approaches, going from simpler to
> more complex cognitive functions, is found in my textbook *Introduction
> to Neural and Cognitive Modeling* (3rd edition, Routledge, 2019).  Also
> Steve Grossberg's book *Conscious Mind, Resonant Brain* (Oxford, 2021)
> emphasizes a variety of architectures with a strong biological basis.
>
>
>
>
>
> Best,
>
>
>
>
>
> Dan Levine
> ------------------------------
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on
> behalf of Tsvi Achler <achler at gmail.com>
> *Sent:* Saturday, October 30, 2021 3:13 AM
> *To:* Schmidhuber Juergen <juergen at idsia.ch>
> *Cc:* connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing
> Lecture, etc.
>
>
>
> Since the title of the thread is Scientific Integrity, I want to point out
> some issues about trends in academia and then especially focusing on the
> connectionist community.
>
>
>
> In general, analyzing impact factors etc., the most important progress
> gets silenced until the mainstream picks it up (Impact Factors in novel
> research: https://www.nber.org/system/files/working_papers/w22180/w22180.pdf),
> and often this may take a generation
> (https://www.nber.org/digest/mar16/does-science-advance-one-funeral-time).
>
>
>
> The connectionist field is stuck on feedforward networks and variants, such
> as those with inhibition of competitors (e.g. lateral inhibition), or
> variants that are sometimes labeled recurrent networks for learning over
> time, where the feedforward networks can simply be rewound in time.
>
>
>
> This stasis is specifically occurring with the popularity of deep
> learning.  This is often portrayed as neurally plausible connectionism but
> requires an implausible amount of rehearsal and is not connectionist if
> this rehearsal is not implemented with neurons (see video link for further
> clarification).
>
>
>
> Models which have true feedback (e.g. back to their own inputs) cannot
> learn by backpropagation, but there is plenty of evidence that these types
> of connections exist in the brain and are used during recognition. Thus
> they get ignored: no talks in universities, no features in "premier"
> journals, and no funding.
>
>
>
> But they are important and may negate the need for the rehearsal required
> by feedforward methods.  Thus they may be essential for moving
> connectionism forward.
>
>
>
> If the community is truly dedicated to brain motivated algorithms, I
> recommend giving more time to networks other than feedforward networks.
>
>
>
> Video:
> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2
>
>
>
> Sincerely,
>
> Tsvi Achler
>
>
>
>
>
>
>
> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen <juergen at idsia.ch>
> wrote:
>
> Hi, fellow artificial neural network enthusiasts!
>
> The connectionists mailing list is perhaps the oldest mailing list on
> ANNs, and many neural net pioneers are still subscribed to it. I am hoping
> that some of them - as well as their contemporaries - might be able to
> provide additional valuable insights into the history of the field.
>
> Following the great success of massive open online peer review (MOOR) for
> my 2015 survey of deep learning (now the most cited article ever published
> in the journal Neural Networks), I've decided to put forward another piece
> for MOOR. I want to thank the many experts who have already provided me
> with comments on it. Please send additional relevant references and
> suggestions for improvements for the following draft directly to me at
> juergen at idsia.ch:
>
>
> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>
> The above is a point-for-point critique of factual errors in ACM's
> justification of the ACM A. M. Turing Award for deep learning and a
> critique of the Turing Lecture published by ACM in July 2021. This work can
> also be seen as a short history of deep learning, at least as far as ACM's
> errors and the Turing Lecture are concerned.
>
> I know that some view this as a controversial topic. However, it is the
> very nature of science to resolve controversies through facts. Credit
> assignment is as core to scientific history as it is to machine learning.
> My aim is to ensure that the true history of our field is preserved for
> posterity.
>
> Thank you all in advance for your help!
>
> Jürgen Schmidhuber
>
>
>
>
>
>
>
>
>
>
> --
>
> Gary Cottrell 858-534-6640 FAX: 858-534-7029
>
> Computer Science and Engineering 0404
> IF USING FEDEX INCLUDE THE FOLLOWING LINE:
> CSE Building, Room 4130
> University of California San Diego
> 9500 Gilman Drive # 0404
> La Jolla, Ca. 92093-0404
>
> Email: gary at ucsd.edu
> Home page: http://www-cse.ucsd.edu/~gary/
>
> Schedule: http://tinyurl.com/b7gxpwo
>
>
>
> *Listen carefully,*
> *Neither the Vedas*
> *Nor the Qur'an*
> *Will teach you this:*
> *Put the bit in its mouth,*
> *The saddle on its back,*
> *Your foot in the stirrup,*
> *And ride your wild runaway mind*
> *All the way to heaven.*
>
> *-- Kabir*
>
>
>

-- 
Serafim Rodrigues
Group Leader
*BCAM* - Basque Center for Applied Mathematics
Alameda de Mazarredo, 14
E-48009 Bilbao, Basque Country - Spain
Tel. +34 946 567 842
srodrigues at bcamath.org | www.bcamath.org/srodrigues

*(matematika mugaz bestalde: mathematics beyond borders)*

