Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Tsvi Achler achler at gmail.com
Tue Nov 2 01:18:58 EDT 2021


I forgot one...
Daniel: X = something in the book, Y = Daniel's book

On Mon, Nov 1, 2021 at 5:16 PM Tsvi Achler <achler at gmail.com> wrote:

>
> I received several messages along the lines of "you must not know what
> you are talking about, but this is X and you should read book Y",
> without the commenters reading the original work on Regulatory Feedback.
> More specifically, the X's and Y's of the responses are:
> Steve: X= Adaptive Resonance, Y= Steve's book
> Gary: X= Trainable via Backprop, Y= Randy's book
>
> First, I want to point out that the more novel and counterintuitive an
> idea is, the fewer the people who synergize with it, the less the
> support from one's advisor, and the less the support from academic
> pedigree, despite academic departments and grant agencies stating
> exactly the opposite. So how does this happen?
> Everyone on self-selected committees promotes themselves and is
> dismissive of others, so decisions and advice become political.  The
> more counterintuitive the idea, the less the support.
>
> This is a counterintuitive model: during recognition, input information
> goes to the output neurons, which feed back and partially modify the
> same inputs; those inputs are then reprocessed by the same outputs,
> continuously, until the neuron activations settle.
> This mechanism does not describe learning, or learning through time; it
> operates during recognition and does not change weights.
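>
> To make the dynamics concrete, here is a minimal sketch of the settling
> loop in Python/NumPy.  This is only an illustration: the divisive form
> of the feedback and all of the names below are my shorthand for this
> email, not the exact published update rule, which is in the paper cited
> next.
>
>     import numpy as np
>
>     def regulatory_feedback(x, W, n_iter=50, eps=1e-9):
>         # Illustrative divisive update; see the cited paper for the
>         # exact rule.
>         # x: vector of input activations.
>         # W: fixed binary matrix; W[j, i] = 1 if output j uses input i.
>         # The weights never change; this loop runs during recognition.
>         n = W.sum(axis=1)                      # fan-in of each output
>         y = np.ones(W.shape[0]) / W.shape[0]   # outputs start uniform
>         for _ in range(n_iter):
>             f = W.T @ y                  # feedback each input receives
>             x_mod = x / (f + eps)        # inputs modified by feedback
>             y = (y / n) * (W @ x_mod)    # same outputs reprocess them
>         return y                         # settled output activations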
>
> I really urge reading the original article and demonstration videos
> before making comments: Achler 2014, "Symbolic neural networks for
> cognitive capacities", BICA,
> https://www.academia.edu/8357758/Symbolic_neural_networks_for_cognitive_capacities
>
> In-depth updated video:
> https://www.youtube.com/watch?v=9gTJorBeLi8&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=3
> It is not that I am against healthy criticism and discourse, but I am
> against dismissiveness without looking into the details.
> Moreover, I would be happy to be invited to give a talk at your
> institutions and go over the details within your communities.
>
> I am disappointed with both Gary and Steve, because I have met both of
> you in the past and discussed the model with you.
>
> In fact, at a paid conference led by Steve, I was relegated to a few
> minutes' introduction of this counterintuitive model, because it was
> assumed to be "Adaptive Resonance" (just as in the last message) and
> therefore not to need more time.  This paucity of opportunity to dive
> into the details, and the quick dismissiveness, are a huge part of the
> problem that contributes to the inhibition of novel ideas, as indicated
> by the two articles about academia I cited.
>
> Since I am no longer funded and do not have an academic budget, I am no
> longer presenting at paid conferences, where this work gets dismissed
> and relegated to a dark corner while I am told to listen to the invited
> speakers; nor am I publishing in paid journals with low impact factors.
> Nor will I pay for books by those who promote their paid books before
> reading my work.
>
> No matter how successfully one side or another pushes its narrative,
> this does not change how the brain works.
>
> I hope the community can recognize these problems.  I am happy to give
> invited talks, go into a deep dive, and have the conversations that
> academics like to project outwardly that they have.
>
> Sincerely,
> -Tsvi Achler MD/PhD (I put my degrees here in hopes I won't be pointed
> to any more beginners' books)
>
>
> On Mon, Nov 1, 2021 at 10:59 AM gary at ucsd.edu <gary at eng.ucsd.edu> wrote:
>
>> Tsvi - While I think Randy and Yuko's book
>> <https://www.amazon.com/dp/0262650541/> is actually somewhat better
>> than the online version (and buying choices on Amazon start at $9.99),
>> there *is* an online version: <https://compcogneuro.org/>
>> Randy & Yuko's models take into account feedback and inhibition.
>>
>> On Mon, Nov 1, 2021 at 10:05 AM Tsvi Achler <achler at gmail.com> wrote:
>>
>>> Daniel,
>>>
>>> Does your book include a discussion of Regulatory or Inhibitory
>>> Feedback, published in several low-impact journals between 2008 and
>>> 2014 (and in videos subsequently)?
>>> These are networks whose primary computation is inhibition back to
>>> the inputs that activated them; they may be very counterintuitive
>>> given today's trends.  You can almost think of them as the opposite
>>> of Hopfield networks.
>>>
>>> I would love to check inside the book, but I don't have an academic
>>> budget that allows me access to it, and that is a huge part of the
>>> problem with how information is shared and funding is allocated.  I
>>> could not get access to any of the text or citations, especially
>>> Chapter 4, "Competition, Lateral Inhibition, and Short-Term Memory",
>>> to weigh in.
>>>
>>> I wish the best circulation for your book, but even if the Regulatory
>>> Feedback Model is in the book, that does not change the fundamental problem
>>> if the book is not readily available.
>>>
>>> The same goes for Steve Grossberg's book; I cannot easily look
>>> inside it.
>>> With regard to Adaptive Resonance, I don't subscribe to lateral
>>> inhibition as a predominant mechanism, but I do believe a function
>>> such as vigilance is very important during recognition, and Adaptive
>>> Resonance is one of very few models that have it.  The Regulatory
>>> Feedback model I have developed (and Michael Spratling studies a
>>> similar model as well) is built primarily from vigilance-type
>>> connections; it allows multiple neurons to be evaluated at the same
>>> time, continuously during recognition, in order to determine which
>>> neurons (single, or multiple together) best match the inputs, without
>>> lateral inhibition.
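>>>
>>> As a toy illustration of that simultaneous evaluation (my own
>>> hypothetical numbers, with a simplified divisive form of the
>>> feedback, not the exact published rule): let output A use inputs 1
>>> and 2, and output B use inputs 2 and 3.  Presenting all three inputs
>>> together lets A and B settle to equal activation at the same time,
>>> sharing input 2, with no winner-take-all step:
>>>
>>>     import numpy as np
>>>     # Simplified, illustrative update (not the exact published rule).
>>>     W = np.array([[1., 1., 0.],   # output A uses inputs 1, 2
>>>                   [0., 1., 1.]])  # output B uses inputs 2, 3
>>>     x = np.ones(3)                # all three inputs active
>>>     y = np.ones(2) / 2            # both candidate outputs start equal
>>>     for _ in range(50):           # settle during recognition only
>>>         y = (y / W.sum(1)) * (W @ (x / (W.T @ y + 1e-9)))
>>>     print(y)                      # ~[0.75 0.75]: A and B co-active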
>>>
>>> Unfortunately, within conferences and talks dominated by the Adaptive
>>> Resonance crowd I have experienced the familiar dismissiveness and
>>> did not have an opportunity to give a proper talk.  This goes back to
>>> the larger issue of academic politics based on small, self-selected
>>> committees, the same issue that exists with the feedforward crowd and
>>> pretty much all of academia.
>>>
>>> Today's information-age algorithms, such as Google's, can determine
>>> the relevance of information and ways to display it, but the hegemony
>>> of the journal system and the small-committee system of academia
>>> developed in the Middle Ages (and their mutual synergies) blocks the
>>> use of more modern methods in research.  Thus we are stuck with this
>>> problem, which especially affects those who are trying to introduce
>>> something new and counterintuitive; hence the results described in
>>> the two National Bureau of Economic Research articles I cited in my
>>> previous message.
>>>
>>> Thomas, I am happy to have more discussions and/or start a different
>>> thread.
>>>
>>> Sincerely,
>>> Tsvi Achler MD/PhD
>>>
>>>
>>>
>>> On Sun, Oct 31, 2021 at 12:49 PM Levine, Daniel S <levine at uta.edu>
>>> wrote:
>>>
>>>> Tsvi,
>>>>
>>>> While deep learning and feedforward networks have an outsize
>>>> popularity, there are plenty of published sources that cover a much wider
>>>> variety of networks, many of them more biologically based than deep
>>>> learning.  A treatment of a range of neural network approaches, going from
>>>> simpler to more complex cognitive functions, is found in my textbook *
>>>> Introduction to Neural and Cognitive Modeling* (3rd edition,
>>>> Routledge, 2019).  Also Steve Grossberg's book *Conscious Mind,
>>>> Resonant Brain* (Oxford, 2021) emphasizes a variety of architectures
>>>> with a strong biological basis.
>>>>
>>>>
>>>> Best,
>>>>
>>>>
>>>> Dan Levine
>>>> ------------------------------
>>>> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu>
>>>> on behalf of Tsvi Achler <achler at gmail.com>
>>>> *Sent:* Saturday, October 30, 2021 3:13 AM
>>>> *To:* Schmidhuber Juergen <juergen at idsia.ch>
>>>> *Cc:* connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
>>>> *Subject:* Re: Connectionists: Scientific Integrity, the 2021 Turing
>>>> Lecture, etc.
>>>>
>>>> Since the title of the thread is Scientific Integrity, I want to
>>>> point out some issues about trends in academia, focusing especially
>>>> on the connectionist community.
>>>>
>>>> In general, analyzing impact factors etc., the most important
>>>> progress gets silenced until the mainstream picks it up (impact
>>>> factors and novel research:
>>>> https://www.nber.org/system/files/working_papers/w22180/w22180.pdf),
>>>> and often this may take a generation
>>>> (https://www.nber.org/digest/mar16/does-science-advance-one-funeral-time).
>>>>
>>>> The connectionist field is stuck on feedforward networks and
>>>> variants, such as those with inhibition of competitors (e.g.,
>>>> lateral inhibition), or other variants that are sometimes labeled
>>>> recurrent networks for learning through time, where the feedforward
>>>> network can be rewound in time.
>>>>
>>>> This stasis is specifically occurring with the popularity of deep
>>>> learning.  Deep learning is often portrayed as neurally plausible
>>>> connectionism, but it requires an implausible amount of rehearsal,
>>>> and it is not connectionist if this rehearsal is not implemented
>>>> with neurons (see the video link for further clarification).
>>>>
>>>> Models which have true feedback (e.g., connections back to their own
>>>> inputs) cannot learn by backpropagation, but there is plenty of
>>>> evidence that these types of connections exist in the brain and are
>>>> used during recognition.  Thus they get ignored: no talks at
>>>> universities, no featuring in "premier" journals, and no funding.
>>>>
>>>> But they are important and may negate the need for the rehearsal
>>>> required by feedforward methods.  Thus they may be essential for
>>>> moving connectionism forward.
>>>>
>>>> If the community is truly dedicated to brain-motivated algorithms, I
>>>> recommend giving more time to networks other than feedforward
>>>> networks.
>>>>
>>>> Video:
>>>> https://www.youtube.com/watch?v=m2qee6j5eew&list=PL4nMP8F3B7bg3cNWWwLG8BX-wER2PeB-3&index=2
>>>>
>>>> Sincerely,
>>>> Tsvi Achler
>>>>
>>>>
>>>>
>>>> On Wed, Oct 27, 2021 at 2:24 AM Schmidhuber Juergen <juergen at idsia.ch>
>>>> wrote:
>>>>
>>>> Hi, fellow artificial neural network enthusiasts!
>>>>
>>>> The connectionists mailing list is perhaps the oldest mailing list on
>>>> ANNs, and many neural net pioneers are still subscribed to it. I am hoping
>>>> that some of them - as well as their contemporaries - might be able to
>>>> provide additional valuable insights into the history of the field.
>>>>
>>>> Following the great success of massive open online peer review (MOOR)
>>>> for my 2015 survey of deep learning (now the most cited article ever
>>>> published in the journal Neural Networks), I've decided to put forward
>>>> another piece for MOOR. I want to thank the many experts who have already
>>>> provided me with comments on it. Please send additional relevant references
>>>> and suggestions for improvements for the following draft directly to me at
>>>> juergen at idsia.ch:
>>>>
>>>>
>>>> https://people.idsia.ch/~juergen/scientific-integrity-turing-award-deep-learning.html
>>>>
>>>> The above is a point-for-point critique of factual errors in ACM's
>>>> justification of the ACM A. M. Turing Award for deep learning and a
>>>> critique of the Turing Lecture published by ACM in July 2021. This work can
>>>> also be seen as a short history of deep learning, at least as far as ACM's
>>>> errors and the Turing Lecture are concerned.
>>>>
>>>> I know that some view this as a controversial topic. However, it is the
>>>> very nature of science to resolve controversies through facts. Credit
>>>> assignment is as core to scientific history as it is to machine learning.
>>>> My aim is to ensure that the true history of our field is preserved for
>>>> posterity.
>>>>
>>>> Thank you all in advance for your help!
>>>>
>>>> Jürgen Schmidhuber
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>
>> --
>> Gary Cottrell 858-534-6640 FAX: 858-534-7029
>> Computer Science and Engineering 0404
>> IF USING FEDEX INCLUDE THE FOLLOWING LINE:
>> CSE Building, Room 4130
>> University of California San Diego
>> 9500 Gilman Drive # 0404
>> La Jolla, CA 92093-0404
>>
>> Email: gary at ucsd.edu
>> Home page: http://www-cse.ucsd.edu/~gary/
>> Schedule: http://tinyurl.com/b7gxpwo
>>
>> Listen carefully,
>> Neither the Vedas
>> Nor the Qur'an
>> Will teach you this:
>> Put the bit in its mouth,
>> The saddle on its back,
>> Your foot in the stirrup,
>> And ride your wild runaway mind
>> All the way to heaven.
>>
>> -- Kabir
>>
>