Connectionists: Stephen Hanson in conversation with Geoff Hinton

Juyang Weng juyang.weng at gmail.com
Tue Feb 15 12:32:49 EST 2022


Dear Asim,
(A) You wrote: "you mean a certain kind of mathematical formulation can
give rise to consciousness?"
The maximum likelihood mathematical formulation is not a sufficient
condition for conscious learning, but a necessary condition.
Without it, error minimization gets trapped in local minima, and this
local-minima issue is a CENTRAL issue for people on this list.
The local-minima problems, including those in the work behind the 2018
Turing Award, have cast a lot of doubt on neural networks.
For example, minimizing supervised motor errors in backprop deep learning
has two consequences:
(1) It violates the sensorimotor recurrence that is necessary for
conscious learning (all big-data approaches violate it), and
(2) It requires Post-Selections, which amount to a rarely
disclosed protocol flaw:  any such product carries a big uncertainty, because
each customer of CNN, LSTM and ELM systems has to cast a die (see the
sketch below).
Either one of (1) and (2) above is sufficient to make it impossible for
such neural networks to learn consciousness.
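
For readers who have not seen the Post-Selection argument, here is a
minimal sketch of the protocol flaw (my illustration, not the PSUTS
paper's experiment; all numbers are hypothetical).  Many networks are
trained from different random seeds, and only the luckiest test error is
reported:

import numpy as np

rng = np.random.default_rng(0)
n_networks = 20    # networks trained from different random seeds
true_error = 0.15  # assumed true generalization error of every network
test_size = 1000   # size of the test set used for the selection

# Observed test errors fluctuate around the true error (binomial noise).
observed = rng.binomial(test_size, true_error, size=n_networks) / test_size

print("reported (post-selected) error:", observed.min())   # below 0.15
print("expected deployment error:", observed.mean())       # about 0.15

The reported minimum is systematically below the true error rate, while a
customer who receives one of these networks at random should expect the
average; that is the "cast a die" uncertainty.
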
Of course, as I posted yesterday, there are about 20 million-dollar
problems that prevent such neural networks from learning consciousness.  All
of these 20 million-dollar problems must be solved before one can claim to
learn consciousness.

(B) You need to spend more time on this; as a mathematician, you can
understand it.
The ML optimality in DN and the minimization of fitting errors in CNN, LSTM
and ELM are greatly different.
The former optimality has been mathematically proven (Weng IJIS 2015).
The latter optimality has never been proven.  The formulation is superficial
(it concerns only the fitting error and does not study the distributions of
system weights).   I have proved in the paper below that error-backprop
depends on casting a die.
J. Weng, "On Post Selections Using Test Sets (PSUTS) in AI", in Proc.
International Joint Conference on Neural Networks, pp. 1-8, Shengzhen,
China, July 18-22, 2021. PDF file
<http://www.cse.msu.edu/%7eweng/research/PSUTS-IJCNN2021rvsd-cite.pdf>.
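
In symbols, the contrast I mean is roughly the following (my shorthand,
not the notation of either paper).  Maximum likelihood is a statement
about a probability model of the data and the parameters,

    \hat{\theta}_{ML} = \arg\max_{\theta} \, p(X \mid \theta),

while fitting-error minimization only scores residuals on a sample,

    \hat{\theta}_{fit} = \arg\min_{\theta} \sum_{i=1}^{n} \ell(f(x_i; \theta), y_i),

and by itself says nothing about the distribution of the learned weights.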

Best regards,
-John





On Mon, Feb 14, 2022 at 11:16 PM Asim Roy <ASIM.ROY at asu.edu> wrote:

> Dear John,
>
>
>
>    1. On your statement that “Maximum likelihood: DN formulation that gives rise
>    to brain-like consciousness” – you mean a certain kind of mathematical
>    formulation can give rise to consciousness? I wish you were right. That
>    would solve our consciousness problem, and I don’t know why others are
>    arguing about it in a different Connectionists email chain. You should
>    claim this on that chain and get some feedback.
>
>
>
>    2. On your statement that “Do you mean the difference between maximum
>    likelihood and a specially defined minimization of a cost function is not a
>    whole lot?” – I have not studied this deeply, but did a quick search.
>    For some distributions, they can be equivalent. Here are a few blogs.
>    Again, I didn’t go through their mathematics.
>
>
>
> Why Squared Error Minimization = Maximum Likelihood Estimation | Abhimanyu
> (abhimanyu08.github.io)
> <https://abhimanyu08.github.io/blog/deep-learning/mathematics/2021/06/18/final.html>
>
> Linear Regression. A unification of Maximum Likelihood… | by William
> Fleshman | Towards Data Science
> <https://towardsdatascience.com/linear-regression-91eeae7d6a2e>
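>
> For reference, the standard equivalence these posts derive assumes i.i.d.
> Gaussian noise, y_i = f(x_i; \theta) + \epsilon_i with \epsilon_i ~
> N(0, \sigma^2).  Then the log-likelihood is
>
>     \log L(\theta) = -\frac{n}{2} \log(2\pi\sigma^2)
>                      - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - f(x_i; \theta))^2,
>
> so \arg\max_{\theta} \log L(\theta) = \arg\min_{\theta} \sum_{i} (y_i - f(x_i; \theta))^2:
> maximum likelihood and squared-error minimization coincide for this
> particular noise model, though not in general.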
>
>
>
>
>
> But your consciousness claim is really an eye-opener. I didn’t know about
> it. You should claim it on the other Connectionists email chain.
>
>
>
> Best,
>
> Asim
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Monday, February 14, 2022 7:26 AM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* Gary Marcus <gary.marcus at nyu.edu>; John K Tsotsos <
> tsotsos at cse.yorku.ca>
> *Subject:* Re: Connectionists: Stephen Hanson in conversation with Geoff
> Hinton
>
>
>
> Dear Asim,
>
> Do you mean the difference between maximum likelihood and a specially
> defined minimization of a cost function is not a whole lot?
>
> Maximum likelihood: DN formulation that gives rise to brain-like
> consciousness.
>
> Deep Learning: minimize an error rate with supervised class labels.
>
> John
>
>
>
> On Sun, Feb 13, 2022 at 6:17 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> John,
>
>
>
> I don’t think this needs a response. There are some differences, but I
> don’t think a whole lot.
>
>
>
> Asim
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Sunday, February 13, 2022 1:22 PM
> *To:* Asim Roy <ASIM.ROY at asu.edu>; Gary Marcus <gary.marcus at nyu.edu>
> *Cc:* John K Tsotsos <tsotsos at cse.yorku.ca>
> *Subject:* Re: Connectionists: Stephen Hanson in conversation with Geoff
> Hinton
>
>
>
> Dear Asim:
> You wrote: "If I understand correctly, all learning systems do something
> along the lines of maximum likelihood learning or error minimization, like
> your DN. What's your point?"
>
> "Maximum likelihood learning" (ML) is EXTREMELY different from "error
> minimization" (EM) like what Geoff Hinton's group did.
>
> ML incrementally estimates the best solution from the distribution of a
> huge number of parameters, such as weights, ages, and connection patterns,
> conditioned on (I) incremental learning, (II) a learning experience, and
> (III) a limited computational resource, such as the number of neurons.
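>
> As a toy illustration of condition (I) under the resource bound in (III)
> (my example, not DN's actual update rule): the maximum-likelihood
> estimate of a Gaussian mean can be maintained incrementally, one sample
> at a time, with O(1) memory and no stored history:
>
> import numpy as np
>
> rng = np.random.default_rng(0)
> mean_hat, n = 0.0, 0                        # running ML estimate, count
> for x in rng.normal(3.0, 1.0, size=10_000):
>     n += 1
>     mean_hat += (x - mean_hat) / n          # exact incremental update
> print(mean_hat)                             # approaches the true mean, 3.0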
>
>
>
> EM (like what Geoff Hinton's group did) only finds the luckiest network
> among multiple trained networks, without condition (III) above.  None of
> the trained networks estimates the distribution of the huge number of
> parameters.  Thus, they are all local minima, actually very bad local
> minima, because their error-backprop does not have competition, as I
> explained in my YouTube talk:
> BMTalk 3D Episode 6: Did Turing Awards Go to Fraud?
> https://youtu.be/Rz6CFlKrx2k
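>
> ("Competition" here, in DN-style networks, usually refers to a top-k
> winner-take-all step among neuronal responses.  A minimal sketch of such
> a rule, assuming a simple top-k cutoff; this is an illustration, not
> DN's exact mechanism:)
>
> import numpy as np
>
> def topk_competition(responses, k=1):
>     """Zero all but the k largest responses (winner-take-all)."""
>     out = np.zeros_like(responses)
>     winners = np.argsort(responses)[-k:]    # indices of the k winners
>     out[winners] = responses[winners]
>     return out
>
> print(topk_competition(np.array([0.2, 0.9, 0.5, 0.7]), k=2))
> # [0.  0.9 0.  0.7] -- only the two strongest neurons respond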
>
>
>
> I am writing a paper in which I have proved that, without condition (III)
> above, a special nearest-neighbor classifier I designed can give any
> non-zero verification error rate and any non-zero test error rate!
>
>
>
> Best regards,
>
> -John
>
>
>
> --
>
> Juyang (John) Weng
>
>
>
>
> --
>
> Juyang (John) Weng
>


-- 
Juyang (John) Weng

