Connectionists: short Op-ed to address AI problems

Weng, Juyang weng at msu.edu
Wed Jun 5 10:58:52 EDT 2024


Dear Gary,
    You wrote, "you must have missed DeepMind’s neurosymbolic AlphaGeometry paper, in Nature, with its state of the art results, beating pure neural nets." Based on this statement, I am afraid that you do not have the algorithmic expertise to understand my Post-Selection misconduct allegation against all of Alphabet's AI projects so far:
    It is my ethical duty to inform you that I have alleged "Deep Learning", including LLMs such as ChatGPT, to be Post-Selection misconduct (cheating and hiding). See J. Weng, "On 'Deep Learning' Misconduct", ISAIC 2022, and also https://ui.adsabs.harvard.edu/abs/2022arXiv221116350W/abstract
    If you train multiple systems, each using different parameters, you need to report the average error of all trained systems on a validation set. Better still, also report the minimum, 25th-percentile, median, 75th-percentile, and maximum of the ranked errors on the validation set. If you report only the error of the luckiest few networks on the validation set, you grossly underestimate the error on a future new test: the luck on the validation set does not transfer to similar luck on a new future test.
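    A minimal numerical sketch (a toy simulation offered only for illustration, not an experiment from the papers above) makes the point concrete: assume thirty candidate networks all share the same true error rate, measure each on a finite validation set, and compare reporting only the luckiest one against reporting the average and quantiles.

        import numpy as np

        rng = np.random.default_rng(0)

        n_models = 30      # candidate networks, e.g., trained with different hyperparameters
        true_error = 0.20  # assume every candidate has the same true error rate
        n_val = 500        # validation set size
        n_test = 500       # size of a future, unseen test set

        # Each network's measured validation error is a noisy estimate of its true error.
        val_err = rng.binomial(n_val, true_error, size=n_models) / n_val
        test_err = rng.binomial(n_test, true_error, size=n_models) / n_test

        lucky = int(np.argmin(val_err))   # the "luckiest" network on the validation set

        print("reported (lucky) validation error:", val_err[lucky])
        print("same network's future test error: ", test_err[lucky])
        print("mean validation error, all nets:  ", val_err.mean())
        print("min/25%/50%/75%/max of val errors:",
              np.percentile(val_err, [0, 25, 50, 75, 100]))

    With these assumed numbers, the selected network's validation error typically falls well below 0.20, while its error on the fresh test set stays near 0.20; the mean and the reported quantiles over all trained networks remain honest estimates.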
    Please see this Newsletter for the latest developments, if there is such a future test: https://www.cse.msu.edu/amdtc/amdnl/CDSNL-V18-N1.pdf
    If you are interested in discussing this important matter, I invite you to write an [AI Crisis] Dialogue. See https://www.cse.msu.edu/amdtc/amdnl/CDSNL-V18-N2.pdf
    Best regards,
-John Weng
Brain-Mind Institute
________________________________
From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Gary Marcus <gary.marcus at nyu.edu>
Sent: Wednesday, June 5, 2024 8:41 AM
To: Stephen José Hanson <jose at rubic.rutgers.edu>
Cc: connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: Re: Connectionists: short Op-ed to address AI problems

Wow, Stephen, you have outdone yourself. This note is a startling mixture of rude, condescending, inaccurate, and uninformed. A work of art!

To correct four misunderstandings:
1. Yes, my essay was written before LLMs were popular (though around the time Transformers were proposed, as it happens). It was, however, precisely “a moonshot idea, that doesn't involve leaving the blackbox in the hands of corporate types who value profits over knowledge.” Please read what I wrote. It’s one page, linked below, and you obviously couldn’t be bothered. (Parenthetically, I was one of the first people to warn that OpenAI was likely to be problematic, and have done so repeatedly at my Substack.)
2. My argument throughout (back to 2012 in the New Yorker, 2018 in my Deep Learning: A Critical Appraisal, etc.) has been that deep learning has some role but cannot solve all things, and that it would not be reliable on its own. From 2019 onwards I emphasized many of the social problems that arise from relying on such unreliable architectures. I have never wavered from any of that. (Again, please read my work before so grossly distorting it.) Unreliable systems that are blind to truth and values can cause harm (bias), be exploited (to create disinformation), etc. There is absolutely no contradiction there, as I have explained numerous times in my writings.
3. It’s truly rude to dismiss an entire field as “flotsam and jetsam”, and you obviously aren’t following the neurosymbolic literature; e.g., you must have missed DeepMind’s neurosymbolic AlphaGeometry paper in Nature, with its state-of-the-art results beating pure neural nets.
4. Again, nothing has changed about my view; your last remark is gratuitous and based on a misunderstanding.

Truly flabbergasted,
Gary

On Jun 5, 2024, at 05:18, Stephen José Hanson <jose at rubic.rutgers.edu> wrote:



Gary, this was before the LLM discovery. Pierre is proposing a moonshot idea, one that doesn't involve leaving the blackbox in the hands of corporate types who value profits over knowledge. OpenAI seems to be flailing and having serious safety and security issues. It certainly could be a recipe for disaster.

Frankly, your views have been all over the place: DL doesn't work; DL could work but should be merged with the useless flotsam and jetsam from GOFAI over the last 50 years; and now they are too dangerous because they work, but they are unreliable, like most humans.

It's hard to know which of your views to take seriously, as they seem to change so rapidly.

Cheers

Stephen

On 6/4/24 9:53 AM, Gary Marcus wrote:
I would just point out that I first made this suggestion [CERN for AI] in the New York Times in 2017, and several others have since. There is some ongoing effort to try to make it happen; if you search, you will see.

Opinion | Artificial Intelligence Is Stuck. Here’s How to Move It Forward. (Gift Article)
nytimes.com: <https://www.nytimes.com/2017/07/29/opinion/sunday/artificial-intelligence-is-stuck-heres-how-to-move-it-forward.html>


On Jun 3, 2024, at 22:58, Baldi, Pierre <pfbaldi at ics.uci.edu> wrote:


I would appreciate feedback from this group, especially dissenting feedback, on the attached op-ed. You can send it to my personal email, which you can find on my university website, if you prefer. The basic idea is simple:

IF, for scientific, security, or other societal reasons, we want academics to develop and study the most advanced forms of AI, I can see only one solution: create a national or international effort around the largest data/computing center on Earth, with a CERN-like structure comprising permanent staff and thousands of affiliated academic laboratories. There are many obstacles, but none is completely insurmountable if we want to overcome them.

Thank you.

Pierre





<AI-CERN-Baldi2024FF.pdf>

--
Stephen José Hanson
Professor of Psychology
Director of RUBIC
Member of Exc Comm RUCCS