Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate

Gary Marcus gary.marcus at nyu.edu
Tue Feb 13 17:20:54 EST 2024


Those are fine keywords, but not real counterarguments with enough depth to have a conversation around, nor solutions to any of the specific problems I raised.

> On Feb 13, 2024, at 2:01 PM, Weng, Juyang <weng at msu.edu> wrote:
> 
> Dear Gary,
>     I read your posted articles, but somebody may say that the arguments in your posts are as shallow and unconvincing as Geoffrey Hinton's.
>     For example, you wrote, "lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc."  But that argument is shallow, because human kids may not have those capacities either.  
>     You overlooked a 5th aspect beyond the four aspects of learning (see my book Natural and Artificial Intelligence, NAI): 
>     (1) Learning framework (e.g., incremental vs. batch or DP); 
>     (2) Sensors and effectors;
>     (3) Internal representations; 
>     (4) Computational resources; and 
>     (5) Learning experience.
>     Do the above 5 aspects give you more convincing reasons to criticize Deep Learning and LLMs? 
>     Best regards,
> -John Weng
>  
> On Tue, Feb 13, 2024 at 12:55 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
> Thanks for your question. Why should we think Generative AI systems lack understanding? Over the last couple of weeks, since Hinton’s October critique of me surfaced, I have written five essays on the matter, filled with examples, each exploring a different facet of the issues, in both chatbots and image-generation systems:
> https://open.substack.com/pub/garymarcus/p/deconstructing-geoffrey-hintons-weakest
> https://open.substack.com/pub/garymarcus/p/further-trouble-in-hinton-city
> https://open.substack.com/pub/garymarcus/p/there-must-be-some-misunderstanding
> https://open.substack.com/pub/garymarcus/p/horse-rides-astronaut-redux
> https://open.substack.com/pub/garymarcus/p/statistics-versus-understanding-the
> A few sample visuals from those below (read the essays for sources). The argument in brief is that the systems do fine in statistically canonical situations, but lack depth in their comprehension of complex syntax, negation, unusual circumstances, etc. Much more context (and many more examples) in the essays themselves.
> 
> Gary
> 
> 
> From: Weng, Juyang
> Sent: Tuesday, February 13, 2024 12:36 PM
> To: gary.marcus at nyu.edu
> Cc: connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate
>  
> Dear Gary, 
>     You wrote, "LLMs do not really understand what they are saying".
>     Those LLMs generated text in a natural language, didn't they?
>     Why do you say that LLMs do not understand such text?   
>     The true answer to this question is not as simple as you believe!  What you "believe" is not convincing or intuitive to many laypeople or to the media!  
>     That is why Geoffrey Hinton can simply take potshots at you without valid analysis.
>     Best regards,
> -John Weng
> Brain-Mind Institute
>  
> On Tue, Feb 13, 2024 at 12:49 AM Gary Marcus <gary.marcus at nyu.edu> wrote:
> Geoff Hinton recently asked:
> 
> Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat.
> 
> That’s an easy one. The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.)
> 
> I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographically diverse, capable of building vaccines, etc. For example, in an interview with AFP, posted at TechXplore, I said that I thought the extinction threat was 'overblown': https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html
> 
> My actual view is captured here: https://garymarcus.substack.com/p/ai-risk-agi-risk
> Although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems. I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control); in the near term I am worried about what I will call “MAI risk”—Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. …
> Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often cashes out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. …
> We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them.
> LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding) they are excellent mimics, and their inability to fact-check works to the advantage of bad actors who wish to exploit them. 
> 
> But that doesn’t mean they are remotely likely to literally end the species. 
> 
> I hope that clarifies my view.
> Gary
> 
> 
> --
> Juyang (John) Weng


