<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto">
<div>Geoff Hinton recently asked:</div>
<div><br></div>
<div><blockquote type="cite">Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat.</blockquote><br></div>
<div>That’s an easy one: the second, regarding existential threat. I <i>do</i> believe that LLMs do not really understand what they are saying. But I do <i>not</i> believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.)</div>
<div><br></div>
<div>I <i>have</i> in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographically diverse, capable of building vaccines, etc. For example, in an interview with AFP, posted at TechXplore, I said that I thought the extinction threat was 'overblown': <a href="https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html">https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html</a>.</div>
<div><br></div>
<div>My actual view is captured here, <a href="https://garymarcus.substack.com/p/ai-risk-agi-risk">https://garymarcus.substack.com/p/ai-risk-agi-risk</a>:</div>
<div>
<p><i>although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems. I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call “MAI risk”—Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. …</i></p>
<p><i>Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. …</i></p>
<p><i>We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them.</i></p>
</div>
<div>LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding) they are excellent mimics, and their inability to fact-check works to the advantage of bad actors that wish to exploit them.</div>
<div><br></div>
<div>But that doesn’t mean they are remotely likely to literally end the species.</div>
<div><br></div>
<div>I hope that clarifies my view.</div>
<div>Gary</div>
</body></html>