Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate

Grossberg, Stephen steve at bu.edu
Wed Feb 14 16:34:06 EST 2024


Dear All,

Perhaps some of you might find the following recent article relevant to the discussions in this email thread:

Grossberg, S. (2023). How children learn to understand language meanings: A neural model of adult–child multimodal interactions in real-time. Frontiers in Psychology, 14 (Cognitive Science section), August 2, 2023.
https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1216479/full

Best,

Steve Grossberg
sites.bu.edu/steveg

From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Mitsu Hadeishi <mitsu at well.com>
Date: Wednesday, February 14, 2024 at 5:07 AM
To: travelsummer2006 at yahoo.com <travelsummer2006 at yahoo.com>
Cc: connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: Re: Connectionists: An open letter to Geoffrey Hinton: A call for civilized, moderated debate
Text is generated by the real world, and therefore one can infer structures of the real world from the structural relationships within the text itself. Even our human experience of the world is heavily mediated - no one has "direct" experience of "referents"; it is all mediated by causal relationships which we assume preserve meaningful structure from the world, which is what allows our experiences and thoughts to be useful. But that is still mediated and indirect, as it has to be.

The upshot is that LLMs may well have a better ability to form conceptual models of time than of space (since they operate directly on time series of symbols, while their concepts of space must be inferred from far less direct data) - and we have experiments showing this is in fact the case: LLM temporal reasoning is much better than their spatial reasoning.

But the fact that these concepts are imperfect is not a good argument for the idea that they have no "understanding" whatsoever. For instance, an LLM can correctly "interpret" English-language descriptions of code transformations and refactor code accordingly, which involves conceptual mapping far beyond mere parroting of preexisting training data: the space of possible code transforms is exponentially larger than any training set. Even granting that LLMs are not perfect at this, if they had zero conceptual understanding one would predict that they could only solve coding problems that mimicked samples ingested during training, not apply generalized principles to generate even somewhat correct code.
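To make the kind of task concrete (this is an illustrative example of my own, not from any benchmark): an LLM given only the English instruction below can typically produce the refactored version, even though this exact before/after pair need not appear anywhere in its training data.

```python
# English instruction an LLM might receive:
#   "Replace the accumulator loop with a list comprehension
#    and rename the temporary variable `tmp` to `squares`."

def before(values):
    # Original code: explicit accumulator loop.
    tmp = []
    for v in values:
        tmp.append(v * v)
    return tmp

def after(values):
    # Refactored code the LLM would be expected to produce.
    squares = [v * v for v in values]
    return squares

# Behavior is preserved across the transformation:
assert before([1, 2, 3]) == after([1, 2, 3]) == [1, 4, 9]
```

Producing `after` from `before` plus the instruction requires mapping English concepts ("accumulator loop", "rename") onto the structure of the code, which is the conceptual mapping at issue.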

Another bit of evidence for some form of understanding is that you can combine LLMs with other modalities and tools, and they are able to "use" these new modalities correctly. If the abstractions in LLMs have some relationship to what we might call "understanding", even a very limited one, one would predict that they could be extended and used in this way, adding multimodal capabilities and so on. And that is what we observe.

On Tue, Feb 13, 2024 at 21:56 gabriele.scheler at yahoo.com <gabriele.scheler at yahoo.com> wrote:

Simply put, LLMs represent words by their contexts. They have no referents for the words. That is not understanding. Many tests people have performed show that LLMs can reproduce text, but make errors that stem from not knowing the referential meaning of pieces of text. They mimic understanding - hence the term "statistical parrot".

On Tuesday, February 13, 2024 at 07:15:07 PM GMT+1, Weng, Juyang <weng at msu.edu> wrote:


Dear Gary,
    You wrote, "LLMs do not really understand what they are saying".
    Those LLMs generated text in a natural language, didn't they?
    Why do you say that LLMs do not understand such text?
    The true answer to this question is not as simple as you believe!  What you "believe" is not convincing or intuitive to many laypeople and to the media!
    That is why Geoffrey Hinton can simply take potshots at you without valid analysis.
    Best regards,
-John Weng
Brain-Mind Institute

On Tue, Feb 13, 2024 at 12:49 AM Gary Marcus <gary.marcus at nyu.edu> wrote:
Geoff Hinton recently asked:

Which part of this is a misrepresentation: Gary Marcus believes that LLMs do not really understand what they are saying and Gary Marcus also believes that they are an existential threat.

That’s an easy one. The second, regarding existential threat. I do believe that LLMs do not really understand what they are saying. But I do not believe that LLMs as such pose a (literally) existential threat, nor have I ever said such a thing, not in the Senate, not in my Substack, not here, and not anywhere else. (Anyone with evidence otherwise should step forward.)

I have in fact said the opposite; e.g., I have said that the human species is hard to extinguish, because we are genetically and geographically diverse, capable of building vaccines, etc. For example, in an interview with AFP, posted at TechXplore, I said that I thought the extinction threat was 'overblown': https://techxplore.com/news/2023-06-human-extinction-threat-overblown-ai.html

My actual view is captured here: https://garymarcus.substack.com/p/ai-risk-agi-risk

although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems. I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call “MAI risk”—Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. …

Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often cashes out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access. …

We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them.
LLMs may well pose an existential threat to democracy, because (despite their limited capacity for understanding and fact-checking) they are excellent mimics, and their inability to fact-check is to the advantage of bad actors who wish to exploit them.

But that doesn’t mean they are remotely likely to literally end the species.

I hope that clarifies my view.
Gary


--
Juyang (John) Weng