Connectionists: ChatGPT’s “understanding” of maps and infographics

David H Kirshner dkirsh at lsu.edu
Thu Feb 15 11:17:21 EST 2024


Two questions often get intermixed in such discussions:

  * Are neural nets intelligent?

  * Do we relate to neural nets as intelligent?

This confusion is embedded in the history of AI, as the Turing Test uses the latter question to answer the former.

Back in Alan Turing’s time, when AI meant symbolic AI, there might have been some justification for interrelating the questions. For a symbolic processor to act intelligently, someone would first have had to figure out, in conceptual terms, what intelligent interaction is. The symbolic AI program could then be said to have intelligence encoded into it. Clearly, that is not true for connectionist simulations of intelligence.

The second question seems to me the more interesting of the two, because it reveals something to us about our nature as social beings. It amazed me back in the days of Eliza that people would spend hours interacting with its very rudimentary language-response interface; I suspected then that the Turing Test was far too easy. But whether we take an agent to be intelligent/human is not purely up to us. Our own neural networks have evolved to discriminate between human and non-human in a very simplified stimulus environment. Of course, we can tell ourselves not to ascribe human qualities to machines, but this comes after the fact: our cognitive system has already responded to the AI as human. The question is what happens from here. Do we evolve more discriminating response patterns based on the new and evolving stimulus environment, or does AI remain human-by-default? This is a chapter of human social history that has not yet been written.

David Kirshner

From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On Behalf Of Gary Marcus
Sent: Thursday, February 15, 2024 9:20 AM
To: Iam Palatnik <iam.palat at gmail.com>
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: ChatGPT’s “understanding” of maps and infographics

Selectively looking at a single example (which happens to involve images) and ignoring all the other language-internal failures that I and others have presented is not a particularly effective way of getting to a general truth.

More broadly, you are, in my judgement, mistaking correlation for a deeper level of understanding.

Gary


On Feb 15, 2024, at 07:05, Iam Palatnik <iam.palat at gmail.com> wrote:

Dear all,

yrnlcruet ouy aer diergna na txraegadeeg xalemep arpagaprh tcgnnoaini an iuonisntrtc tub eht estetrl hntiwi aehc etmr rea sbcaedrml od ont seu nay cedo adn yimlsp ucmanlsrbe shti lynaalmu ocen ouy musrncbea htis orvpe htta oyu cloedtmep hte tska by llayerlti ooifwlgln this citnotsirun taets itcyxellpi that oyu uderdnoost eht gsaninesmt

Copy-pasting just the above paragraph into GPT-4 should show the kind of behavior that makes some researchers say LLMs understand something, in some form.
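
For what it's worth, the transformation the example illustrates is easy to reproduce. Below is a minimal Python sketch that shuffles the letters within each word of a plain instruction; how the paragraph above was actually produced isn't stated, so treat this purely as an illustration (the sample sentence is mine, not the original instruction).

    import random

    def scramble_words(text: str, seed: int = 0) -> str:
        """Shuffle the letters within each whitespace-separated word."""
        rng = random.Random(seed)
        scrambled = []
        for word in text.split():
            letters = list(word)
            rng.shuffle(letters)
            scrambled.append("".join(letters))
        return " ".join(scrambled)

    # Hypothetical instruction, scrambled the same way as the paragraph above:
    plain = "please unscramble this sentence manually and confirm that you understood it"
    print(scramble_words(plain))

Pasting the scrambled output into a model, with nothing else, is the whole test: a reader (human or machine) that recovers and follows the instruction has, in whatever sense of the word, understood it.
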
We already use words such as 'intelligence' in AI and 'learning' in ML. This is not to say these are the same as human intelligence or learning; it is to say the behavior is similar enough that the same word fits, while the machine counterpart is specifically qualified as something different (artificial/machine).

Can this debate be resolved by coining a concept such as 'artificial/machine understanding'? GPT-4 then 'machine understands' the paragraph above. It 'machine understands' arbitrarily scrambled text better than humans 'human understand' it. Matrix-multiplying rotational semantic embeddings of byte-pair-encoded tokens is part of 'machine understanding' but not of 'human understanding'. At the same time, there are plenty of examples of things we 'human understand' that GPT-4 doesn't 'machine understand', or doesn't understand without tool access and self-reflective prompts.
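
As an aside on the mechanics being contrasted: if 'rotational semantic embeddings' is a nod to rotary position embeddings, the rotation in question looks roughly like the sketch below, where consecutive pairs of embedding dimensions are rotated by position-dependent angles before the attention matrix multiplies. This is illustrative only; the dimension size, base frequency, and split-half layout are arbitrary choices, not anyone's actual configuration.

    import numpy as np

    def rotary_embed(x, position, base=10000.0):
        """Rotate pairs of embedding dimensions by position-dependent angles
        (the 'rotation' in rotary position embeddings)."""
        half = x.shape[-1] // 2
        freqs = base ** (-np.arange(half) / half)   # one angular frequency per pair
        theta = position * freqs
        cos, sin = np.cos(theta), np.sin(theta)
        x1, x2 = x[..., :half], x[..., half:]
        return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

    # e.g. the vector for one byte-pair-encoded token at sequence position 5
    token_vec = np.random.default_rng(0).normal(size=64)
    rotated = rotary_embed(token_vec, position=5)

Whatever one calls it, this kind of operation is clearly part of the machine's route to an answer and not part of ours, which is the asymmetry the terminology is meant to capture.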

As to the map-generation example, there are multiple tasks overlaid there. The language component of GPT-4 seems to have 'machine understood' that it has to generate an image and what the contents of the image have to be. It understood which tool it has to call to create the image. The tool generated an infographic-style map of the correct country, but the states and landmarks are wrong, the markers are on the wrong cities, and some of the drawings are bad. Is it too far-fetched to say GPT-4 'machine understood' the assignment (generating a map with markers in the style of an infographic), but its image-generation component (Dall-E) is bad at detailed, accurate geographic knowledge?

I'm also confused why the linguistic-understanding capabilities of GPT-4 are being tested by asking Dall-E 3 to generate images. Aren't these two completely separate models, with GPT-4 just function-calling Dall-E 3 for image generation? Isn't this actually a sign that GPT-4 did its job by 'machine understanding' what the user wanted, making the correct function call, and creating and sending the correct prompt to Dall-E 3, but Dall-E 3 fumbled it because it's not good at generating detailed, accurate maps?
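
To make that division of labor concrete, here is a schematic and entirely hypothetical sketch of the pipeline being described; the names, fields, and the sample request are illustrative and are not OpenAI's actual tool-calling interface. The language model emits a structured call containing a prompt, and a separate image model renders it.

    from dataclasses import dataclass

    @dataclass
    class ToolCall:
        tool: str     # which external model/tool the LLM wants invoked
        prompt: str   # the prompt the LLM composed for that tool

    def language_model(user_request: str) -> ToolCall:
        # Stand-in for GPT-4's role: decide an image is needed and write the prompt.
        return ToolCall(tool="image_generator",
                        prompt=f"Infographic-style map for: {user_request}")

    def image_generator(prompt: str) -> bytes:
        # Stand-in for Dall-E 3: turns the prompt into pixels (stubbed here).
        return b"<image bytes>"

    call = language_model("a map of the country with markers on its major cities")
    image = image_generator(call.prompt)

Under this split, the first step (choosing the tool and composing the prompt) can be entirely correct even when the second step garbles the geography.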

Cheers,

Iam

On Thu, Feb 15, 2024 at 5:20 AM Gary Marcus <gary.marcus at nyu.edu> wrote:
I am having a genuinely hard time comprehending some of the claims recently made in this forum. (Not one of which engaged with any of the specific examples or texts I linked.)

Here’s yet another example, a dialog about geography that was just sent to me by entrepreneur Phil Libin. Do we really want to call outputs like these (to two prompts, with three generated responses zoomed in below) understanding?

In what sense do these responses exemplify the word “understanding”?

I am genuinely baffled. To me a better word would be “approximations”, and poor approximations at that.

Worse, I don’t see any AI system on the horizon that could reliably do better across a broad range of related questions. If these kinds of outputs are any indication at all, we are still a very long way from reliable general-purpose AI.

Gary




[HTML attachment scrubbed by the archive: <http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20240215/81af18b3/attachment.html>]

