Connectionists: Chomsky's apple

Christopher Warren chris.warren at usu.edu
Wed Mar 22 22:59:31 EDT 2023


I really don't want to interject myself into this conversation - it's an incredible thrill to witness it at all. But this output from ChatGPT (GPT-4) seems a heck of a lot like creativity. It's also just amusing.

ME: could you write me a sea shanty but in the style of the rapper Eminem?

(Verse 1) Yo, we're sailin' on the high seas, ain't nothin' gonna stop us, Wave after wave, we're rippin' through like a monstrous, Beast from the deep, no sleep, we're always movin', Cuttin' through the waters, our rhythm keeps improvin'.

(Chorus) Heave-ho, heave-ho, we're sailin' on this beat, Ridin' on the tide, ain't nothin' gonna knock us off our feet, Heave-ho, heave-ho, we're masters of the sea, Rappin' to the rhythm, just my crew and me.

(Verse 2) Navigatin' the ocean, our compass never lies, Risin' to the challenge, under stormy skies, We spit rhymes like a cannonball, ain't nobody can compete, Our words like anchors, holdin' fast, ain't no defeat.

(Chorus) Heave-ho, heave-ho, we're sailin' on this beat, Ridin' on the tide, ain't nothin' gonna knock us off our feet, Heave-ho, heave-ho, we're masters of the sea, Rappin' to the rhythm, just my crew and me.

(Bridge) Through the darkest nights, the moon, our guide, we follow, Our lyrics sharp like a sword, our hearts never hollow, Together we stand, a brotherhood, can't be broken, The stories of our journey forever remain unspoken.

(Verse 3) Sailin' to the horizon, where the sky meets the sea, The world in our hands, we're breakin' free, This shanty of survival, we spit it like a prayer, No storm too strong, we face it with a glare.

(Chorus) Heave-ho, heave-ho, we're sailin' on this beat, Ridin' on the tide, ain't nothin' gonna knock us off our feet, Heave-ho, heave-ho, we're masters of the sea, Rappin' to the rhythm, just my crew and me.

(Outro) As we drop the anchor, back on solid ground, The echoes of our shanty, a testament, resound, We've conquered every wave, our spirits never bend, This ain't the end, my friend, just a moment to transcend.



Dr. Christopher M. Warren
Department of Psychology, EDUC490
Emma Eccles Jones College of Education and Human Services
2810 Old Main Hill
Utah State University
Logan, UT 84322
chris.warren at usu.edu
cell: 435-265-5704

________________________________
From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Gary Marcus <gary.marcus at nyu.edu>
Sent: Tuesday, March 21, 2023 2:45 AM
To: Dietterich, Thomas <tgd at oregonstate.edu>
Cc: connectionists at mailman.srv.cs.cmu.edu <connectionists at mailman.srv.cs.cmu.edu>
Subject: Re: Connectionists: Chomsky's apple

While I don’t find Tom’s very broad definition of understanding to be useful (eg since it must attribute understanding to a broken clock twice a day), I do think that his second paragraph gets to the heart of the matter. If a “system can correctly handle many questions/contexts, [yet] fails on very similar questions/contexts”, I find it hard to attribute deep understanding. This is the situation with LLMs.

(I sometimes describe this kind of thing as “pointillistic”: success around some specific cloud of observed points, without the kind of uniform abstraction we might hope for.)

I also concur that causal models (quite possibly of a form not yet invented) might be immensely helpful in this regard.

But I don’t think we should expect to find the causal models by “interpreting” LLMs. Behaviorally, they are inherently stochastic, unreliable and heavily data-bound. Their persistent challenges around distribution shift suggest that they are not deriving reliable abstractions at a sufficiently deep level of granularity.

Hence, in my view, the need for a new paradigm.

Gary

On Mar 21, 2023, at 07:21, Dietterich, Thomas <tgd at oregonstate.edu> wrote:



As I have written elsewhere (https://medium.com/@tdietterich/what-does-it-mean-for-a-machine-to-understand-555485f3ad40), I advocate a purely functional definition of “understanding”. Namely, that a system “understands” something if it “does the right thing” in response. It understands a hammer if it uses the hammer in appropriate contexts; it understands a question if it provides an appropriate answer.



However, a shortcoming of my Medium post is that it only discusses what we might call “point-wise understanding”---providing appropriate responses to individual queries. It doesn’t discuss “systematic understanding”, where the AI system is capable of providing appropriate responses across an entire range of queries or situations. When people complain that an AI system doesn’t “truly” understand, I think they are often saying that while the system can correctly handle many questions/contexts, it fails on very similar questions/contexts. Such a system can’t be trusted to produce the right behavior, in general. An attractive aspect of causal models is that they (usually) provide this kind of systematic understanding.



As many readers of this list have pointed out, it is difficult to establish the extent to which a system exhibits systematic understanding. Obviously any one failure to behave correctly demonstrates a lack of systematic understanding, but without access to the causal structure of the system’s internal processing, it is hard to establish the range over which the system will behave systematically. Even with access to the weights of these large language models, this is challenging. I’m excited to see advances in experimental strategies for understanding the behavior of these nets. Here are a couple of recent papers that I like:



Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations,

Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, Noah D. Goodman. arXiv:2303.02536



Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals.

Yanai Elazar, Shauli Ravfogel, Alon Jacovi, Yoav Goldberg

https://aclanthology.org/2021.tacl-1.10/

--Tom





Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559

School of Electrical Engineering              FAX: 541-737-1300

  and Computer Science                        URL: eecs.oregonstate.edu/~tgd

US Mail: 1148 Kelley Engineering Center

Office: 2067 Kelley Engineering Center

Oregon State Univ., Corvallis, OR 97331-5501



From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On Behalf Of Anand Ramamoorthy
Sent: Monday, March 20, 2023 02:42
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Chomsky's apple



[This email originated from outside of OSU. Use caution with links and attachments.]

Hi All,

                   This is an interesting/entertaining discussion. "Understanding" has always been a somewhat nebulous concept. In the late 90s, Roger Penrose held (and continues to hold, if I am not mistaken) that, at least in terms of mathematical "understanding", such a phenomenon couldn't possibly be captured by an effective procedure. I was sympathetic to this view in my early academic life but currently believe my old self was likely wrong :)

 With advanced generative models mucking about, "understanding" is a more contentious (and less purely academic) topic now than it may have been decades ago.



Some things I have been thinking about recently:



1. We all understand things to varying degrees, and know of ways to improve said understanding. It is possible for us to understand something more precisely or deeply with experience or due diligence (zooming out, this reflects humanity's intellectual trajectory as a species... unless people believe there was a magical time when the ancients knew it all). Insofar as human understanding (individual, collective, and historical) is a phenomenon marked by change, incremental as well as dramatic (perhaps someone has modelled this as a self-organised criticality instance a la Bak & Sneppen's model of evolution, or the original BTW?), is it not reasonable to expect attempts to capture aspects of human intelligence in machines to have a similar characteristic? In other words, ChatGPT's "understanding" may be rudimentary as opposed to nonexistent?

Looking at the counterexamples, I am struck by how we could do the same with humans on a range of topics/issues and demonstrate/claim understanding or the lack thereof.

Our (mis)understandings define our brief lives.



2. Unless one embraces some sort of irreducibility argument, I do not see why what humans can do cannot be captured by an artificial learning system.



3. Would it help to speak of "understanding" as not just having useful internal representations but a capacity for "representational parsimony"? This of course is intimately connected to generation of "insights" and getting at the causal structure of the world.



4. Given 1-3 above, how do we a) define understanding? (yeah, very original, I know!) and b) diagnose it / disambiguate it from behaviours that resemble it?



Live Long and Prosper



P.S: Regardless of what you make of my understanding or lack thereof, the contents of this email were generated by a human (moi) typing on a keyboard that is slightly the worse for wear :)



Anand Ramamoorthy







On Saturday, 18 March 2023 at 17:17:37 GMT, Kagan Tumer <kagan.tumer at oregonstate.edu<mailto:kagan.tumer at oregonstate.edu>> wrote:





I'm very reluctant to use the word "understand" beyond perhaps ChatGPT
understanding the structure of language (statistically). Here's an
example of a ChatGPT response to a simple arithmetic operation where:

1- ChatGPT was wrong;
2- it worked out an example that showed it was wrong, but it didn't
register that and doubled down on its wrong conclusion;
3- it gave a high-level explanation (also wrong) of why it was right even
though it was wrong.

You can forgive 1, but 2 and 3 clearly show ChatGPT does not actually
understand what it is saying.

Kagan



On 3/14/23 9:54 AM, Natsuki Oka wrote:
> [This email originated from outside of OSU. Use caution with links and
> attachments.]
>
> Judging from the responses below, ChatGPT understands counting and
> sorting to the extent that it can write a correct Python program, but
> it does not understand well enough to produce the correct execution
> results itself.
> [attached image: count_and_sort.png]

> Here's the program that ChatGPT created:
> ---
> sentences = [
>      "The quick brown fox jumps over the lazy dog",
>      "Python is a popular programming language",
>      "I like to eat pizza for dinner",
>      "The capital of France is Paris",
>      "The cat in the hat wears a red and white striped hat",
>      "My favorite color is blue",
>      "The United States has fifty states",
>      "There are seven days in a week",
>      "I enjoy listening to music while I work",
>      "Mount Everest is the highest mountain in the world"
> ]
>
> # sort the sentences by number of words
> sentences.sort(key=lambda x: len(x.split()))
>
> # print the sorted sentences with the number of words in parentheses
> for sentence in sentences:
>      num_words = len(sentence.split())
>      print(f"{sentence} ({num_words})")
> ---
>
> The execution of this program yields the following correct results:
> ---
> My favorite color is blue (5)
> Python is a popular programming language (6)
> The capital of France is Paris (6)
> The United States has fifty states (6)
> I like to eat pizza for dinner (7)
> There are seven days in a week (7)
> I enjoy listening to music while I work (8)
> The quick brown fox jumps over the lazy dog (9)
> Mount Everest is the highest mountain in the world (9)
> The cat in the hat wears a red and white striped hat (12)
> ---
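A side note on the quoted program (not part of Oka's original message): Python's list.sort is stable, so sentences that tie on word count keep their original relative order, which is why the three six-word sentences above appear in the same order as in the input list. A minimal sketch of that property:

```python
# Python's list.sort is stable: items that compare equal under the key
# keep their original relative order. Here all three sentences tie at
# six words, so sorting by word count leaves the order unchanged.
sentences = [
    "Python is a popular programming language",
    "The capital of France is Paris",
    "The United States has fifty states",
]
sentences.sort(key=lambda x: len(x.split()))
print(sentences)  # order unchanged by the stable sort
```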
>
> Oka Natsuki
> Miyazaki Sangyo-keiei University

>


--
Kagan Tumer
Director, Collaborative Robotics and Intelligent Systems Institute
Professor, School of MIME
Oregon State University
http://engr.oregonstate.edu/~ktumer
https://kagantumer.com

