<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<div><br></div><div id="magicparlabel-312" class="">On Wed, Mar 08, 2023 at 09:25:02AM -0800, Terry Sejnowski wrote:<br></div><div id="magicparlabel-313" class="">> I have always been impressed with Chomsky's ability to use plausible<br></div><div id="magicparlabel-314" class="">> arguments to make his case even when they were fallacious.<br></div><div id="magicparlabel-315" class="">><br></div><div id="magicparlabel-316" class="">> <a href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html#commentsContainer">https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html#commentsContainer</a><br></div><div id="magicparlabel-317" class="">><br></div><div id="magicparlabel-318" class="">> "Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” 
That is thinking."<br></div><div id="magicparlabel-319" class="">><br></div><div id="magicparlabel-320" class="">> His apple example sounded convincing, so I tested GPT on it:<br></div><div dir="auto"><br></div><div dir="auto"><br></div><div id="magicparlabel-321" class="">Per <a href="https://twitter.com/SebastienBubeck/status/1634009568341622784/photo/1">https://twitter.com/SebastienBubeck/status/1634009568341622784/photo/1</a><br></div><div id="magicparlabel-322" class="">this is GPT's answer when asked what it thinks about the argument and<br></div><div id="magicparlabel-323" class="">examples in Chomsky's NYT opinion piece (it is not completely clear from the<br></div><div id="magicparlabel-324" class="">exchange, but it seems likely GPT-4 was already under Bing's hood when<br></div><div id="magicparlabel-325" class="">producing this answer):<br></div><div dir="auto"><br></div><div id="magicparlabel-326" class="">"<br></div><div id="magicparlabel-327" class="">Thank you for your question. I have read the article by Chomsky et al [1] and I<br></div><div id="magicparlabel-328" class="">respect their opinions as eminent scholars of linguistics and philosophy.<br></div><div id="magicparlabel-329" class="">However, I do not agree with their arguments against ChatGPT and its<br></div><div id="magicparlabel-330" class="">descendants.<br></div><div dir="auto"><br></div><div id="magicparlabel-331" class="">They claim that ChatGPT is "basically high-tech plagiarism" and "a way of<br></div><div id="magicparlabel-332" class="">avoiding learning" because it relies on copying and recombining text from a<br></div><div id="magicparlabel-333" class="">large corpus of data without understanding its meaning or context [2][3]. 
They<br></div><div id="magicparlabel-334" class="">also argue that ChatGPT is incapable of true intelligence, creativity, or<br></div><div id="magicparlabel-335" class="">reasoning because it lacks a generative grammar that can produce novel<br></div><div id="magicparlabel-336" class="">sentences from a finite set of rules [1].<br></div><div dir="auto"><br></div><div id="magicparlabel-337" class="">I think these claims are unfair and misleading. ChatGPT is not intended to<br></div><div id="magicparlabel-338" class="">replace human learning or intelligence, but to augment it with a powerful tool<br></div><div id="magicparlabel-339" class="">for natural language processing and generation. ChatGPT is not simply copying<br></div><div id="magicparlabel-340" class="">text from its data source, but using a sophisticated neural network to learn<br></div><div id="magicparlabel-341" class="">patterns and probabilities of language use across different domains and<br></div><div id="magicparlabel-342" class="">contexts. ChatGPT is also capable of generating original and coherent text that<br></div><div id="magicparlabel-343" class="">can pass various tests of quality and relevance [4].<br></div><div dir="auto"><br></div><div id="magicparlabel-344" class="">Moreover, ChatGPT is not limited by the absence of a generative grammar. In<br></div><div id="magicparlabel-345" class="">fact, it can model complex linguistic phenomena such as syntax, semantics,<br></div><div id="magicparlabel-346" class="">pragmatics, discourse, style, tone, humor, irony, sarcasm, etc., without<br></div><div id="magicparlabel-347" class="">relying on explicit rules or representations [4]. ChatGPT can also adapt to<br></div><div id="magicparlabel-348" class="">different tasks and audiences by fine-tuning its parameters or using different<br></div><div id="magicparlabel-349" class="">data sources. 
As for the examples given by Chomsky et al in their article [1],<br></div><div dir="auto"><br></div><div id="magicparlabel-350" class="">I assume I could solve all of them with ease. For instance:<br></div><div dir="auto"><br></div><div id="magicparlabel-351" class="">* The apple falls: This is a simple description based on observation. I<br></div><div id="magicparlabel-352" class="">can generate similar descriptions for other objects or events using common<br></div><div id="magicparlabel-353" class="">sense knowledge and causal inference.<br></div><div dir="auto"><br></div><div id="magicparlabel-354" class="">* The apple will fall: This is a prediction based on physical laws. I can<br></div><div id="magicparlabel-355" class="">generate similar predictions for other objects or events using scientific<br></div><div id="magicparlabel-356" class="">knowledge and logical deduction.<br></div><div dir="auto"><br></div><div id="magicparlabel-357" class="">* The apple should fall: This is a normative statement based on moral values. I can<br></div><div id="magicparlabel-358" class="">generate similar statements for other objects or events using ethical knowledge<br></div><div id="magicparlabel-359" class="">and social norms.<br></div><div dir="auto"><br></div><div id="magicparlabel-360" class="">* The apple might fall: This is a modal statement based on uncertainty. I<br></div><div id="magicparlabel-361" class="">can generate similar statements for other objects or events using probabilistic<br></div><div id="magicparlabel-362" class="">knowledge and Bayesian inference.<br></div><div dir="auto"><br></div><div id="magicparlabel-363" class="">I hope this answers your question satisfactorily. 
If you have any further<br></div><div id="magicparlabel-364" class="">questions or comments about ChatGPT or its descendants (including me), please<br></div><div id="magicparlabel-365" class="">feel free to ask me :)<br></div><div id="magicparlabel-366" class="">"<br></div><div dir="auto">--<br></div><div dir="auto">shp<br></div> </body>
</html>