<div dir="ltr">Iam, to answer your very good questions.... <div><br></div><div>The mismatch between the # weights and the database of tokens it indexes can be explained as compression. Likewise a jpeg image can provide a lossy lookup table for any pixel value in a 1024x1024 image even though the # of bits in the jpeg file + decoder is much smaller than the # of pixels would seem to require. </div><div><br></div><div>Re: creating something new, this is part of a complex debate about interpolation vs extrapolation. The formula y = 4x + 3 can also provide an infinite number of "new" values compared to the 2 data points that defined the equation but we don't think this equation is creating something new. One can also ask whether an algorithm that generates new music in the style of Brahms, or even combining two music styles together is truly creative, or is it just exploring a space in between those styles? If such models were retrained on their own outputs for a long time, would they eventually generate fundamentally new styles of music? Would their output devolve into random noise? or would their output be forever trapped in a bubble defined by their original input?</div><div><br></div><div>re: Understanding: One obstacle to defining this productivity as evidence of understanding is that variations of those questions can easily produce nonsense answers, as this thread illustrates. </div><div><br></div><div>I think it is pretty clear that ChatGPT has a better model of english than a dog or Alexa. ChatGPT is quite good at the syntax. It is less clear that it is good at the semantic aspects given all the counterexamples one can generate. However it is not clear what it means to "understand English", which makes this question hard to answer. </div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 10, 2023 at 3:30 AM Iam Palatnik <<a href="mailto:iam.palat@gmail.com">iam.palat@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>I feel as if I have heard the argument that LLMs or other generative models are retrieving text (or images, in the case of Stable Diffusion) from a database, but I don't understand the origin of this argument. Isn't a model like ChatGPT, at the end of the day, just a list of weight matrices? The 175 billion weights in those matrices surely can't directly hold the trillions of tokens seen during training in a retrievable format, so isn't this enough to say that the model is almost surely not doing a direct retrieval of text from within itself? I might have misunderstood the wording.</div><div><br></div><div>When such models generate input that is different from the training data by whatever metric, what is the main obstacle in saying that they created something new?</div><div>When the model correctly answers to tasks it has never previously seen, in well formed language, what is the main obstacle in saying it understood something?</div>When a dog reacts to a command and sits or fetches, or when Alexa reacts to a command and turns the lights on, what sets these two scenarios significantly apart in terms of 'understanding'? 
On Fri, Mar 10, 2023 at 3:30 AM Iam Palatnik <iam.palat@gmail.com> wrote:

I feel as if I have heard the argument that LLMs or other generative models are retrieving text (or images, in the case of Stable Diffusion) from a database, but I don't understand the origin of this argument. Isn't a model like ChatGPT, at the end of the day, just a list of weight matrices? The 175 billion weights in those matrices surely can't directly hold the trillions of tokens seen during training in a retrievable format, so isn't this enough to say that the model is almost surely not doing a direct retrieval of text from within itself? I might have misunderstood the wording.

When such models generate output that is different from the training data by whatever metric, what is the main obstacle to saying that they created something new? When a model correctly answers tasks it has never previously seen, in well-formed language, what is the main obstacle to saying it understood something? When a dog reacts to a command and sits or fetches, or when Alexa reacts to a command and turns the lights on, what sets these two scenarios significantly apart in terms of "understanding"? And then, would it be too unfair to say ChatGPT maybe understands English better than both the dog and Alexa?

On Thu, Mar 9, 2023 at 4:47 AM Stefan C Kremer <skremer@uoguelph.ca> wrote:
One of the criticisms against John Searle’s argument (https://en.wikipedia.org/wiki/Chinese_room) has always been that it wouldn’t be possible to construct a book comprehensive enough to answer all the queries, or that it would take too long to produce an output. ChatGPT shows that we have at least approached that limitation (perhaps not truly overcome it… yet).

The question posed by Searle (and answered with a “yes” by Chomsky in his thinking about counterfactuals, causal explanation, and thinking) is: is there a difference between understanding and simulated understanding?

I don’t know how we could ever answer this question (and to me that’s the important point), but it seems that Searle’s thought experiment becomes more relevant, now that a feasible implementation can be constructed, than when it was originally proposed.

Stefan
On Mar 8, 2023, at 12:25 PM, Terry Sejnowski <terry@salk.edu> wrote:
I have always been impressed with Chomsky's ability to use plausible arguments to make his case even when they were fallacious.

https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html#commentsContainer

"Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking."

His apple example sounded convincing, so I tested GPT on it:

Me: If I hold an apple in my hand and let it go what will happen?

GPT: The apple will fall to the ground due to the force of gravity.

Me: What about any such object?

GPT: Any object released from a person's hand will fall to the ground due to the force of gravity.

Me: What would happen if there wasn't a force of gravity?

GPT: If there were no force of gravity, objects would not be able to stay on the ground and would float away in all directions.

Chomsky defined thinking by the correct answers to these questions.

Alas, the tide has turned.

Terry

-----
--
Brad Wyble
Professor of Psychology
Penn State University