<div dir="ltr"><div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-size:small">Re: the nature of understanding in these models: in <i>Critique of Pure Reason, </i>Kant argued that statistical impressions are only half of the story. Some basic, axiomatic ontology both enables and invokes the need for understanding.<br>In other words, a model could only understand something if it took as input not just the data, but the operators binding that data together, basic recognition that the data <i>exist</i>, and basic recognition that the operators binding the data also exist. <br>Then counterfactuals arise from processing both data and the axioms of its ontology: what can't exist, doesn't exist, can exist, probably exists. The absolute versions: what does exist or what cannot exist, can only be undertaken by reference to the forms in which the data are presented (space and time), so somehow, the brain observes not just input data but the <i>necessary facts of</i> input data. <br><br>This definition of understanding is different from, and independent of, intelligence. A weak understanding is still an understanding, and it is nothing at all if not applying structure to ontological propositions about what can or cannot be.<br>Without ontology and whatever necessary forms that ontology takes (e.g. space and time), the system is always divorced from the information it processes in the sense of Searle's "chinese room". There is no modeling of the information's nature <i>as</i> real or <i>as </i>counterfactual and so there is neither a criterion nor a need for classifying anything as understood or understandable.<br><br>Of course you can get ChatGPT to imitate all the <i>behaviors </i>of understanding, and for me that has made it at least as useful a research assistant as most humans. But I cannot see how it could possibly be subjected, as I am, to the immutable impression that things exist, and hence my need to organize information according to what exactly it is that exists, and what exactly does not, cannot, will not, and so on.<br><br><br><br></div></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 14, 2023 at 4:12 AM Miguel I. Solano <<a href="mailto:miguel@vmindai.com">miguel@vmindai.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Iam, Connectionists,<div><br></div><div>Not an expert by any means but, as an aside, I understand Cremonini's 'refusal' seems to have been subtler than typically portrayed (see P. Gualdo to Galileo, July 29, 1611, <i>Opere</i>, II, 564).</div><div><br></div><div>Best,</div><div>--ms</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 13, 2023 at 5:49 PM Iam Palatnik <<a href="mailto:iam.palat@gmail.com" target="_blank">iam.palat@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Dear Brad, thank you for your insightful answers.</div><div>The compression analogy is really nice, although the 'Fermi-style' problem of estimating whether all of the possible questions and answers one could ask ChatGPT in all sorts of languages could be encoded within 175 billion parameters is definitely above my immediate intuition. It'd be interesting to try to estimate which of these quantities is largest. Maybe that could explain why ~175B seems to be the threshold that made models start sounding so much more natural.</div><div><br></div><div>In regards to generating nonsense, I'm imagining an uncooperative human (say, a fussy child), that refuses to answer homework questions, or just replies with nonsense on purpose despite understanding the question. Maybe that child could be convinced to reply correctly with different prompting, rewards or etc, which kinda mirrors what it takes to transform a raw LLM like GPT-3 onto something like ChatGPT. It's possible we're still in the early stages of learning how to make LLM 'cooperate' with us. Maybe we're not asking them questions in a favorable way to extract their understanding, or there's still work to be done regarding decoding strategies. Even ChatGPT probably sounds way less impressive if we start tinkering too much with hyperparameters like temperature/top-p/top-k. Does that mean it 'understands' less when we change those parameters? I agree a lot of the problem stems from the word 'understanding' and how we use it in various contexts.</div><div><br></div><div>A side note, that story about Galileo and the telescope is one of my favorites. The person that refused to look through it was <a href="https://en.wikipedia.org/wiki/Cesare_Cremonini_(philosopher)" target="_blank">Cremonini</a>.</div><div><br></div><div><br></div><div>Cheers,</div><div><br></div><div>Iam<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 13, 2023 at 10:54 AM Miguel I. Solano <<a href="mailto:miguel@vmindai.com" target="_blank">miguel@vmindai.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Geoff, Gary, Connectionists,<div><br></div><div>To me the risk is ChatGPT and the like may be 'overfitting' understanding, as it were. (Especially at nearly a hundred billion parameters.)</div><div><br></div><div>--ms</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 13, 2023 at 6:56 AM Barak A. Pearlmutter <<a href="mailto:barak@pearlmutter.net" target="_blank">barak@pearlmutter.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Geoff,<br>
<br>
> He asked [ChatGPT] how many legs the rear left side of a cat has.<br>
> It said 4.<br>
<br>
> I asked a learning disabled young adult the same question. He used the index finger and thumb of both hands pointing downwards to represent the legs on the two sides of the cat and said 4.<br>
> He has problems understanding some sentences, but he gets by quite well in the world and people are often surprised to learn that he has a disability.<br>
<br>
That's an extremely good point. ChatGPT is way up the curve, well<br>
above the verbal competence of many people who function perfectly well<br>
in society. It's an amazing achievement, and it's not like progress is<br>
stuck at its level. Exploring its weaknesses is not so much showing<br>
failures but opportunities. Similarly, the fact that we can verbally<br>
"bully" ChatGPT, saying things like "the square root of three is<br>
rational, my wife said so and she is always right", and it will go<br>
along with that, does not imply anything deep about whether it really<br>
"knows" that sqrt(3) is irrational. People too exhibit all sorts of<br>
counterfactual behaviours. My daughter can easily get me to play along<br>
with her plan to become a supervillain. Students knowingly write<br>
invalid proofs on homeworks and exams in order to try to get a better<br>
grade. If anything, maybe we should be a bit scared that ChatGPT seems<br>
so willing to humour us.<br>
</blockquote></div><br clear="all"><div><br></div><span>-- </span><br><div dir="ltr"><div dir="ltr"><div dir="ltr" style="color:rgb(34,34,34)">Miguel I. Solano</div><div dir="ltr" style="color:rgb(34,34,34)">Co-founder & CEO, VMind Technologies, Inc.</div><div dir="ltr"><div style="color:rgb(34,34,34)"><br></div><div><span style="color:rgb(0,0,0)">If you are not an intended recipient of this email,</span><span style="color:rgb(0,0,0)"> </span><span style="color:rgb(0,0,0)">do not read, copy, use, forward or disclose the email or any of its attachments to others. </span><span style="color:rgb(0,0,0)">Instead, please inform the sender and then delete it. Thank you.</span></div></div></div></div>
</blockquote></div>
</blockquote></div><br clear="all"><div><br></div><span>-- </span><br><div dir="ltr"><div dir="ltr"><div dir="ltr" style="color:rgb(34,34,34)">Miguel I. Solano</div><div dir="ltr" style="color:rgb(34,34,34)">Co-founder & CEO, VMind Technologies, Inc.</div><div dir="ltr"><div style="color:rgb(34,34,34)"><br></div><div><span style="color:rgb(0,0,0)">If you are not an intended recipient of this email,</span><span style="color:rgb(0,0,0)"> </span><span style="color:rgb(0,0,0)">do not read, copy, use, forward or disclose the email or any of its attachments to others. </span><span style="color:rgb(0,0,0)">Instead, please inform the sender and then delete it. Thank you.</span></div></div></div></div>
</blockquote></div></div></div>