<html><head></head><body><div class="ydpc546046dyahoo-style-wrap" style="font-family:Helvetica Neue, Helvetica, Arial, sans-serif;font-size:16px;"><div dir="ltr" data-setdir="false"><div dir="ltr" data-setdir="false">Hi All,</div><div dir="ltr" data-setdir="false"> This is an interesting/entertaining discussion. "Understanding" has always been a somewhat nebulous concept. In the late 90s, Roger Penrose held (and continues to hold, if I am not mistaken) that mathematical "understanding", at least, could not possibly be captured by an effective procedure. I was sympathetic to this view early in my academic life but currently believe my old self was likely wrong :)<br></div><div dir="ltr" data-setdir="false"> With advanced generative models mucking about, "understanding" is a more contentious (and less purely academic) topic now than it may have been decades ago. <br></div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">Some things I have been thinking about recently:</div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">1. We all understand things to varying degrees, and know of ways to improve said understanding. It is possible for us to understand something more precisely or deeply with experience or due diligence (zooming out, this reflects humanity's intellectual trajectory as a species...unless people believe there was a magical time when the ancients knew it all, etc.). Insofar as human understanding (individual, collective, and historical) is a phenomenon marked by change, incremental as well as more dramatic (perhaps someone has modelled this as an instance of self-organized criticality, a la Bak & Sneppen's model of evolution or the original BTW sandpile?), is it not reasonable to expect attempts to capture aspects of human intelligence in machines to exhibit a similar characteristic? In other words, ChatGPT's "understanding" may be rudimentary as opposed to nonexistent? 
<br></div><div dir="ltr" data-setdir="false">Looking at the counterexamples, I am struck by how we could run the same exercise on humans across a range of topics/issues and demonstrate (or claim) understanding, or the lack thereof. <br></div><div>Our (mis)understandings define our brief lives. <br></div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">2. Unless one embraces some sort of irreducibility argument, I do not see why what humans can do cannot be captured by an artificial learning system. <br></div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">3. Would it help to speak of "understanding" as not just having useful internal representations but also a capacity for "representational parsimony"? This, of course, is intimately connected to the generation of "insights" and to getting at the causal structure of the world. </div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">4. Given 1-3 above, how do we a) define understanding? (yeah, very original, I know!), and b) diagnose it and disambiguate it from behaviours that merely resemble it? <br></div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">Live Long and Prosper<br></div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">P.S.: Regardless of what you make of my understanding or lack thereof, the contents of this email were generated by a human (moi) typing on a keyboard that is slightly the worse for wear :)<br></div><div dir="ltr" data-setdir="false"><br></div><div class="ydpc546046dsignature">Anand Ramamoorthy<div><br></div></div></div>
<div><br></div><div><br></div>
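P.P.S.: For the curious, the self-organized criticality aside in point 1 can be made concrete with a toy sketch of Bak &amp; Sneppen's evolution model: the least-fit "species" and its neighbours are repeatedly replaced, and the population self-organizes to a critical state punctuated by avalanches of change. This is a minimal illustration of my own (the ring size, step count, and seed are arbitrary choices), not a claim about how understanding actually evolves:

```python
import random

def bak_sneppen(n=64, steps=20000, seed=0):
    """Toy Bak-Sneppen model: n species on a ring, each with a random
    fitness in [0, 1]. At each step, the least-fit site and its two
    neighbours receive fresh random fitnesses. The population
    self-organizes so that most fitnesses sit above a critical
    threshold (about 2/3), with intermittent avalanches of change."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    for _ in range(steps):
        weakest = min(range(n), key=fitness.__getitem__)   # least-fit site
        for j in (weakest - 1, weakest, weakest + 1):      # it and its neighbours mutate
            fitness[j % n] = rng.random()
    return fitness

final = bak_sneppen()
print(f"mean fitness after relaxation: {sum(final) / len(final):.2f}")
```

Starting from a uniform mean of 0.5, the relaxed mean ends up well above it; the punctuated, avalanche-driven dynamics are the loose analogy to incremental-plus-dramatic change in collective understanding.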
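P.P.P.S.: One crude way to operationalize the "representational parsimony" of point 3 is minimum description length: among representations that account for the data equally well, prefer the one yielding the shortest description. Off-the-shelf compression gives a rough, assumption-laden proxy for description length (again, my toy gloss, nothing more):

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Crude MDL proxy: size in bytes of the zlib-compressed data."""
    return len(zlib.compress(data, 9))

# Two 1000-byte "observations": one generated by a short rule, one pure noise.
structured = ("abcd" * 250).encode()
rng = random.Random(0)
unstructured = bytes(rng.randrange(256) for _ in range(1000))

print(description_length(structured))    # small: the regularity is exploitable
print(description_length(unstructured))  # near 1000: no shorter description exists
```

A system whose internal representation of the structured stream is closer to the four-byte rule than to the raw string is, on this reading, exhibiting parsimony; an "insight" would be the jump from the long description to the short one.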
</div><div id="ydp7d3a228byahoo_quoted_9964112995" class="ydp7d3a228byahoo_quoted">
<div style="font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:13px;color:#26282a;">
<div>
On Saturday, 18 March 2023 at 17:17:37 GMT, Kagan Tumer &lt;kagan.tumer@oregonstate.edu&gt; wrote:
</div>
<div><br></div>
<div><br></div>
<div><div dir="ltr"><br clear="none">I'm very reluctant to use the word "understand" beyond perhaps ChatGPT understanding the structure of language (statistically). Here's an example of a ChatGPT response to a simple arithmetic operation where:<br clear="none"><br clear="none">1- ChatGPT was wrong;<br clear="none">2- it worked out an example showing it was wrong, but didn't register that and doubled down on its wrong conclusion;<br clear="none">3- it gave a high-level explanation (also wrong) of why it was right even though it was wrong.<br clear="none"><br clear="none">You can forgive 1, but 2 and 3 clearly show ChatGPT does not actually understand what it is saying.<br clear="none"><br clear="none">Kagan<br clear="none"><br clear="none"><br clear="none"><br clear="none">On 3/14/23 9:54 AM, Natsuki Oka wrote:<br clear="none">> [This email originated from outside of OSU. Use caution with links and attachments.]<br clear="none">> <br clear="none">> Judging from the responses below, ChatGPT understands counting and sorting well enough to write a correct Python program, but not well enough to produce the correct execution results itself.<br clear="none">> [image: count_and_sort.png]<div class="ydp7d3a228byqt2482450717" id="ydp7d3a228byqtfd72313"><br clear="none">> Here's the program that ChatGPT created:<br clear="none">> ---<br clear="none">> sentences = [<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"The quick brown fox jumps over the lazy dog",<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"Python is a popular programming language",<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"I like to eat pizza for dinner",<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"The capital of France is Paris",<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"The cat in the hat wears a red and white striped hat",<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"My favorite color is blue",<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"The United States has fifty states",<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"There are seven days in a week",<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"I enjoy listening to music while I work",<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;"Mount Everest is the highest mountain in the world"<br clear="none">> ]<br clear="none">> <br clear="none">> # sort the sentences by number of words (the sort is stable, so ties keep their original order)<br clear="none">> sentences.sort(key=lambda x: len(x.split()))<br clear="none">> <br clear="none">> # print the sorted sentences with the number of words in parentheses<br clear="none">> for sentence in sentences:<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;num_words = len(sentence.split())<br clear="none">> &nbsp;&nbsp;&nbsp;&nbsp;print(f"{sentence} ({num_words})")<br clear="none">> ---<br clear="none">> <br clear="none">> The execution of this program yields the following correct results:<br clear="none">> ---<br clear="none">> My favorite color is blue (5)<br clear="none">> Python is a popular programming language (6)<br clear="none">> The capital of France is Paris (6)<br clear="none">> The United States has fifty states (6)<br clear="none">> I like to eat pizza for dinner (7)<br clear="none">> There are seven days in a week (7)<br clear="none">> I enjoy listening to music while I work (8)<br clear="none">> The quick brown fox jumps over the lazy dog (9)<br clear="none">> Mount Everest is the highest mountain in the world (9)<br clear="none">> The cat in the hat wears a red and white striped hat (12)<br clear="none">> ---<br clear="none">> <br clear="none">> Oka Natsuki<br clear="none">> Miyazaki Sangyo-keiei University</div><br clear="none">> <br clear="none"><br clear="none"><br clear="none">-- <br clear="none">Kagan Tumer<br clear="none">Director, Collaborative Robotics and Intelligent Systems Institute<br clear="none">Professor, School of MIME<br clear="none">Oregon State University<br clear="none"><a shape="rect" href="http://engr.oregonstate.edu/~ktumer" rel="nofollow" target="_blank">http://engr.oregonstate.edu/~ktumer</a><br clear="none"><a shape="rect" href="https://kagantumer.com" rel="nofollow" target="_blank">https://kagantumer.com</a><div class="ydp7d3a228byqt2482450717" id="ydp7d3a228byqtfd58820"><br clear="none"></div></div></div>
</div>
</div></body></html>