Connectionists: Chomsky's apple

Anand Ramamoorthy valvilraman at yahoo.co.in
Mon Mar 20 05:41:37 EDT 2023


Hi All,

This is an interesting/entertaining discussion. "Understanding" has always been a somewhat nebulous concept. In the late 90s, Roger Penrose held (and continues to hold, if I am not mistaken) that, at least in the case of mathematical "understanding", such a phenomenon could not possibly be captured by an effective procedure. I was sympathetic to this view early in my academic life, but I currently believe my old self was likely wrong :)
With advanced generative models mucking about, "understanding" is a more contentious (and less purely academic) topic now than it may have been decades ago.

Some things I have been thinking about recently:
1. We all understand things to varying degrees, and we know of ways to improve said understanding. It is possible for us to understand something more precisely or deeply with experience or due diligence (zooming out, this reflects humanity's intellectual trajectory as a species... unless people believe there was a magical time when the ancients knew it all). Insofar as human understanding (individual, collective, and historical) is a phenomenon marked by change, incremental as well as more dramatic (perhaps someone has modelled this as an instance of self-organised criticality, a la Bak & Sneppen's model of evolution or the original Bak-Tang-Wiesenfeld sandpile?), is it not reasonable to expect attempts to capture aspects of human intelligence in machines to share this characteristic? In other words, ChatGPT's "understanding" may be rudimentary rather than nonexistent.
Looking at the counterexamples, I am struck by how we could do the same with humans on a range of topics/issues and thereby demonstrate, or claim, understanding or the lack thereof.
Our (mis)understandings define our brief lives. 
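(For anyone curious what the Bak & Sneppen model mentioned above looks like in practice, here is a minimal sketch. The parameter values and function name are my own arbitrary choices, not taken from any particular paper: a ring of species fitnesses where, at each step, the least-fit species and its two neighbours are replaced with fresh random values.)

```python
import random

def bak_sneppen(n_species=64, n_steps=10_000, seed=0):
    """Minimal Bak-Sneppen evolution model on a ring.

    At each step, find the species with minimum fitness and replace
    it and its two nearest neighbours with new uniform random values.
    """
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_species)]
    minima = []  # record the minimum fitness selected at each step
    for _ in range(n_steps):
        i = min(range(n_species), key=fitness.__getitem__)
        minima.append(fitness[i])
        # Replace the weakest species and its neighbours (ring topology;
        # a Python index of -1 wraps around automatically).
        for j in (i - 1, i, (i + 1) % n_species):
            fitness[j] = rng.random()
    return fitness, minima

fitness, minima = bak_sneppen()
```

After a transient, the selected minima cluster below a critical threshold (around 0.667 for the 1D nearest-neighbour model), with avalanches of activity at all scales, which is the self-organised-criticality signature.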

2. Unless one embraces some sort of irreducibility argument, I do not see why what humans can do cannot be captured by an artificial learning system.

3. Would it help to speak of "understanding" as not just having useful internal representations, but also a capacity for "representational parsimony"? This is, of course, intimately connected to the generation of "insights" and to getting at the causal structure of the world.
4. Given 1-3 above, how do we a) define understanding (yeah, very original, I know!), and b) diagnose it and disambiguate it from behaviours that merely resemble it?

Live Long and Prosper

P.S.: Regardless of what you make of my understanding or lack thereof, the contents of this email were generated by a human (moi) typing on a keyboard that is slightly worse for wear :)

Anand Ramamoorthy
 

    On Saturday, 18 March 2023 at 17:17:37 GMT, Kagan Tumer <kagan.tumer at oregonstate.edu> wrote:  
 
 
I'm very reluctant to use the word "understand" beyond perhaps ChatGPT 
understanding the structure of language (statistically). Here's an 
example of a ChatGPT response to a simple arithmetic operation where:

1- ChatGPT was wrong;
2- it worked out an example that showed it was wrong, but it did not 
register that and instead doubled down on its wrong conclusion;
3- it gave a high-level explanation (also wrong) of why it was right even 
though it was wrong.

You can forgive 1, but 2 and 3 clearly show ChatGPT does not actually 
understand what it is saying.

Kagan



On 3/14/23 9:54 AM, Natsuki Oka wrote:
> [This email originated from outside of OSU. Use caution with links and 
> attachments.]
> 
> Judging from the responses below, ChatGPT understands counting and 
> sorting well enough to write a correct Python program, but it does not 
> have the understanding needed to produce the proper execution results itself.
> [Attached image: count_and_sort.png]
> Here's the program that ChatGPT created:
> ---
> sentences = [
>      "The quick brown fox jumps over the lazy dog",
>      "Python is a popular programming language",
>      "I like to eat pizza for dinner",
>      "The capital of France is Paris",
>      "The cat in the hat wears a red and white striped hat",
>      "My favorite color is blue",
>      "The United States has fifty states",
>      "There are seven days in a week",
>      "I enjoy listening to music while I work",
>      "Mount Everest is the highest mountain in the world"
> ]
> 
> # sort the sentences by number of words
> sentences.sort(key=lambda x: len(x.split()))
> 
> # print the sorted sentences with the number of words in parentheses
> for sentence in sentences:
>      num_words = len(sentence.split())
>      print(f"{sentence} ({num_words})")
> ---
> 
> The execution of this program yields the following correct results:
> ---
> My favorite color is blue (5)
> Python is a popular programming language (6)
> The capital of France is Paris (6)
> The United States has fifty states (6)
> I like to eat pizza for dinner (7)
> There are seven days in a week (7)
> I enjoy listening to music while I work (8)
> The quick brown fox jumps over the lazy dog (9)
> Mount Everest is the highest mountain in the world (9)
> The cat in the hat wears a red and white striped hat (12)
> ---
> 
> Oka Natsuki
> Miyazaki Sangyo-keiei University
> 
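
A side note on the output quoted above: among the three six-word sentences, the ties appear in their original list order. That is not an accident; Python's `list.sort` and `sorted` are guaranteed stable, so items with equal keys keep their relative order. A minimal sketch (using three of the sentences from the quoted program):

```python
sentences = [
    "Python is a popular programming language",
    "The capital of France is Paris",
    "The United States has fifty states",
]
# All three sentences have exactly 6 words, so their sort keys are
# equal; a stable sort must preserve their original relative order.
ordered = sorted(sentences, key=lambda s: len(s.split()))
print(ordered == sentences)  # True: Python's sort is guaranteed stable
```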


-- 
Kagan Tumer
Director, Collaborative Robotics and Intelligent Systems Institute
Professor, School of MIME
Oregon State University
http://engr.oregonstate.edu/~ktumer
https://kagantumer.com
  

