Connectionists: Chomsky's apple

Iam Palatnik iam.palat at gmail.com
Fri Mar 17 10:47:19 EDT 2023


Thank you for that pointer, Miguel! That sheds some new light on the
Cremonini vs Galileo friendship/rivalry.

Re: understanding, I think it would be interesting if there were a clear
measurement we could perform, and all agree on, that determines whether
something or someone has 'understanding'. I figure it might be impossible
to agree on this, and it's probably an ancient unsolved issue, but I feel
like having some kind of measurement would help. I'll give a silly example
I've just tried with ChatGPT and GPT-4.

I tried speaking in English, but with increasingly convoluted, codified
writing, using the following sentences:
Sy tht m wr t spk lk ths thn cu1d u st1l und3rs7d m?
ww ディスイズヴェリ1p3551v bt c4n s711 ud57d?
ו‎あt あבּ‎うt дיㅅ ?
ChatGPT and GPT-4 got through the first sentence easily.
ChatGPT had trouble with sentence 2 but got it with additional prompting
from me; GPT-4 got it easily.
ChatGPT could not get sentence 3 even after extensive additional prompting;
GPT-4 got it after additional prompting.
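For concreteness, the kind of progressive obfuscation I mean can be sketched in a few lines of Python. The substitution tables below are illustrative stand-ins, not the exact scheme I used in the sentences above:

```python
# Sketch of a progressive-obfuscation "understanding" probe.
# The substitution tables are illustrative, not the exact ones used above.

LEVEL1 = str.maketrans("", "", "aeiou")  # drop lowercase vowels: "speak" -> "spk"
LEVEL2 = str.maketrans({"e": "3", "i": "1", "s": "5", "t": "7"})  # leetspeak layer

def obfuscate(text: str, level: int) -> str:
    """Return `text` garbled at increasing difficulty levels."""
    if level >= 1:
        text = text.translate(LEVEL1)
    if level >= 2:
        text = text.translate(LEVEL2)
    return text

print(obfuscate("Say that I were to speak like this", 1))
print(obfuscate("Say that I were to speak like this", 2))
```

One could then score a model by the highest obfuscation level at which it still recovers the intended sentence.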

Example of what I mean about additional prompting on GPT-4 with the 3rd
sentence:
[image: image.png]
GPT-4 self-corrected taking into account the context of the conversation
(it noticed it was being tested on deciphering English in various scripts)
and got exactly what I was going for.
I'm resisting the temptation to say it understood something, but I really
don't see a better word to describe what GPT-4 did here, which it clearly
did better than ChatGPT.
Is there a better word than 'understanding' to use for this?

Cheers,

Iam

On Mon, Mar 13, 2023 at 8:51 PM Miguel I. Solano <miguel at vmindai.com> wrote:

> Iam, Connectionists,
>
> Not an expert by any means but, as an aside, I understand
> Cremonini's 'refusal' seems to have been subtler than typically portrayed
> (see P. Gualdo to Galileo, July 29, 1611, *Opere*, II, 564).
>
> Best,
> --ms
>
> On Mon, Mar 13, 2023 at 5:49 PM Iam Palatnik <iam.palat at gmail.com> wrote:
>
>> Dear Brad, thank you for your insightful answers.
>> The compression analogy is really nice, although the 'Fermi-style'
>> problem of estimating whether all of the possible questions and answers one
>> could ask ChatGPT in all sorts of languages could be encoded within 175
>> billion parameters is definitely above my immediate intuition. It'd be
>> interesting to try to estimate which of these quantities is larger. Maybe
>> that could explain why ~175B seems to be the threshold that made models
>> start sounding so much more natural.
>>
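To make that Fermi estimate a bit more concrete, here is a rough back-of-envelope sketch. Every number in it is an assumed, illustrative figure (fp16 storage, a hypothetical trillion-token corpus, ~1 bit/token of irreducible content), not a measurement:

```python
# Back-of-envelope: bits storable in 175B parameters vs. bits of content
# in a large Q&A corpus. All numbers are rough, assumed figures.

params = 175e9
bits_per_param = 16  # assuming fp16 storage
model_capacity_bits = params * bits_per_param

# Hypothetical corpus: 1 trillion tokens at ~1 bit/token of irreducible
# information after compression (an assumed, not measured, figure).
corpus_tokens = 1e12
bits_per_token = 1.0
corpus_bits = corpus_tokens * bits_per_token

print(f"model capacity ~{model_capacity_bits:.2e} bits")
print(f"corpus content ~{corpus_bits:.2e} bits")
print(f"ratio capacity/content ~{model_capacity_bits / corpus_bits:.2f}")
```

Under these (entirely assumed) numbers the two quantities land within an order of magnitude of each other, which is at least consistent with the compression intuition.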
>> In regards to generating nonsense, I'm imagining an uncooperative human
>> (say, a fussy child), that refuses to answer homework questions, or just
>> replies with nonsense on purpose despite understanding the question. Maybe
>> that child could be convinced to reply correctly with different prompting,
>> rewards, etc., which kinda mirrors what it takes to transform a raw LLM
>> like GPT-3 into something like ChatGPT. It's possible we're still in the
>> early stages of learning how to make LLMs 'cooperate' with us. Maybe we're
>> not asking them questions in a favorable way to extract their
>> understanding, or there's still work to be done regarding decoding
>> strategies. Even ChatGPT probably sounds way less impressive if we start
>> tinkering too much with hyperparameters like temperature/top-p/top-k. Does
>> that mean it 'understands' less when we change those parameters? I agree a
>> lot of the problem stems from the word 'understanding' and how we use it in
>> various contexts.
>>
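As a sketch of what those decoding knobs actually do, here is a minimal toy implementation of temperature and top-k sampling over a list of logits. This is an illustration of the general technique, not how any particular model implements it:

```python
# Toy temperature / top-k sampling: low temperature or small top_k
# concentrates probability on the highest-logit tokens.
import math
import random

def sample(logits, temperature=1.0, top_k=None, rng=random):
    """Sample an index from `logits` after temperature scaling and top-k filtering."""
    scaled = [l / temperature for l in logits]
    # Keep only the top_k highest-scoring indices (all of them if top_k is None).
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)
    keep = order[:top_k] if top_k else order
    # Softmax over the kept indices (subtract max for numerical stability).
    zmax = max(scaled[i] for i in keep)
    weights = [math.exp(scaled[i] - zmax) for i in keep]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(keep, weights=probs, k=1)[0]
```

With `top_k=1` (or a very low temperature) this becomes effectively greedy decoding; raising the temperature flattens the distribution and lets lower-probability tokens through, which is exactly the kind of tinkering that changes how 'impressive' the output sounds.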
>> A side note, that story about Galileo and the telescope is one of my
>> favorites. The person that refused to look through it was Cremonini
>> <https://en.wikipedia.org/wiki/Cesare_Cremonini_(philosopher)>.
>>
>>
>> Cheers,
>>
>> Iam
>>
>> On Mon, Mar 13, 2023 at 10:54 AM Miguel I. Solano <miguel at vmindai.com>
>> wrote:
>>
>>> Geoff, Gary, Connectionists,
>>>
>>> To me the risk is ChatGPT and the like may be 'overfitting'
>>> understanding, as it were. (Especially at over a hundred billion
>>> parameters.)
>>>
>>> --ms
>>>
>>> On Mon, Mar 13, 2023 at 6:56 AM Barak A. Pearlmutter <
>>> barak at pearlmutter.net> wrote:
>>>
>>>> Geoff,
>>>>
>>>> > He asked [ChatGPT] how many legs the rear left side of a cat has.
>>>> > It said 4.
>>>>
>>>> > I asked a learning disabled young adult the same question. He used
>>>> the index finger and thumb of both hands pointing downwards to represent
>>>> the legs on the two sides of the cat and said 4.
>>>> > He has problems understanding some sentences, but he gets by quite
>>>> well in the world and people are often surprised to learn that he has a
>>>> disability.
>>>>
>>>> That's an extremely good point. ChatGPT is way up the curve, well
>>>> above the verbal competence of many people who function perfectly well
>>>> in society. It's an amazing achievement, and it's not like progress is
>>>> stuck at its level. Exploring its weaknesses is not so much showing
>>>> failures but opportunities. Similarly, the fact that we can verbally
>>>> "bully" ChatGPT, saying things like "the square root of three is
>>>> rational, my wife said so and she is always right", and it will go
>>>> along with that, does not imply anything deep about whether it really
>>>> "knows" that sqrt(3) is irrational. People too exhibit all sorts of
>>>> counterfactual behaviours. My daughter can easily get me to play along
>>>> with her plan to become a supervillain. Students knowingly write
>>>> invalid proofs on homeworks and exams in order to try to get a better
>>>> grade. If anything, maybe we should be a bit scared that ChatGPT seems
>>>> so willing to humour us.
>>>>
>>>
>>>
>>> --
>>> Miguel I. Solano
>>> Co-founder & CEO, VMind Technologies, Inc.
>>>
>>> If you are not an intended recipient of this email, do not read, copy,
>>> use, forward or disclose the email or any of its attachments to others. Instead,
>>> please inform the sender and then delete it. Thank you.
>>>
>>
>
> --
> Miguel I. Solano
> Co-founder & CEO, VMind Technologies, Inc.
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 110323 bytes
Desc: not available
URL: <http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20230317/facb43a6/attachment.png>

