<div class="moz-cite-prefix"><br>
You've misunderstood the scenario.<br>
<br>
Those "idiots" in the room don't understand the answers they are
shouting out.<br>
<br>
Each one is just reading a fragment from a data ocean.<br>
<br>
They have zero understanding.<br>
<br>
On 3/10/23 1:49 PM, Geoffrey Hinton wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAK8Nvqo9rEuv_hjM9Tp+j5MMqGtmwOHazqRSjcqrbaCiraU=Yg@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">So you think that the idiots in the room have no
understanding at all, even of simple things like there is an
arrogant man who does not think of them as people?
<div>Idiots understand less than the straight A graduate
students who we think of as normal, but they do have some
understanding. I don't think it will help any of us to get to
the truth of the matter if we think of understanding as
something we have a lot of and idiots and chatGPT have none
of. ChatGPT seems to me to be like an idiot savant whose
understanding is different from mine but not entirely absent.</div>
<div><br>
</div>
<div>Geoff</div>
<div><br>
</div>
</div>
>
> On Fri, Mar 10, 2023 at 11:35 AM Richard Loosemore
> <rloosemore@susaro.com> wrote:
>
>> On 3/9/23 6:24 PM, Gary Marcus wrote:
>>> If a broken clock were correct twice a day, would we give it credit
>>> for patches of understanding of time? If an n-gram model produced a
>>> sequence that was 80% grammatical, would we attribute to it an
>>> underlying understanding of grammar?
>>
>> Exactly!
>>
>> There isn't even an issue here. There shouldn't BE any discussion,
>> because we know what these things are doing.
>>
>> An LLM takes all the verbiage in the world, does an average and finds
>> all the question-answer linkages, and then in response to a prompt it
>> goes on a walk through that mess and cleans up the grammar.
>>
>> Oversimplified, but that's basically it. You get back a
>> smushed-together, answerish blob of text, like you would if you asked
>> a question in a roomful of idiots and tried to make sense of what
>> happens when they all try to answer at once.
>>
>> Among people who understand that, there shouldn't even BE a question
>> like "how much does it understand?"
>>
>> Richard Loosemore
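
[A concrete footnote on the n-gram model Marcus mentions above: the
sketch below is a minimal bigram sampler in Python, with an invented toy
corpus, added purely for illustration. It emits locally plausible,
mostly grammatical word sequences from nothing but adjacency counts; no
representation of grammar exists anywhere in the model.]

import random
from collections import defaultdict

# Toy bigram model: count which word follows which in a tiny corpus,
# then sample each next word in proportion to those counts.
# Nothing here encodes grammar -- only adjacency statistics.
corpus = ("the clock on the wall is broken . "
          "the broken clock is correct twice a day . "
          "a stopped clock tells the correct time twice a day .").split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start="the", length=12):
    words = [start]
    for _ in range(length - 1):
        nexts = followers.get(words[-1])
        if not nexts:
            break
        # random.choice over the raw list weights by observed frequency
        words.append(random.choice(nexts))
    return " ".join(words)

print(generate())  # e.g. "the clock is correct twice a day . the broken clock"

Scaling the corpus and the context length up makes the output more
fluent, but the mechanism never changes; that is the force of Marcus's
question about whether 80%-grammatical output should count as
understanding grammar.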