<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix"><br>
Geoff,<br>
<br>
So, to clarify in more depth: if the "idiots in the room"
really were just low-knowledge humans trying to understand the
question, then they would constitute a collective of simpler
homunculi. In other words, we would be in a Society of Mind
situation.<br>
<br>
Now, if that were how ChatGPT worked, I'd be jumping up and down
with excitement, and I would switch sides instantly and claim that
YES, this system can be ascribed some intelligence.<br>
<br>
But that is very much not what is happening in my "roomful of
idiots" scenario. These LLMs do not merge active, sub-cognitive
homunculi into a society of mind. What the LLM does is
parasitic on the dead, recorded, intelligent utterances of the
world's humans.<br>
<br>
To be sure, I also feed parasitically on the dead, recorded,
intelligent utterances of a few of the world's humans, because I am
sitting here in the middle of my library, and I've read a lot of
those books. But unlike an LLM, I do not simply apply a glorified
averaging function to those books. I ingest and "understand"
those books.<br>
<br>
So those people in the roomful of idiots are just parroting back
the answers that were recorded in the giant database.<br>
<br>
And the sum total of all that parroting is not "understanding."<br>
<br>
Best<br>
<br>
Richard Loosemore<br>
<br>
On 3/10/23 1:49 PM, Geoffrey Hinton wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAK8Nvqo9rEuv_hjM9Tp+j5MMqGtmwOHazqRSjcqrbaCiraU=Yg@mail.gmail.com">
<div dir="ltr">So you think that the idiots in the room have no
understanding at all, even of simple things like the fact that there
is an arrogant man who does not think of them as people?
<div>Idiots understand less than the straight-A graduate
students whom we think of as normal, but they do have some
understanding. I don't think it will help any of us to get to
the truth of the matter if we think of understanding as
something we have a lot of and idiots and ChatGPT have none
of. ChatGPT seems to me to be like an idiot savant whose
understanding is different from mine but not entirely absent.</div>
<div><br>
</div>
<div>Geoff</div>
<div><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Fri, Mar 10, 2023 at
11:35 AM Richard Loosemore <<a
href="mailto:rloosemore@susaro.com" moz-do-not-send="true"
class="moz-txt-link-freetext">rloosemore@susaro.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On
3/9/23 6:24 PM, Gary Marcus wrote:<br>
> If a broken clock were correct twice a day, would we give it credit <br>
> for patches of understanding of time? If an n-gram model produced a <br>
> sequence that was 80% grammatical, would we attribute to it an <br>
> underlying understanding of grammar?<br>
<br>
Exactly!<br>
<br>
There isn't even an issue here. There shouldn't BE any
discussion, <br>
because we know what these things are doing.<br>
<br>
An LLM takes all the verbiage in the world, does an average, finds <br>
all the question-answer linkages, and then, in response to a prompt, <br>
it goes on a walk through that mess and cleans up the grammar.<br>
<br>
Oversimplified, but that's basically it. You get back a <br>
smushed-together answerish blob of text, like you would if you
asked a <br>
question in a roomful of idiots and tried to make sense of
what happens <br>
when they all try to answer at once.<br>
<br>
Among people who understand that, there shouldn't even BE a
question <br>
like "how much does it understand?".<br>
<br>
Richard Loosemore<br>
<br>
</blockquote>
</div>
</blockquote>
<br>
</body>
</html>