Connectionists: Roomful of idiots scenario

Richard Loosemore rloosemore at susaro.com
Fri Mar 10 17:31:59 EST 2023


Geoff,

So, to clarify in greater depth: if the "idiots in the room" really 
were just low-knowledge-level humans trying to understand the 
question, then they would constitute a collective of simpler 
homunculi.  In other words, we would be in a Society of Mind 
situation.

Now, if that were how ChatGPT worked, I'd be jumping up and down with 
excitement; I would switch sides instantly and claim that YES, this 
system can be ascribed some intelligence.

But that is very much not what is happening in my "roomful of idiots" 
scenario.  These LLMs do not assemble active, sub-cognitive homunculi 
into a Society of Mind.  What the LLM does is parasitic on the dead, 
recorded, intelligent utterances of the world's humans.

To be sure, I also feed parasitically on the dead, recorded, 
intelligent utterances of a few of the world's humans, because I am 
sitting here in the middle of my library and I've read a lot of those 
books.  But unlike an LLM, I do not simply apply a glorified averaging 
function to them.  I ingest and "understand" them.
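
(To make the caricature concrete, here is a minimal toy sketch of the 
kind of "glorified averaging function" I have in mind: a bigram 
sampler that records which word follows which, so its output can only 
be stitched together from transitions already present in its recorded 
corpus.  This is purely illustrative, and not a claim about how any 
real LLM is actually built.)

    import random
    from collections import defaultdict

    def train_bigrams(corpus_sentences):
        # Record, for every word, the words seen to follow it.
        table = defaultdict(list)
        for sentence in corpus_sentences:
            words = sentence.split()
            for prev, nxt in zip(words, words[1:]):
                table[prev].append(nxt)
        return table

    def generate(table, start, max_words=10):
        # "Walk through the mess": at each step pick one of the
        # recorded followers of the current word; nothing is produced
        # that was not already in the corpus.
        out = [start]
        for _ in range(max_words - 1):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = ["the cat sat on the mat", "the dog sat on the rug"]
    print(generate(train_bigrams(corpus), "the"))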

So those people in the roomful of idiots are just parroting back the 
answers that were recorded in the giant database.

And the sum total of all that parroting is not "understanding."

Best

Richard Loosemore

On 3/10/23 1:49 PM, Geoffrey Hinton wrote:
> So you think that the idiots in the room have no understanding at all, 
> even of simple things like there is an arrogant man who does not think 
> of them as people?
> Idiots understand less than the straight A graduate students who we 
> think of as normal, but they do have some understanding.  I don't 
> think it will help any of us to get to the truth of the matter if we 
> think of understanding as something we have a lot of and idiots and 
> chatGPT have none of.  ChatGPT seems to me to be like an idiot savant 
> whose understanding is different from mine but not entirely absent.
>
> Geoff
>
>
> On Fri, Mar 10, 2023 at 11:35 AM Richard Loosemore 
> <rloosemore at susaro.com> wrote:
>
>     On 3/9/23 6:24 PM, Gary Marcus wrote:
>     > If a broken clock were correct twice a day, would we give it
>     > credit for patches of understanding of time? If an n-gram model
>     > produced a sequence that was 80% grammatical, would we attribute
>     > it to an underlying understanding of grammar?
>
>     Exactly!
>
>     There isn't even an issue here.  There shouldn't BE any discussion,
>     because we know what these things are doing.
>
>     An LLM takes all the verbiage in the world, averages over it to
>     find the question-answer linkages, and then, in response to a
>     prompt, it goes on a walk through that mess and cleans up the
>     grammar.
>
>     Oversimplified, but that's basically it.  You get back a
>     smushed-together, answerish blob of text, like you would if you
>     asked a question in a roomful of idiots and tried to make sense of
>     what happens when they all try to answer at once.
>
>     Among people who understand that, there shouldn't even BE a question
>     like "how much does it understand?".
>
>     Richard Loosemore
>
>
>