Connectionists: Can LLMs think?
Geoffrey Hinton
geoffrey.hinton at gmail.com
Mon Mar 20 13:58:49 EDT 2023
LLMs do not do pattern matching in the sense that most people understand
it. They use the data to create huge numbers of features and
interactions between features such that these interactions can predict the
next word.
The first neural net language model (so far as I know) made bets about the
third term of a triple using word embedding vectors with 6 components.
Retrospectively, the components of these vectors could be interpreted as
sensible features for capturing the structure of the domain (which consisted
of very conventional family relationships). For example, there was a
three-valued feature for a person's generation, and the interactions between
features ensured that the triple (Victoria, has-father, ?) took the
generation of Victoria and produced an answer of a higher generation,
because the net understood that the relationship has-father requires this.
Of course, in complicated domains there will be huge numbers of regularities
that make conflicting predictions for the next word, but the consensus can
still be fairly reliable. I believe that factoring the discrete symbolic
information into a very large number of features and interactions IS
intuitive understanding, and that this is true for both brains and LLMs,
even though they may use different learning algorithms for arriving at these
factorizations. I am dismayed that so many people fall prey to the
well-known human disposition to think that there is something special about
people.
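
To make the idea concrete, here is a rough sketch in Python (illustrative
only, not the original 1986 network: the toy names, the single softmax
interaction layer, and the training details are all assumptions) of how
6-component embeddings and learned interactions can be trained to bet on the
third term of a triple:

# Toy sketch: learn 6-component embeddings for people and relations, plus an
# interaction layer that predicts the third term of (person, relation, ?).
import numpy as np

rng = np.random.default_rng(0)

people = ["victoria", "james", "charlotte", "arthur"]
relations = ["has-father", "has-mother", "has-child"]
# Made-up ground-truth triples: (person, relation, answer).
triples = [
    ("victoria", "has-father", "james"),
    ("victoria", "has-mother", "charlotte"),
    ("arthur",   "has-father", "james"),
    ("james",    "has-child",  "victoria"),
]

p_idx = {p: i for i, p in enumerate(people)}
r_idx = {r: i for i, r in enumerate(relations)}

D = 6                                                  # embedding size
E_p = 0.1 * rng.standard_normal((len(people), D))      # person embeddings
E_r = 0.1 * rng.standard_normal((len(relations), D))   # relation embeddings
W = 0.1 * rng.standard_normal((len(people), 2 * D))    # interaction weights
b = np.zeros(len(people))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.2
for epoch in range(1000):
    for person, relation, answer in triples:
        pi, ri, ai = p_idx[person], r_idx[relation], p_idx[answer]
        x = np.concatenate([E_p[pi], E_r[ri]])   # features of the two given terms
        probs = softmax(W @ x + b)               # bet on the third term
        d_logits = probs.copy()                  # cross-entropy gradient:
        d_logits[ai] -= 1.0                      # probs minus one-hot target
        d_x = W.T @ d_logits                     # gradient w.r.t. the embeddings
        W -= lr * np.outer(d_logits, x)
        b -= lr * d_logits
        E_p[pi] -= lr * d_x[:D]                  # the embeddings are learned too
        E_r[ri] -= lr * d_x[D:]

# The net's bet for (Victoria, has-father, ?):
x = np.concatenate([E_p[p_idx["victoria"]], E_r[r_idx["has-father"]]])
print(people[int(np.argmax(softmax(W @ x + b)))])   # expected: james

After enough passes over the toy triples, the embeddings and interaction
weights jointly come to encode regularities of the little domain, and the net
bets on james for (Victoria, has-father, ?).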
Geoff
On Mon, Mar 20, 2023 at 3:53 AM Paul Cisek <paul.cisek at umontreal.ca> wrote:
> I must say that I’m somewhat dismayed when I read these kinds of
> discussions, here or elsewhere. Sure, it’s understandable that many people
> are fooled into thinking that LLMs are intelligent, just like many people
> were fooled by Eliza and Eugene Goostman. Humans are predisposed to
> ascribe intention and purpose to events in the world, which helped them
> construct complex societies by (often correctly) interpreting the actions
> of other people around them. But this same predisposition also led them to
> believe that the volcano was angry when it erupted because they did
> something to offend the gods. Given how susceptible humans are to this
> false ascription of agency, it is not surprising that they get fooled when
> something acts in a complex way.
>
>
>
> But (most of) the people on this list know what’s under the hood! We know
> that LLMs are very good at pattern matching and completion, we know about
> the universal approximation theorem, we know that there is a lot of
> structure in the pattern of human-written text, and we know that humans are
> predisposed to ascribe meaning and intention even where there are none. We
> should therefore not be surprised that LLMs can produce text patterns that
> generalize well within-distribution but not so well out-of-distribution,
> and that when the former happens, people may be fooled into thinking they
> are speaking with a thinking being. Again, they were fooled by Eliza, and
> Eugene Goostman, and the Heider-Simmel illusion (ascribing emotion to
> animated triangles and circles)… and the rumblings of volcanoes. But we know
> how LLMs and volcanoes do what they do, and can explain their behavior
> without any additional assumptions (of thinking, or sentience, or
> whatever). So why add them?
>
>
>
> In a sense, we are like a bunch of professional magicians, who know where
> all of the little strings and hidden compartments are, and who know how we
> just redirected the audience’s attention to slip the card into our pocket…
> but then we are standing around backstage wondering: “Maybe there really is
> magic?”
>
>
>
> I think it’s not that machines have passed the Turing Test, but rather
> that we failed it.
>
>
>
> Paul Cisek
>
>
>
>
>
> *From:* Rothganger, Fredrick <frothga at sandia.gov>
> *Sent:* Thursday, March 16, 2023 11:39 AM
> *To:* connectionists at mailman.srv.cs.cmu.edu
> *Subject:* Connectionists: Can LLMs think?
>
>
>
> Noting the examples that have come up on this list over the last week,
> it's interesting that it takes some of the most brilliant AI researchers in
> the world to devise questions that break LLMs. Chatbots have always been
> able to fool some people some of the time, ever since ELIZA. But we now
> have systems that can fool a lot of people a lot of the time, and even the
> occasional expert who loses their perspective and comes to believe the
> system is sentient. LLMs have either already passed the classic Turing
> test, or are about to in the next generation.
>
>
>
> What does that mean exactly? Turing's expectation was that "the use of
> words and general educated opinion will have altered so much that one will
> be able to speak of machines thinking without expecting to be
> contradicted". The ongoing discussion here is an indication that we are
> approaching that threshold. For the average person, we've probably already
> passed it.
>
>
>