Connectionists: Can LLMs think?
Thomas Miconi
thomas.miconi at gmail.com
Tue Mar 21 14:38:41 EDT 2023
Regarding LLMs, there's an interesting result which may not have attracted
sufficient notice.
LLMs out of the box are notoriously bad at general arithmetic (unless
equipped with external tools). However, they can *learn* to perform true
arithmetic, simply by having it explained to them carefully, in a way that
generalizes to arbitrary-length numbers.
https://arxiv.org/abs/2211.09066
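
For concreteness, here is a minimal, purely illustrative Python sketch of
the kind of digit-by-digit procedure, with explicit carries, that such an
"algorithmic" prompt spells out in worked examples. This is not the paper's
actual prompt (those are more elaborate); it is only meant to show the idea:

# Illustrative only: prints the sort of digit-by-digit worked example,
# with explicit carries, that an algorithmic-style prompt walks through.
def addition_trace(a: int, b: int) -> str:
    xs, ys = str(a)[::-1], str(b)[::-1]   # least-significant digit first
    carry, steps, out_digits = 0, [], []
    for i in range(max(len(xs), len(ys))):
        da = int(xs[i]) if i < len(xs) else 0
        db = int(ys[i]) if i < len(ys) else 0
        s = da + db + carry
        steps.append(f"digit {i}: {da} + {db} + carry {carry} = {s}; "
                     f"write {s % 10}, carry {s // 10}")
        out_digits.append(str(s % 10))
        carry = s // 10
    if carry:
        steps.append(f"leftover carry: write {carry}")
        out_digits.append(str(carry))
    answer = "".join(reversed(out_digits))
    return "\n".join(steps) + f"\nanswer: {answer}"

print(addition_trace(4387, 9618))   # works for numbers of any length

The point reported in the paper is that once the procedure is spelled out
at this level of detail in the context, the model can apply it to numbers
considerably longer than those in the worked examples.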
Clearly, ELIZA or n-gram models can't do that. JPEGs can't do that either.
If this result is confirmed, it suggests that LLMs don't simply perform
"pattern-matching" over learned patterns. Rather, they have *some* ability
to extract new, true patterns from their inputs, and apply them correctly
to novel inputs.
I believe that's as good a definition of "intelligence" as any, so I'm
willing to accept that LLMs have *some* intelligence.
One possible source of disagreement is the great mismatch between their
limited "intelligence" and their remarkable verbal fluency: they can
produce amazing prose, but have difficulty with fine-grained grounding of
novel concepts ("they don't know what they're talking about", as soon as
the "about" crosses a low threshold of novelty-complexity product). We are
not used to dealing with such an outcome, which may make it difficult to
categorize these systems.
Thomas Miconi