Connectionists: LaMDA, Lemoine and Sentience

Geoffrey Hinton geoffrey.hinton at gmail.com
Mon Jun 13 12:43:58 EDT 2022


Your last message refers us to a site where you say:

"Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent.1
<https://garymarcus.substack.com/p/nonsense-on-stilts?s=w#footnote-1> All
they do is match patterns, draw from massive statistical databases of human
language. The patterns might be cool, but language these systems utter
doesn’t actually mean anything at all."

It's that kind of confident dismissal of work that most researchers think
is very impressive that irritates people like me.

Back in the time of Winograd's thesis, the AI community accepted that a
computer understood sentences like "put the red block in the blue box" if
it actually followed that instruction in a simulated world.
So if I tell a robot to open the drawer and take out a pen and it does just
that, would you accept that it understood the language and was just a
little bit intelligent?  And if the action consists of drawing an
appropriate picture would that also indicate it understood the language?
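
To make that criterion concrete, here is a minimal Python sketch (purely
illustrative; the toy world, its commands, and the crude string matching are
assumptions of mine, not Winograd's SHRDLU or any real robot): a sentence
counts as understood only if executing it produces the intended change in a
simulated world.

class ToyWorld:
    """A tiny simulated world: a drawer that can be opened and a pen inside it."""
    def __init__(self):
        self.state = {"drawer": "closed", "pen": "in_drawer"}

    def execute(self, command: str) -> None:
        # Crude keyword matching stands in for whatever system maps language to action.
        if "open the drawer" in command:
            self.state["drawer"] = "open"
        if "take out a pen" in command and self.state["drawer"] == "open":
            self.state["pen"] = "held"

world = ToyWorld()
world.execute("open the drawer and take out a pen")

# The behavioural test of understanding: was the instruction actually carried out?
assert world.state == {"drawer": "open", "pen": "held"}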

Where does your extreme confidence that the language does not mean anything
at all come from?   Your arguments typically consist of finding cases where
a neural net screws up and then saying "See, it doesn't really understand
language".  When a learning disabled child fails to understand a sentence,
do you conclude that they do not understand language at all even though
they can correctly answer quite a few questions and laugh appropriately at
quite a few jokes?

Geoff

On Mon, Jun 13, 2022 at 9:16 AM Gary Marcus <gary.marcus at nyu.edu> wrote:

> My opinion (which for once is not really all that controversial in the AI
> community): this is nonsense on stilts, as discussed here:
> https://garymarcus.substack.com/p/nonsense-on-stilts
>
> On Jun 12, 2022, at 22:37, Dave Touretzky <dst at cs.cmu.edu> wrote:
>
> The timing of this discussion dovetails nicely with the news story
> about Google engineer Blake Lemoine being put on administrative leave
> for insisting that Google's LaMDA chatbot was sentient and reportedly
> trying to hire a lawyer to protect its rights.  The Washington Post
> story is reproduced here:
>
>
> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>
> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's
> claims, is featured in a recent Economist article showing off LaMDA's
> capabilities and making noises about getting closer to "consciousness":
>
>
> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>
> My personal take on the current symbolist controversy is that symbolic
> representations are a fiction our non-symbolic brains cooked up because
> the properties of symbol systems (systematicity, compositionality, etc.)
> are tremendously useful.  So our brains pretend to be rule-based symbolic
> systems when it suits them, because it's adaptive to do so.  (And when
> it doesn't suit them, they draw on "intuition" or "imagery" or some
> other mechanisms we can't verbalize because they're not symbolic.)  They
> are remarkably good at this pretense.
>
> The current crop of deep neural networks are not as good at pretending
> to be symbolic reasoners, but they're making progress.  In the last 30
> years we've gone from networks of fully-connected layers that make no
> architectural assumptions ("connectoplasm") to complex architectures
> like LSTMs and transformers that are designed for approximating symbolic
> behavior.  But the brain still has a lot of symbol simulation tricks we
> haven't discovered yet.
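>
> To make that contrast concrete, a minimal PyTorch sketch (illustrative only;
> the layer sizes are arbitrary assumptions): a fully connected layer treats
> each position independently, while an LSTM or a transformer encoder layer
> builds in machinery for relating tokens across a sequence.
>
> import torch
> import torch.nn as nn
>
> # "Connectoplasm": a fully connected layer with no architectural assumptions.
> dense = nn.Linear(512, 512)
>
> # Architectures built to approximate sequential / symbolic behaviour.
> lstm = nn.LSTM(input_size=512, hidden_size=512, batch_first=True)
> attn = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
>
> x = torch.randn(1, 16, 512)   # a batch of one 16-token sequence
> y_dense = dense(x)            # applied position-wise; tokens never interact
> y_lstm, _ = lstm(x)           # recurrent state carries context along the sequence
> y_attn = attn(x)              # self-attention relates every token to every other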
>
> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA
> being conscious.  If it just waits for its next input and responds when
> it receives it, then it has no autonomous existence: "it doesn't have an
> inner monologue that constantly runs and comments everything happening
> around it as well as its own thoughts, like we do."
>
> What would happen if we built that in?  Maybe LaMDA would rapidly
> descend into gibberish, like some other text generation models do when
> allowed to ramble on for too long.  But as Steve Hanson points out,
> these are still the early days.
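>
> As a purely hypothetical sketch of what "building that in" might look like
> (generate() stands in for any text-generation model; this is not LaMDA's
> actual interface), one could feed the model's own output back to itself so
> it keeps "thinking" between user inputs:
>
> import time
>
> def generate(prompt: str) -> str:
>     # Stand-in for any text-generation model (hypothetical; not LaMDA's API).
>     # A real system would return the model's continuation of the prompt.
>     return prompt.splitlines()[-1] + " ..."
>
> def inner_monologue(seed: str, steps: int = 5) -> None:
>     # Instead of waiting for external input, feed the model's own output back
>     # to it -- a crude stand-in for a constantly running inner monologue.
>     thought = seed
>     for _ in range(steps):
>         thought = generate("Continue your train of thought:\n" + thought)
>         print(thought)
>         time.sleep(0.1)  # pacing only; the point is autonomy, not speed
>
> inner_monologue("I am waiting for someone to talk to me.")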
>
> -- Dave Touretzky
>
>

