Connectionists: LaMDA, Lemoine and Sentience

Gary Marcus gary.marcus at nyu.edu
Mon Jun 13 08:38:06 EDT 2022


My opinion (which for once is not really all that controversial in the AI community): this is nonsense on stilts, as discussed here: https://garymarcus.substack.com/p/nonsense-on-stilts

> On Jun 12, 2022, at 22:37, Dave Touretzky <dst at cs.cmu.edu> wrote:
> 
> The timing of this discussion dovetails nicely with the news story
> about Google engineer Blake Lemoine being put on administrative leave
> for insisting that Google's LaMDA chatbot was sentient and reportedly
> trying to hire a lawyer to protect its rights.  The Washington Post
> story is reproduced here:
> 
>  https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
> 
> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's
> claims, is featured in a recent Economist article showing off LaMDA's
> capabilities and making noises about getting closer to "consciousness":
> 
>  https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
> 
> My personal take on the current symbolist controversy is that symbolic
> representations are a fiction our non-symbolic brains cooked up because
> the properties of symbol systems (systematicity, compositionality, etc.)
> are tremendously useful.  So our brains pretend to be rule-based symbolic
> systems when it suits them, because it's adaptive to do so.  (And when
> it doesn't suit them, they draw on "intuition" or "imagery" or some
> other mechanisms we can't verbalize because they're not symbolic.)  They
> are remarkably good at this pretense.
> 
> The current crop of deep neural networks are not as good at pretending
> to be symbolic reasoners, but they're making progress.  In the last 30
> years we've gone from networks of fully-connected layers that make no
> architectural assumptions ("connectoplasm") to complex architectures
> like LSTMs and transformers that are designed for approximating symbolic
> behavior.  But the brain still has a lot of symbol simulation tricks we
> haven't discovered yet.
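[For concreteness: the transformer mechanism Dave alludes to, dot-product attention, is essentially a differentiable key-value lookup, which is part of why it can approximate symbolic binding. A minimal sketch in plain Python, using only the standard attention formula, nothing specific to LaMDA:]

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    """Single-query scaled dot-product attention: a soft, differentiable
    key-value lookup, loosely analogous to symbolic variable binding."""
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(len(query))
              for key in keys]
    weights = softmax(scores)
    # Output is the weight-averaged value vector.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A query closely matching the second key retrieves (approximately)
# the second value -- a soft table lookup.
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([0.0, 5.0], keys, values)
```

[The point of the toy: the lookup is soft, so "binding" is only ever approximate -- consistent with the claim that these networks pretend at symbolic behavior rather than implementing it.]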
> 
> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA
> being conscious.  If it just waits for its next input and responds when
> it receives it, then it has no autonomous existence: "it doesn't have an
> inner monologue that constantly runs and comments everything happening
> around it as well as its own thoughts, like we do."
> 
> What would happen if we built that in?  Maybe LaMDA would rapidly
> descend into gibberish, like some other text generation models do when
> allowed to ramble on for too long.  But as Steve Hanson points out,
> these are still the early days.
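[The simplest version of "building that in" is a loop that feeds the model's own output back as its next input under a bounded context window. A toy sketch -- the `toy_generate` stub below is hypothetical, standing in for whatever language model you'd actually call:]

```python
from collections import deque

def toy_generate(context):
    """Stand-in for a language model: produces a 'comment' on the last
    utterance in the context. (Hypothetical; a real system would sample
    from an LM here.)"""
    last = context[-1] if context else "<start>"
    return f"thought-about-{last.split('-')[-1]}"

def inner_monologue(seed, steps=5, window=3):
    """Run the model autonomously: each output is appended to a bounded
    context and fed back in, so the system 'comments on its own thoughts'
    without waiting for external input."""
    context = deque([seed], maxlen=window)
    transcript = [seed]
    for _ in range(steps):
        out = toy_generate(list(context))
        context.append(out)
        transcript.append(out)
    return transcript

transcript = inner_monologue("observation-door-opened")
```

[Even this toy loop immediately collapses into repeating itself once its own outputs dominate the context -- a crude version of the degeneration worry above, and of why a closed loop alone wouldn't settle the question.]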
> 
> -- Dave Touretzky

