Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com

Dave Touretzky dst at cs.cmu.edu
Sun Jun 12 23:53:06 EDT 2022


The timing of this discussion dovetails nicely with the news story
about Google engineer Blake Lemoine being put on administrative leave
for insisting that Google's LaMDA chatbot was sentient and reportedly
trying to hire a lawyer to protect its rights.  The Washington Post
story is reproduced here:

  https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's
claims, is featured in a recent Economist article showing off LaMDA's
capabilities and making noises about getting closer to "consciousness":

  https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas

My personal take on the current symbolist controversy is that symbolic
representations are a fiction our non-symbolic brains cooked up because
the properties of symbol systems (systematicity, compositionality, etc.)
are tremendously useful.  So our brains pretend to be rule-based symbolic
systems when it suits them, because it's adaptive to do so.  (And when
it doesn't suit them, they draw on "intuition" or "imagery" or some
other mechanisms we can't verbalize because they're not symbolic.)  They
are remarkably good at this pretense.

The current crop of deep neural networks is not as good at pretending to
be symbolic reasoners as our brains are, but it's making progress.  In the
last 30
years we've gone from networks of fully-connected layers that make no
architectural assumptions ("connectoplasm") to complex architectures
like LSTMs and transformers that are designed for approximating symbolic
behavior.  But the brain still has a lot of symbol simulation tricks we
haven't discovered yet.
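
To make the contrast concrete, here is a toy sketch in PyTorch (an
illustration only, not anyone's actual model): a "connectoplasm" block
bakes in no structural assumptions at all, while a transformer encoder
layer builds in weight sharing across positions plus content-based
attention, machinery that makes rule-like, compositional behavior
easier to approximate.

  import torch
  import torch.nn as nn

  seq_len, d_model = 16, 64

  # Connectoplasm: every input unit wired to every output unit, with no
  # notion of position, order, or reusable roles.
  connectoplasm = nn.Sequential(
      nn.Flatten(),                            # (batch, 16*64)
      nn.Linear(seq_len * d_model, 256),
      nn.ReLU(),
      nn.Linear(256, seq_len * d_model),
  )

  # Transformer encoder layer: the same weights are applied at every
  # position, and attention routes information by content (loosely,
  # binding fillers to roles).
  transformer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)

  x = torch.randn(1, seq_len, d_model)
  print(connectoplasm(x).shape)                # torch.Size([1, 1024])
  print(transformer(x).shape)                  # torch.Size([1, 16, 64])

Neither block is symbolic, of course; the point is only that the second
one builds in more of the structure that symbol-like behavior seems to
require.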

Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA
being conscious.  If it just waits for its next input and responds when
it receives it, then it has no autonomous existence: "it doesn't have an
inner monologue that constantly runs and comments everything happening
around it as well as its own thoughts, like we do."

What would happen if we built that in?  Maybe LaMDA would rapidly
descend into gibberish, like some other text generation models do when
allowed to ramble on for too long.  But as Steve Hanson points out,
these are still the early days.
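
One crude way to try it (a sketch only, using GPT-2 through the Hugging
Face pipeline as a stand-in, since LaMDA isn't available to outsiders)
is to run the model in a loop, feeding its own output back in as
context and occasionally injecting an outside "observation" for it to
comment on:

  from transformers import pipeline

  generator = pipeline("text-generation", model="gpt2")

  monologue = "I am sitting here thinking about what is going on around me."
  observations = ["Someone just walked past the door.",
                  "The room has gone quiet again."]

  for step in range(6):
      if step % 3 == 0 and observations:
          # Inject an outside event for the monologue to react to.
          monologue += " " + observations.pop(0)
      prompt = monologue[-1500:]            # crude rolling context window
      result = generator(prompt, max_new_tokens=40, do_sample=True,
                         pad_token_id=50256)[0]["generated_text"]
      continuation = result[len(prompt):]   # keep only the newly generated text
      monologue += continuation
      print(f"--- step {step} ---\n{continuation.strip()}")

Left to run like this, with nothing outside the loop to anchor it, one
would expect the monologue to drift off topic within a handful of
iterations, which is exactly the degeneration worry above.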

-- Dave Touretzky

