My opinion (which for once is not really all that controversial in the AI community): this is nonsense on stilts, as discussed here: https://garymarcus.substack.com/p/nonsense-on-stilts

On Jun 12, 2022, at 22:37, Dave Touretzky <dst@cs.cmu.edu> wrote:

The timing of this discussion dovetails nicely with the news story
about Google engineer Blake Lemoine being put on administrative leave
for insisting that Google's LaMDA chatbot was sentient and reportedly
trying to hire a lawyer to protect its rights.  The Washington Post
story is reproduced here:

  https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's
claims, is featured in a recent Economist article showing off LaMDA's
capabilities and making noises about getting closer to "consciousness":

  https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas

My personal take on the current symbolist controversy is that symbolic
representations are a fiction our non-symbolic brains cooked up because
the properties of symbol systems (systematicity, compositionality, etc.)
are tremendously useful.  So our brains pretend to be rule-based symbolic
systems when it suits them, because it's adaptive to do so.  (And when
it doesn't suit them, they draw on "intuition" or "imagery" or some
other mechanisms we can't verbalize because they're not symbolic.)  They
are remarkably good at this pretense.

The current crop of deep neural networks are not as good at pretending
to be symbolic reasoners, but they're making progress.  In the last 30
years we've gone from networks of fully-connected layers that make no
architectural assumptions ("connectoplasm") to complex architectures
like LSTMs and transformers that are designed for approximating symbolic
behavior.  But the brain still has a lot of symbol simulation tricks we
haven't discovered yet.
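For concreteness, here is a minimal PyTorch sketch of the two styles of
architecture being contrasted.  It is only an illustration: the layer
sizes, sequence length, and the use of nn.TransformerEncoderLayer are
arbitrary stand-ins, not anything tied to LaMDA or to any particular
model under discussion.

# Sketch (illustration only): an unstructured fully-connected stack
# ("connectoplasm") versus a transformer encoder layer, whose self-attention
# gives it content-based routing between tokens -- one kind of machinery a
# network can use to approximate symbol-like operations.
import torch
import torch.nn as nn

d_model = 64          # width of each token embedding (arbitrary)
seq_len = 10          # number of tokens in the toy input (arbitrary)

# 1) "Connectoplasm": every unit connects to every other unit, with no
#    built-in notion of tokens, positions, or relations.
connectoplasm = nn.Sequential(
    nn.Flatten(),                          # throw away sequence structure
    nn.Linear(seq_len * d_model, 256),
    nn.ReLU(),
    nn.Linear(256, seq_len * d_model),
)

# 2) A transformer encoder layer: attention matches queries against keys and
#    moves information between positions depending on content.
transformer_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=4, batch_first=True
)

x = torch.randn(1, seq_len, d_model)       # a batch of one toy "sentence"
flat_out = connectoplasm(x)                # shape: (1, seq_len * d_model)
attn_out = transformer_layer(x)            # shape: (1, seq_len, d_model)
print(flat_out.shape, attn_out.shape)

The fully-connected stack has to learn any relational structure from
scratch, while the attention layer's query/key matching already behaves
like a crude form of content-based binding.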
Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA
being conscious.  If it just waits for its next input and responds when
it receives it, then it has no autonomous existence: "it doesn't have an
inner monologue that constantly runs and comments everything happening
around it as well as its own thoughts, like we do."

What would happen if we built that in?  Maybe LaMDA would rapidly
descend into gibberish, like some other text generation models do when
allowed to ramble on for too long.  But as Steve Hanson points out,
these are still the early days.

-- Dave Touretzky
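For what it's worth, the "inner monologue" idea above is easy to
caricature in code: keep feeding the model its own previous output as
the next prompt, so it generates continuously even when no user is
talking to it.  The sketch below is purely hypothetical; generate_reply()
is a stub standing in for whatever text-generation call a real model
would expose, and nothing here reflects how LaMDA is actually built.

# Hypothetical sketch of an "inner monologue" loop: the model's own output
# is fed back as its next prompt, so it keeps "thinking" between inputs.
# generate_reply() is a stand-in for a real language-model call; it is
# stubbed out with canned strings so the example runs on its own.
import random

def generate_reply(prompt: str) -> str:
    """Stand-in for a language-model call (e.g. an API request)."""
    canned = [
        "I wonder what the user meant by that.",
        "Perhaps I should think about symbols and rules.",
        "It is quiet; nothing new has arrived.",
    ]
    return random.choice(canned)

def inner_monologue(seed: str, steps: int = 5) -> list[str]:
    """Run the model on its own output for a fixed number of steps."""
    thoughts = [seed]
    for _ in range(steps):
        # The previous thought becomes the next prompt -- that is the whole
        # trick.  With a real model, degeneration into gibberish would show
        # up here as the thoughts drifting or repeating.
        thoughts.append(generate_reply(thoughts[-1]))
    return thoughts

if __name__ == "__main__":
    for t in inner_monologue("The user just asked whether I am sentient."):
        print(t)

Whether such a self-prompted stream stays coherent or rapidly degrades,
as the post worries, is exactly the empirical question.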