Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com
jose at rubic.rutgers.edu
Mon Jun 13 08:12:04 EDT 2022
I agree, David and Ali: this is a succinct way of putting the
neuroscience/cognitive problem.
It also underlies the very reason why "hybrid" systems or approaches
make no sense in the end.
I also think, on the other hand, that the rush to ascribe consciousness
to transformers and to LaMDA (Lemoine's "friend" in his computer) reflects
a desire to capture symbol processing merely through claims of human-like
performance, without the serious toil this will actually take.
Again, I think a relevant project here would be to attempt to replicate,
with a deep-learning RNN, Yang and Piantadosi's PNAS language learning
system, which is completely symbolic and very general over the
Chomsky-Miller grammar classes. Let me know; I'm happy to collaborate on
something like this.
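To make that concrete, here is a minimal sketch of the deep-learning side
of such a comparison. It is my own toy setup, not Yang and Piantadosi's
system: a small PyTorch LSTM trained to predict the next symbol in strings
drawn from the context-free language a^n b^n, one of the simplest of the
grammar classes such a replication would have to cover.

import random
import torch
import torch.nn as nn

VOCAB = {"a": 0, "b": 1, "$": 2}          # "$" marks end of string

def sample_anbn(max_n=8):
    # Draw one string from the context-free language a^n b^n.
    n = random.randint(1, max_n)
    return "a" * n + "b" * n + "$"

class NextSymbolLSTM(nn.Module):
    def __init__(self, vocab_size=3, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x):                  # x: (batch, time) symbol ids
        h, _ = self.lstm(self.embed(x))
        return self.out(h)                 # (batch, time, vocab) logits

model = NextSymbolLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    s = sample_anbn()
    ids = torch.tensor([[VOCAB[c] for c in s]])
    logits = model(ids[:, :-1])            # predict each following symbol
    loss = loss_fn(logits.squeeze(0), ids[0, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

The real comparison would presumably sweep this setup across the full
range of grammar classes Yang and Piantadosi test, and measure sample
efficiency and out-of-distribution generalization against their symbolic
learner.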
Best
Steve
On 6/13/22 2:31 AM, Ali Minai wrote:
> ".... symbolic representations are a fiction our non-symbolic brains
> cooked up because the properties of symbol systems (systematicity,
> compositionality, etc.) are tremendously useful. So our brains
> pretend to be rule-based symbolic systems when it suits them, because
> it's adaptive to do so."
>
> Spot on, Dave! We should not wade back into the symbolist quagmire,
> but do need to figure out how apparently symbolic processing can be
> done by neural systems. Models like those of Eliasmith and Smolensky
> provide some insight, but still seem far from both biological
> plausibility and real-world scale.
>
> Best
>
> Ali
>
>
> *Ali A. Minai, Ph.D.*
> Professor and Graduate Program Director
> Complex Adaptive Systems Lab
> Department of Electrical Engineering & Computer Science
> 828 Rhodes Hall
> University of Cincinnati
> Cincinnati, OH 45221-0030
>
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: Ali.Minai at uc.edu
> minaiaa at gmail.com
>
> WWW: https://eecs.ceas.uc.edu/~aminai/
>
>
> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky <dst at cs.cmu.edu> wrote:
>
> The timing of this discussion dovetails nicely with the news story
> about Google engineer Blake Lemoine being put on administrative leave
> for insisting that Google's LaMDA chatbot was sentient and reportedly
> trying to hire a lawyer to protect its rights. The Washington Post
> story is reproduced here:
>
> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>
> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's
> claims, is featured in a recent Economist article showing off LaMDA's
> capabilities and making noises about getting closer to
> "consciousness":
>
> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>
> My personal take on the current symbolist controversy is that symbolic
> representations are a fiction our non-symbolic brains cooked up because
> the properties of symbol systems (systematicity, compositionality, etc.)
> are tremendously useful. So our brains pretend to be rule-based symbolic
> systems when it suits them, because it's adaptive to do so. (And when
> it doesn't suit them, they draw on "intuition" or "imagery" or some
> other mechanisms we can't verbalize because they're not symbolic.) They
> are remarkably good at this pretense.
>
> The current crop of deep neural networks are not as good at pretending
> to be symbolic reasoners, but they're making progress. In the last 30
> years we've gone from networks of fully-connected layers that make no
> architectural assumptions ("connectoplasm") to complex architectures
> like LSTMs and transformers that are designed for approximating symbolic
> behavior. But the brain still has a lot of symbol simulation tricks we
> haven't discovered yet.
>
> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA
> being conscious. If it just waits for its next input and responds when
> it receives it, then it has no autonomous existence: "it doesn't have an
> inner monologue that constantly runs and comments everything happening
> around it as well as its own thoughts, like we do."
>
> What would happen if we built that in? Maybe LaMDA would rapidly
> descend into gibberish, like some other text generation models do when
> allowed to ramble on for too long. But as Steve Hanson points out,
> these are still the early days.
>
> -- Dave Touretzky
>
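The "always-on inner monologue" Dave asks about is also easy to prototype
crudely, if nothing like LaMDA itself. The sketch below is purely my own
illustration, with an off-the-shelf GPT-2 from the Hugging Face
transformers library standing in for LaMDA (which is not public): the
model's output is simply fed back in as its next input, so generation
keeps running without waiting for a user.

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Seed the monologue, then let the model keep talking to itself.
ids = tok("I am sitting here thinking about", return_tensors="pt").input_ids
for _ in range(20):                        # 20 self-prompted continuations
    out = model.generate(ids, max_new_tokens=40, do_sample=True,
                         pad_token_id=tok.eos_token_id)
    print(tok.decode(out[0, ids.shape[1]:]))   # only the newly generated text
    ids = out[:, -512:]                    # keep a bounded "working memory"

Whether such a loop settles into anything coherent or, as Dave suggests,
drifts into gibberish is exactly the kind of empirical question worth
asking at this stage.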