Connectionists: The symbolist quagmire

Tsvi Achler achler at gmail.com
Tue Jun 14 08:42:18 EDT 2022


Going along with the thread of conversation, the problem is that academia
is very political. The priority of everyone who thrives in it is to
maintain or increase their position, so much so that they refuse to
consider alternatives to their views.  The problem is amplified for anyone
with a multidisciplinary background.
My experience is that neither Marcus and associates nor Hinton and
associates are willing to look at systems that:
1) are connectionist
2) are scalable
3) use so much feedback that methods like backprop won't work
4) have self-feedback that helps maintain symbolic-like modularity (see the
toy sketch after this list)
5) are unintuitive given today's norms
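To make points 3 and 4 concrete, here is a toy sketch in Python/NumPy of the
kind of system I mean: recognition is computed by iterating self-feedback
against the input rather than by a trained feedforward pass. The weights and
update rule below are illustrative assumptions only, not any particular
published model; the point is just that recognition-time dynamics, not a
backprop-trained forward mapping, carry the computation.

  import numpy as np

  # Toy illustration only: the associations W and the update rule are
  # hypothetical stand-ins, not a particular published model.
  def feedback_recognize(x, W, steps=50, eps=1e-9):
      # Infer activations y by repeatedly comparing the input x against the
      # network's own top-down reconstruction W.T @ y.  The feedback loop
      # does the work at recognition time; nothing is backpropagated.
      y = np.ones(W.shape[0]) / W.shape[0]      # start with uniform activations
      for _ in range(steps):
          recon = W.T @ y + eps                 # top-down reconstruction of the input
          ratio = x / recon                     # where reconstruction over/under-shoots
          y = y * (W @ ratio) / (W.sum(axis=1) + eps)   # self-feedback update
      return y

  # Two units with overlapping input features: the loop settles on the unit
  # whose expected pattern best explains the input, keeping the units modular.
  W = np.array([[1.0, 1.0, 0.0],    # unit 0 expects features 0 and 1
                [0.0, 1.0, 1.0]])   # unit 1 expects features 1 and 2
  print(feedback_recognize(np.array([1.0, 1.0, 0.0]), W))   # unit 0 dominates
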
This goes on year after year, and the same old stories get rehashed.
The same is true of related brain-science fields, e.g. theoretical
neuroscience and cognitive psychology.
In the end, only those who are entrenched and tend to the popularity contest
can get funding and publish in venues where their work will be read.  It is
not worth pursuing or publishing anything novel in academia.
Because of the awful politics, the corporate world is all that is left.
Moreover, Marcus and Hinton themselves enjoy the less political corporate
environment as well.
-Tsvi

On Mon, Jun 13, 2022 at 6:08 AM Gary Marcus <gary.marcus at nyu.edu> wrote:

> Cute phrase, but what does “symbolist quagmire” mean? Once upon a time,
> Dave and Geoff were both pioneers in trying to get symbols and neural
> nets to live in harmony. Don’t we still need to do that, and if not, why not?
>
> Surely, at the very least
> - we want our AI to be able to take advantage of the (large) fraction of
> world knowledge that is represented in symbolic form (language, including
> unstructured text, logic, math, programming etc)
> - any model of the human mind ought to be able to explain how humans can so
> effectively communicate via the symbols of language and how trained humans
> can deal with (to the extent that they can) logic, math, programming, etc
>
> Folks like Bengio have joined me in seeing the need for “System II”
> processes. That’s a bit of a rough approximation, but I don’t see how we
> get to either AI or satisfactory models of the mind without confronting the
> “quagmire”
>
>
> On Jun 13, 2022, at 00:31, Ali Minai <minaiaa at gmail.com> wrote:
>
> 
> ".... symbolic representations are a fiction our non-symbolic brains
> cooked up because the properties of symbol systems (systematicity,
> compositionality, etc.) are tremendously useful.  So our brains pretend to
> be rule-based symbolic systems when it suits them, because it's adaptive to
> do so."
>
> Spot on, Dave! We should not wade back into the symbolist quagmire, but do
> need to figure out how apparently symbolic processing can be done by neural
> systems. Models like those of Eliasmith and Smolensky provide some insight,
> but still seem far from both biological plausibility and real-world scale.
>
> Best
>
> Ali
>
>
> *Ali A. Minai, Ph.D.*
> Professor and Graduate Program Director
> Complex Adaptive Systems Lab
> Department of Electrical Engineering & Computer Science
> 828 Rhodes Hall
> University of Cincinnati
> Cincinnati, OH 45221-0030
>
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: Ali.Minai at uc.edu
>           minaiaa at gmail.com
>
> WWW: https://eecs.ceas.uc.edu/~aminai/
>
>
> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky <dst at cs.cmu.edu> wrote:
>
>> The timing of this discussion dovetails nicely with the news story
>> about Google engineer Blake Lemoine being put on administrative leave
>> for insisting that Google's LaMDA chatbot was sentient and reportedly
>> trying to hire a lawyer to protect its rights.  The Washington Post
>> story is reproduced here:
>>
>>
>> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>>
>> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's
>> claims, is featured in a recent Economist article showing off LaMDA's
>> capabilities and making noises about getting closer to "consciousness":
>>
>>
>> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>>
>> My personal take on the current symbolist controversy is that symbolic
>> representations are a fiction our non-symbolic brains cooked up because
>> the properties of symbol systems (systematicity, compositionality, etc.)
>> are tremendously useful.  So our brains pretend to be rule-based symbolic
>> systems when it suits them, because it's adaptive to do so.  (And when
>> it doesn't suit them, they draw on "intuition" or "imagery" or some
>> other mechanisms we can't verbalize because they're not symbolic.)  They
>> are remarkably good at this pretense.
>>
>> The current crop of deep neural networks is not as good at pretending
>> to be symbolic reasoners, but they're making progress.  In the last 30
>> years we've gone from networks of fully-connected layers that make no
>> architectural assumptions ("connectoplasm") to complex architectures
>> like LSTMs and transformers that are designed for approximating symbolic
>> behavior.  But the brain still has a lot of symbol simulation tricks we
>> haven't discovered yet.
>>
>> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA
>> being conscious.  If it just waits for its next input and responds when
>> it receives it, then it has no autonomous existence: "it doesn't have an
>> inner monologue that constantly runs and comments everything happening
>> around it as well as its own thoughts, like we do."
>>
>> What would happen if we built that in?  Maybe LaMDA would rapidly
>> descend into gibberish, like some other text generation models do when
>> allowed to ramble on for too long.  But as Steve Hanson points out,
>> these are still the early days.
>>
>> -- Dave Touretzky
>>
>

