Connectionists: The symbolist quagmire

jose at rubic.rutgers.edu
Mon Jun 13 11:44:14 EDT 2022


We prefer the explicit/implicit cognitive psychology references, but 
System II is not symbolic.

See the AIHUB conversation about this; we discuss it specifically.


Steve


On 6/13/22 10:00 AM, Gary Marcus wrote:
> Please reread my sentence and reread his recent work. Bengio has 
> absolutely joined in calling for System II processes. An example is his 
> 2019 NeurIPS keynote: 
> https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/
>
> Whether he wants to call it a hybrid approach is his business, but he 
> certainly sees that traditional approaches are not covering things 
> like causality and abstract generalization. Maybe he will find a new 
> way, but he recognizes what existing approaches have not covered.
>
> And he is emphasizing both relationships and out-of-distribution 
> learning, just as I have been for a long time. From his most recent 
> arXiv paper, posted a few days ago, the first two sentences of which 
> sound almost exactly like what I have been saying for years:
>
> [Submitted on 9 Jun 2022]
>
>
>   On Neural Architecture Inductive Biases for Relational Tasks
>
> Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio, Blake 
> Richards, Guillaume Lajoie
>
>     Current deep learning approaches have shown good in-distribution
>     generalization performance, but struggle with out-of-distribution
>     generalization. This is especially true in the case of tasks
>     involving abstract relations like recognizing rules in sequences,
>     as we find in many intelligence tests. Recent work has explored
>     how forcing relational representations to remain distinct from
>     sensory representations, as it seems to be the case in the brain,
>     can help artificial systems. Building on this work, we further
>     explore and formalize the advantages afforded by 'partitioned'
>     representations of relations and sensory details, and how this
>     inductive bias can help recompose learned relational structure in
>     newly encountered settings. We introduce a simple architecture
>     based on similarity scores which we name Compositional Relational
>     Network (CoRelNet). Using this model, we investigate a series of
>     inductive biases that ensure abstract relations are learned and
>     represented distinctly from sensory data, and explore their
>     effects on out-of-distribution generalization for a series of
>     relational psychophysics tasks. We find that simple architectural
>     choices can outperform existing models in out-of-distribution
>     generalization. Together, these results show that partitioning
>     relational representations from other information streams may be a
>     simple way to augment existing network architectures' robustness
>     when performing out-of-distribution relational computations.
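>
> To make the partitioning idea concrete, here is a minimal sketch (in
> PyTorch; the dimensions, names, and readout are illustrative guesses,
> not the authors' actual implementation) of reading a relation off a
> matrix of pairwise similarity scores while withholding the sensory
> encodings themselves from the relational readout:
>
>     import torch
>     import torch.nn as nn
>
>     class CoRelNetSketch(nn.Module):
>         """Illustrative sketch only, not the paper's model."""
>         def __init__(self, in_dim=64, hid_dim=128, n_objects=5, n_classes=2):
>             super().__init__()
>             # Sensory pathway: one encoding per object.
>             self.encoder = nn.Linear(in_dim, hid_dim)
>             # Relational readout sees *only* the similarity matrix.
>             self.readout = nn.Sequential(
>                 nn.Linear(n_objects * n_objects, hid_dim),
>                 nn.ReLU(),
>                 nn.Linear(hid_dim, n_classes),
>             )
>
>         def forward(self, x):  # x: (batch, n_objects, in_dim)
>             z = self.encoder(x)
>             # Pairwise similarity scores between object encodings.
>             sim = torch.softmax(z @ z.transpose(1, 2), dim=-1)
>             # Sensory detail is discarded before the relational readout.
>             return self.readout(sim.flatten(start_dim=1))
>
> The hoped-for payoff of such a partition is that the readout can only
> exploit relational structure, which is the part that should transfer
> out of distribution.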
>
>
>     Kind of scandalous that he doesn’t ever cite me for having framed
>     that argument, even though I have repeatedly called his attention
>     to the oversight, but that’s another story for another day, in
>     which I will elaborate on some of Schmidhuber’s observations about
>     history.
>
>
> Gary
>
>> On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu wrote:
>>
>> 
>>
>> No, Yoshua has *not* joined you. Explicit processes, memory, and 
>> problem solving are not symbolic per se.
>>
>> These original distinctions in memory and learning come from Endel 
>> Tulving, and of course there are brain structures that support them.
>>
>> And Yoshua was clear about that in discussions I had with him for AIHUB.
>>
>> He's definitely not looking to create some hybrid approach.
>>
>> Steve
>>
>> On 6/13/22 8:36 AM, Gary Marcus wrote:
>>> Cute phrase, but what does “symbolist quagmire” mean? Once upon a 
>>> time, Dave and Geoff were both pioneers in trying to get symbols 
>>> and neural nets to live in harmony. Don’t we still need to do that, 
>>> and if not, why not?
>>>
>>> Surely, at the very least:
>>> - we want our AI to be able to take advantage of the (large) 
>>> fraction of world knowledge that is represented in symbolic form 
>>> (language, including unstructured text, logic, math, programming, etc.)
>>> - any model of the human mind ought to be able to explain how humans 
>>> can so effectively communicate via the symbols of language, and how 
>>> trained humans can deal with (to the extent that they can) logic, 
>>> math, programming, etc.
>>>
>>> Folks like Bengio have joined me in seeing the need for “System II” 
>>> processes. That’s a bit of a rough approximation, but I don’t see 
>>> how we get to either AI or satisfactory models of the mind without 
>>> confronting the “quagmire”.
>>>
>>>
>>>> On Jun 13, 2022, at 00:31, Ali Minai <minaiaa at gmail.com> wrote:
>>>>
>>>> 
>>>> ".... symbolic representations are a fiction our non-symbolic 
>>>> brains cooked up because the properties of symbol systems 
>>>> (systematicity, compositionality, etc.) are tremendously useful.  
>>>> So our brains pretend to be rule-based symbolic systems when it 
>>>> suits them, because it's adaptive to do so."
>>>>
>>>> Spot on, Dave! We should not wade back into the symbolist quagmire, 
>>>> but we do need to figure out how apparently symbolic processing can 
>>>> be done by neural systems. Models like those of Eliasmith and 
>>>> Smolensky provide some insight, but still seem far from both 
>>>> biological plausibility and real-world scale.
>>>>
>>>> Best
>>>>
>>>> Ali
>>>>
>>>>
>>>> *Ali A. Minai, Ph.D.*
>>>> Professor and Graduate Program Director
>>>> Complex Adaptive Systems Lab
>>>> Department of Electrical Engineering & Computer Science
>>>> 828 Rhodes Hall
>>>> University of Cincinnati
>>>> Cincinnati, OH 45221-0030
>>>>
>>>> Phone: (513) 556-4783
>>>> Fax: (513) 556-7326
>>>> Email: Ali.Minai at uc.edu
>>>> minaiaa at gmail.com
>>>>
>>>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>>>
>>>>
>>>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky <dst at cs.cmu.edu> wrote:
>>>>
>>>>     The timing of this discussion dovetails nicely with the news
>>>>     story about Google engineer Blake Lemoine being put on
>>>>     administrative leave for insisting that Google's LaMDA chatbot
>>>>     was sentient, and reportedly trying to hire a lawyer to protect
>>>>     its rights.  The Washington Post story is reproduced here:
>>>>
>>>>     https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>>>>
>>>>     Google vice president Blaise Aguera y Arcas, who dismissed
>>>>     Lemoine's claims, is featured in a recent Economist article
>>>>     showing off LaMDA's capabilities and making noises about
>>>>     getting closer to "consciousness":
>>>>
>>>>     https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>>>>
>>>>     My personal take on the current symbolist controversy is that
>>>>     symbolic representations are a fiction our non-symbolic brains
>>>>     cooked up because the properties of symbol systems
>>>>     (systematicity, compositionality, etc.) are tremendously
>>>>     useful.  So our brains pretend to be rule-based symbolic
>>>>     systems when it suits them, because it's adaptive to do so.
>>>>     (And when it doesn't suit them, they draw on "intuition" or
>>>>     "imagery" or some other mechanisms we can't verbalize because
>>>>     they're not symbolic.)  They are remarkably good at this
>>>>     pretense.
>>>>
>>>>     The current crop of deep neural networks are not as good at
>>>>     pretending to be symbolic reasoners, but they're making
>>>>     progress.  In the last 30 years we've gone from networks of
>>>>     fully-connected layers that make no architectural assumptions
>>>>     ("connectoplasm") to complex architectures like LSTMs and
>>>>     transformers that are designed for approximating symbolic
>>>>     behavior.  But the brain still has a lot of symbol simulation
>>>>     tricks we haven't discovered yet.
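>>>>
>>>>     As a toy illustration of that architectural shift (a sketch in
>>>>     PyTorch with arbitrary sizes, not anyone's actual model): a
>>>>     fully-connected net treats its input as one undifferentiated
>>>>     vector, while a transformer layer builds in explicit pairwise
>>>>     comparisons between positions.
>>>>
>>>>         import torch
>>>>         import torch.nn as nn
>>>>
>>>>         seq_len, dim = 8, 32
>>>>         x = torch.randn(1, seq_len, dim)  # a sequence of 8 token vectors
>>>>
>>>>         # "Connectoplasm": one fully-connected map over the flattened
>>>>         # input; no built-in notion of positions or relations.
>>>>         mlp = nn.Sequential(nn.Flatten(), nn.Linear(seq_len * dim, dim))
>>>>         y_mlp = mlp(x)                    # (1, 32)
>>>>
>>>>         # Transformer layer: attention scores every pair of positions,
>>>>         # an inductive bias toward relational, symbol-like processing.
>>>>         layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
>>>>                                            batch_first=True)
>>>>         y_attn = layer(x)                 # (1, 8, 32)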
>>>>
>>>>     Slashdot reader ZiggyZiggyZig had an interesting argument
>>>>     against LaMDA being conscious.  If it just waits for its next
>>>>     input and responds when it receives it, then it has no
>>>>     autonomous existence: "it doesn't have an inner monologue that
>>>>     constantly runs and comments [on] everything happening around
>>>>     it as well as its own thoughts, like we do."
>>>>
>>>>     What would happen if we built that in?  Maybe LaMDA would
>>>>     rapidly descend into gibberish, like some other text generation
>>>>     models do when allowed to ramble on for too long.  But as Steve
>>>>     Hanson points out, these are still the early days.
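>>>>
>>>>     For concreteness, "building that in" could be as simple as the
>>>>     following loop (a hypothetical sketch in Python; model.generate
>>>>     stands in for whatever text-generation interface the system
>>>>     exposes, and is not LaMDA's actual API):
>>>>
>>>>         def inner_monologue(model, context, steps=100):
>>>>             # Feed each output back into the model's own context so
>>>>             # it keeps "thinking" between user inputs.  As noted
>>>>             # above, such loops often drift into gibberish.
>>>>             for _ in range(steps):
>>>>                 thought = model.generate(context)   # hypothetical interface
>>>>                 context = context + "\n" + thought  # the model hears itself
>>>>             return context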
>>>>
>>>>     -- Dave Touretzky
>>>>