Connectionists: The symbolist quagmire

jose at rubic.rutgers.edu
Mon Jun 13 14:00:03 EDT 2022


Nope.  But let's take this offline, as one of us is confused.

On 6/13/22 1:58 PM, Gary Marcus wrote:
> I think you are conflating Bengio’s views with Kahneman’s
>
> Bengio wants to have a System I, which he thinks is not the same as 
> System II. He doesn’t want System II to be symbol-based, but he does 
> want to do many of the things that symbols have historically done. That 
> is an ambition, and we can see how it goes. My impression is that he is 
> on a road toward recapitulating a lot of historically symbolic tools, 
> such as key-value pairs and operations that work over those pairs. We 
> will see where he gets to; it’s an interesting project.
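>
> (A side note to make "key-value pairs" concrete: attention layers 
> already implement a soft, differentiable key-value lookup. Below is a 
> minimal sketch of that reading -- nobody's actual model, with arbitrary 
> dimensions, just an illustration of the mechanism:
>
> import torch
> import torch.nn.functional as F
>
> # Toy illustration (not from any cited paper): one attention step
> # viewed as a soft key-value lookup. A query is compared against
> # stored keys, and the result is a similarity-weighted blend of the
> # stored values -- a differentiable relative of a symbolic
> # key-value table.
> keys = torch.randn(10, 64)    # 10 stored keys
> values = torch.randn(10, 64)  # the value bound to each key
> query = torch.randn(1, 64)    # what we want to look up
>
> weights = F.softmax(query @ keys.T / 64 ** 0.5, dim=-1)  # soft match scores
> retrieved = weights @ values  # blended, "looked-up" value
>
> Whether this kind of soft lookup can do all the work that discrete 
> symbolic binding does is exactly the open empirical question.)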
>
> Kahneman coined the terms; I prefer to call them Reflexive and 
> Deliberative. In my view, deliberation of that sort requires symbols. 
> For what it’s worth, Kahneman was enormously sympathetic (both publicly 
> and in an email) to my paper “The Next Decade in AI,” in which I argued 
> that one needed a neurosymbolic system with rich knowledge and 
> reasoning over detailed cognitive models.
>
> It’s all an empirical question as to what can be done.
>
> I guess “he” below refers to Bengio, not to Kahneman, who originated 
> the System I/II distinction. Danny is open about how these things cash 
> out, and would also be the first to tell you that the distinction is 
> just a rough one, in any event.
>
> Gary
>
>> On Jun 13, 2022, at 10:37, jose at rubic.rutgers.edu wrote:
>>
>>
>>
>> Well, your conclusion is based on some hearsay and a talk he gave; I 
>> talked with him directly, and we discussed what you are calling System 
>> II, which to me and to him just means explicit memory/learning. He has 
>> no intention of incorporating anything like symbols or hybrid 
>> neural/symbolic systems, but he does intend to model conscious symbol 
>> manipulation, more in the way Dave T. outlined.
>>
>> AND, I'm sure that if he were seeing this, he would say, "Steve's right."
>>
>> Steve
>>
>> On 6/13/22 1:10 PM, Gary Marcus wrote:
>>> I don’t think I need to read your conversation to have serious 
>>> doubts about your conclusion, but feel free to reprise the arguments 
>>> here.
>>>
>>>> On Jun 13, 2022, at 08:44, jose at rubic.rutgers.edu wrote:
>>>>
>>>>
>>>>
>>>> We prefer the explicit/implicit terminology from cognitive 
>>>> psychology, but System II is not symbolic.
>>>>
>>>> See the AIHUB conversation about this; we discuss it specifically there.
>>>>
>>>>
>>>> Steve
>>>>
>>>>
>>>> On 6/13/22 10:00 AM, Gary Marcus wrote:
>>>>> Please reread my sentence and reread his recent work. Bengio has 
>>>>> absolutely joined in calling for System II processes. An example is 
>>>>> his 2019 NeurIPS keynote: 
>>>>> https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/
>>>>>
>>>>> Whether he wants to call it a hybrid approach is his business, but 
>>>>> he certainly sees that traditional approaches are not covering 
>>>>> things like causality and abstract generalization. Maybe he will 
>>>>> find a new way, but he recognizes what has not been covered by 
>>>>> existing ones.
>>>>>
>>>>> And he is emphasizing both relationships and out-of-distribution 
>>>>> learning, just as I have been for a long time. From his most recent 
>>>>> arXiv paper, posted a few days ago, the first two sentences of which 
>>>>> sound almost exactly like what I have been saying for years:
>>>>>
>>>>> [Submitted on 9 Jun 2022]
>>>>>
>>>>>
>>>>>   On Neural Architecture Inductive Biases for Relational Tasks
>>>>>
>>>>> Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio, 
>>>>> Blake Richards, Guillaume Lajoie
>>>>>
>>>>>     Current deep learning approaches have shown good
>>>>>     in-distribution generalization performance, but struggle with
>>>>>     out-of-distribution generalization. This is especially true in
>>>>>     the case of tasks involving abstract relations like
>>>>>     recognizing rules in sequences, as we find in many
>>>>>     intelligence tests. Recent work has explored how forcing
>>>>>     relational representations to remain distinct from sensory
>>>>>     representations, as it seems to be the case in the brain, can
>>>>>     help artificial systems. Building on this work, we further
>>>>>     explore and formalize the advantages afforded by 'partitioned'
>>>>>     representations of relations and sensory details, and how this
>>>>>     inductive bias can help recompose learned relational structure
>>>>>     in newly encountered settings. We introduce a simple
>>>>>     architecture based on similarity scores which we name
>>>>>     Compositional Relational Network (CoRelNet). Using this model,
>>>>>     we investigate a series of inductive biases that ensure
>>>>>     abstract relations are learned and represented distinctly from
>>>>>     sensory data, and explore their effects on out-of-distribution
>>>>>     generalization for a series of relational psychophysics tasks.
>>>>>     We find that simple architectural choices can outperform
>>>>>     existing models in out-of-distribution generalization.
>>>>>     Together, these results show that partitioning relational
>>>>>     representations from other information streams may be a simple
>>>>>     way to augment existing network architectures' robustness when
>>>>>     performing out-of-distribution relational computations.
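>>>>>
>>>>> To make the similarity-score idea concrete, here is a rough sketch 
>>>>> of how such a "partitioned" relational module might look. This is a 
>>>>> toy illustration, not the authors' code; the layer sizes and the 
>>>>> softmax choice are assumptions, and the real CoRelNet details are 
>>>>> in the paper:
>>>>>
>>>>> import torch
>>>>> import torch.nn as nn
>>>>>
>>>>> class SimilarityRelationNet(nn.Module):
>>>>>     # Hypothetical sketch of a similarity-score relational module.
>>>>>     def __init__(self, in_dim, embed_dim, n_objects, out_dim):
>>>>>         super().__init__()
>>>>>         # Sensory pathway: embed each object independently.
>>>>>         self.encoder = nn.Linear(in_dim, embed_dim)
>>>>>         # Relational pathway: sees ONLY the n x n similarity
>>>>>         # matrix, never the sensory embeddings themselves --
>>>>>         # the "partitioning" inductive bias the abstract describes.
>>>>>         self.relational_head = nn.Sequential(
>>>>>             nn.Linear(n_objects * n_objects, 64),
>>>>>             nn.ReLU(),
>>>>>             nn.Linear(64, out_dim),
>>>>>         )
>>>>>
>>>>>     def forward(self, x):        # x: (batch, n_objects, in_dim)
>>>>>         z = self.encoder(x)      # (batch, n_objects, embed_dim)
>>>>>         # Pairwise similarity scores between object embeddings.
>>>>>         sims = torch.softmax(z @ z.transpose(1, 2), dim=-1)
>>>>>         return self.relational_head(sims.flatten(1))
>>>>>
>>>>> The design point is that downstream relational computation is 
>>>>> forced to work from similarities alone, which is what the paper 
>>>>> argues helps out-of-distribution generalization.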
>>>>>
>>>>>
>>>>> Kind of scandalous that he doesn’t ever cite me for having framed 
>>>>> that argument, even though I have repeatedly called his attention 
>>>>> to that oversight, but that’s a story for another day, in which I 
>>>>> elaborate on some of Schmidhuber’s observations on history.
>>>>>
>>>>>
>>>>> Gary
>>>>>
>>>>>> On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> No, Yoshua has *not* joined you. Explicit processes, memory, and 
>>>>>> problem solving are not symbolic per se.
>>>>>>
>>>>>> These original distinctions in memory and learning came from 
>>>>>> Endel Tulving, and of course there are brain structures that 
>>>>>> support them.
>>>>>>
>>>>>> And Yoshua was clear about that in the discussions I had with him for AIHUB.
>>>>>>
>>>>>> He's definitely not looking to create some hybrid approach.
>>>>>>
>>>>>> Steve
>>>>>>
>>>>>> On 6/13/22 8:36 AM, Gary Marcus wrote:
>>>>>>> Cute phrase, but what does “symbolist quagmire” mean? Once upon 
>>>>>>> a time, Dave and Geoff were both pioneers in trying to get 
>>>>>>> symbols and neural nets to live in harmony. Don’t we still need 
>>>>>>> to do that, and if not, why not?
>>>>>>>
>>>>>>> Surely, at the very least:
>>>>>>> - we want our AI to be able to take advantage of the (large) 
>>>>>>> fraction of world knowledge that is represented in symbolic form 
>>>>>>> (language, including unstructured text, logic, math, programming, 
>>>>>>> etc.)
>>>>>>> - any model of the human mind ought to be able to explain how 
>>>>>>> humans can so effectively communicate via the symbols of 
>>>>>>> language, and how trained humans can deal with (to the extent 
>>>>>>> that they can) logic, math, programming, etc.
>>>>>>>
>>>>>>> Folks like Bengio have joined me in seeing the need for “System 
>>>>>>> II” processes. That’s a bit of a rough approximation, but I 
>>>>>>> don’t see how we get to either AI or satisfactory models of the 
>>>>>>> mind without confronting the “quagmire.”
>>>>>>>
>>>>>>>
>>>>>>>> On Jun 13, 2022, at 00:31, Ali Minai <minaiaa at gmail.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> ".... symbolic representations are a fiction our non-symbolic 
>>>>>>>> brains cooked up because the properties of symbol systems 
>>>>>>>> (systematicity, compositionality, etc.) are tremendously 
>>>>>>>> useful.  So our brains pretend to be rule-based symbolic 
>>>>>>>> systems when it suits them, because it's adaptive to do so."
>>>>>>>>
>>>>>>>> Spot on, Dave! We should not wade back into the symbolist 
>>>>>>>> quagmire, but do need to figure out how apparently symbolic 
>>>>>>>> processing can be done by neural systems. Models like those of 
>>>>>>>> Eliasmith and Smolensky provide some insight, but still seem 
>>>>>>>> far from both biological plausibility and real-world scale.
>>>>>>>>
>>>>>>>> Best
>>>>>>>>
>>>>>>>> Ali
>>>>>>>>
>>>>>>>>
>>>>>>>> *Ali A. Minai, Ph.D.*
>>>>>>>> Professor and Graduate Program Director
>>>>>>>> Complex Adaptive Systems Lab
>>>>>>>> Department of Electrical Engineering & Computer Science
>>>>>>>> 828 Rhodes Hall
>>>>>>>> University of Cincinnati
>>>>>>>> Cincinnati, OH 45221-0030
>>>>>>>>
>>>>>>>> Phone: (513) 556-4783
>>>>>>>> Fax: (513) 556-7326
>>>>>>>> Email: Ali.Minai at uc.edu
>>>>>>>> minaiaa at gmail.com
>>>>>>>>
>>>>>>>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky <dst at cs.cmu.edu> wrote:
>>>>>>>>
>>>>>>>>     The timing of this discussion dovetails nicely with the 
>>>>>>>>     news story about Google engineer Blake Lemoine being put on 
>>>>>>>>     administrative leave for insisting that Google's LaMDA 
>>>>>>>>     chatbot was sentient and reportedly trying to hire a lawyer 
>>>>>>>>     to protect its rights.  The Washington Post story is 
>>>>>>>>     reproduced here:
>>>>>>>>
>>>>>>>>     https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>>>>>>>>
>>>>>>>>     Google vice president Blaise Aguera y Arcas, who dismissed 
>>>>>>>>     Lemoine's claims, is featured in a recent Economist article 
>>>>>>>>     showing off LaMDA's capabilities and making noises about 
>>>>>>>>     getting closer to "consciousness":
>>>>>>>>
>>>>>>>>     https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>>>>>>>>
>>>>>>>>     My personal take on the current symbolist controversy is 
>>>>>>>>     that symbolic representations are a fiction our 
>>>>>>>>     non-symbolic brains cooked up because the properties of 
>>>>>>>>     symbol systems (systematicity, compositionality, etc.) are 
>>>>>>>>     tremendously useful.  So our brains pretend to be 
>>>>>>>>     rule-based symbolic systems when it suits them, because 
>>>>>>>>     it's adaptive to do so.  (And when it doesn't suit them, 
>>>>>>>>     they draw on "intuition" or "imagery" or some other 
>>>>>>>>     mechanisms we can't verbalize because they're not 
>>>>>>>>     symbolic.)  They are remarkably good at this pretense.
>>>>>>>>
>>>>>>>>     The current crop of deep neural networks are not as good 
>>>>>>>>     at pretending to be symbolic reasoners, but they're making 
>>>>>>>>     progress.  In the last 30 years we've gone from networks 
>>>>>>>>     of fully-connected layers that make no architectural 
>>>>>>>>     assumptions ("connectoplasm") to complex architectures 
>>>>>>>>     like LSTMs and transformers that are designed for 
>>>>>>>>     approximating symbolic behavior.  But the brain still has 
>>>>>>>>     a lot of symbol simulation tricks we haven't discovered 
>>>>>>>>     yet.
>>>>>>>>
>>>>>>>>     Slashdot reader ZiggyZiggyZig had an interesting argument 
>>>>>>>>     against LaMDA being conscious.  If it just waits for its 
>>>>>>>>     next input and responds when it receives it, then it has 
>>>>>>>>     no autonomous existence: "it doesn't have an inner 
>>>>>>>>     monologue that constantly runs and comments everything 
>>>>>>>>     happening around it as well as its own thoughts, like we 
>>>>>>>>     do."
>>>>>>>>
>>>>>>>>     What would happen if we built that in?  Maybe LaMDA would 
>>>>>>>>     rapidly descend into gibberish, like some other text 
>>>>>>>>     generation models do when allowed to ramble on for too 
>>>>>>>>     long.  But as Steve Hanson points out, these are still 
>>>>>>>>     the early days.
>>>>>>>>
>>>>>>>>     -- Dave Touretzky
>>>>>>>>