Connectionists: The symbolist quagmire

Stephen José Hanson jose at rubic.rutgers.edu
Mon Jun 13 14:38:25 EDT 2022


Ali, agreed with all, very nicely stated. One thing, though: I started 
out 45 years ago studying animal behavior for exactly the reasons you 
outline below, thinking it might be possible to bootstrap up. But 
connectionism in the 80s seemed to suggest there were common elements 
in computational analysis and models that were not so restricted by 
species-specific behavior, but depended more on brain complexity. And 
here we are 30 years later, with AI finally coming into focus as 
neural blobs.

Not clear what happens next. I am pretty sure it won't be the 
symbolist quagmire again.

Steve

On 6/13/22 2:22 PM, Ali Minai wrote:
> Gary and Steve
>
> My use of the phrase "symbolist quagmire" referred only to the 
> explicitly symbolic AI models that dominated from the 60s through the 
> 80s. It was not meant to diminish the importance of understanding 
> symbolic processing and how a distributed, self-organizing system like 
> the brain does it. That is absolutely crucial - as long as we let the 
> systems be brain-like, and not force-fit them into our abstract views 
> of symbolic processing (not saying that anyone here is doing that, but 
> some others are).
>
> My own - frankly biased and entirely intuitive - opinion is that once 
> we have a sufficiently brain-like system with the kind of hierarchical 
> modularity we see in the brain, and sufficiently brain-like learning 
> mechanisms in all their aspects (base of evolutionary inductive 
> biases, realized initially through unsupervised learning, fast RL on 
> top of these coupled with development, then - later - supervised 
> learning in a more mature system, learning through internal rehearsal, 
> learning by prediction mismatch/resonance, use of coordination 
> modes/synergies, etc., etc.), processing that we can interpret as 
> symbolic and compositional will emerge naturally. To this end, we can 
> try to infer neural mechanisms underlying this from experiments and 
> theory (as Bengio seems to be doing), but I have a feeling that it 
> will be hard if we focus only on humans and human-level processing. 
> First, it's very hard to do controlled experiments at the required 
> resolution, and second, this is the most complex instance. As Prof. 
> Smith says in the companion thread, we should ask if animals too do 
> what we would regard as symbolic processing, and I think that a case 
> can be made that they do, albeit at a much simpler level. I have long 
> been fascinated by the data suggesting that birds - perhaps even fish 
> - have the concept of numerical order, and even something like a 
> number line. If we could understand how those simpler brains do it, it 
> might be easier to bootstrap up to more complex instances.
>
> Ultimately we'll understand higher cognition by understanding how it 
> evolved from less complex cognition. For example, several people have 
> suggested that abstract representations might be much 
> higher-dimensional cortical analogs of 2-dimensional hippocampal place 
> representations (2-d in rats - maybe higher-d in primates). That would 
> be consistent with the fact that so much of our abstract reasoning 
> uses spatial and directional metaphors. Re. System I and System II, 
> with all due respect to Kahneman, that is surely a simplification. If 
> we were to look phylogenetically, we would see the layered emergence 
> of more and more complex minds all the way from the Cambrian to now. 
> The binary I and II division should be replaced by a sequence of 
> systems, though, as with everything in evolution, there are a few 
> major punctuations of transformational "enabling technologies", such 
> as the bilaterian architecture at the start of the Cambrian, the 
> vertebrate architecture, the hippocampus, and the cortex.
>
> Truly hybrid systems - neural networks working in tandem with 
> explicitly symbolic systems - might be a short-term route to 
> addressing specific tasks, but will not give us fundamental insight. 
> That is exactly the kind of "error" that Gary has so correctly 
> attributed to much of current machine learning. I realize that 
> reductionistic analysis and modeling is the standard way we understand 
> systems scientifically, but complex systems are resistant to such analysis.
>
> Best
> Ali
>
>
>
> *Ali A. Minai, Ph.D.*
> Professor and Graduate Program Director
> Complex Adaptive Systems Lab
> Department of Electrical Engineering & Computer Science
> 828 Rhodes Hall
> University of Cincinnati
> Cincinnati, OH 45221-0030
>
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: Ali.Minai at uc.edu <mailto:Ali.Minai at uc.edu>
> minaiaa at gmail.com <mailto:minaiaa at gmail.com>
>
> WWW: https://eecs.ceas.uc.edu/~aminai/
>
>
> On Mon, Jun 13, 2022 at 1:37 PM <jose at rubic.rutgers.edu 
> <mailto:jose at rubic.rutgers.edu>> wrote:
>
>     Well, your conclusion is based on some hearsay and a talk he gave.
>     I talked with him directly, and we discussed what you are calling
>     System II, which to him and to me just means explicit memory/learning.
>
>     He has no intention of incorporating anything like symbols or
>     hybrid neural/symbolic systems. He does intend to model conscious
>     symbol manipulation, more in the way Dave T. outlined.
>
>     And I'm sure that if he were seeing this, he would say, "Steve's right".
>
>     Steve
>
>     On 6/13/22 1:10 PM, Gary Marcus wrote:
>>     I don’t think I need to read your conversation to have serious
>>     doubts about your conclusion, but feel free to reprise the
>>     arguments here.
>>
>>>     On Jun 13, 2022, at 08:44, jose at rubic.rutgers.edu
>>>     <mailto:jose at rubic.rutgers.edu> wrote:
>>>
>>>
>>>     We prefer the explicit/implicit cognitive psych references, but
>>>     System II is not symbolic.
>>>
>>>     See the AIHUB conversation about this; we discuss it
>>>     specifically.
>>>
>>>
>>>     Steve
>>>
>>>
>>>     On 6/13/22 10:00 AM, Gary Marcus wrote:
>>>>     Please reread my sentence and reread his recent work. Bengio
>>>>     has absolutely joined in calling for System II processes. A
>>>>     sample is his 2019 NeurIPS keynote:
>>>>     https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/
>>>>
>>>>     Whether he wants to call it a hybrid approach is his business
>>>>     but he certainly sees that traditional approaches are not
>>>>     covering things like causality and abstract generalization.
>>>>     Maybe he will find a new way, but he recognizes what has not
>>>>     been covered with existing ways.
>>>>
>>>>     And he is emphasizing both relationships and out of
>>>>     distribution learning, just as I have been for a long time.
>>>>     From his most recent arXiv paper a few days ago, the first two
>>>>     sentences of which sound almost exactly like what I have been
>>>>     saying for years:
>>>>
>>>>     [Submitted on 9 Jun 2022]
>>>>
>>>>
>>>>       On Neural Architecture Inductive Biases for Relational Tasks
>>>>
>>>>     Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio,
>>>>     Blake Richards, Guillaume Lajoie
>>>>
>>>>         Current deep learning approaches have shown good
>>>>         in-distribution generalization performance, but struggle
>>>>         with out-of-distribution generalization. This is especially
>>>>         true in the case of tasks involving abstract relations like
>>>>         recognizing rules in sequences, as we find in many
>>>>         intelligence tests. Recent work has explored how forcing
>>>>         relational representations to remain distinct from sensory
>>>>         representations, as it seems to be the case in the brain,
>>>>         can help artificial systems. Building on this work, we
>>>>         further explore and formalize the advantages afforded by
>>>>         'partitioned' representations of relations and sensory
>>>>         details, and how this inductive bias can help recompose
>>>>         learned relational structure in newly encountered settings.
>>>>         We introduce a simple architecture based on similarity
>>>>         scores which we name Compositional Relational Network
>>>>         (CoRelNet). Using this model, we investigate a series of
>>>>         inductive biases that ensure abstract relations are learned
>>>>         and represented distinctly from sensory data, and explore
>>>>         their effects on out-of-distribution generalization for a
>>>>         series of relational psychophysics tasks. We find that
>>>>         simple architectural choices can outperform existing models
>>>>         in out-of-distribution generalization. Together, these
>>>>         results show that partitioning relational representations
>>>>         from other information streams may be a simple way to
>>>>         augment existing network architectures' robustness when
>>>>         performing out-of-distribution relational computations.
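>>>>
>>>>         For concreteness, here is a minimal hypothetical sketch of the
>>>>         "partitioned" idea in that abstract: a classifier that reasons
>>>>         only over pairwise similarity scores between object encodings,
>>>>         never over the sensory features themselves. The layer sizes,
>>>>         class name, and shapes below are illustrative assumptions, not
>>>>         the authors' CoRelNet implementation.
>>>>
>>>>         import torch
>>>>         import torch.nn as nn
>>>>         import torch.nn.functional as F
>>>>
>>>>         class SimilarityRelationNet(nn.Module):
>>>>             """Keep the relational stream (similarity matrix) separate
>>>>             from the sensory stream (object encodings)."""
>>>>             def __init__(self, in_dim=16, embed_dim=32, n_objects=5,
>>>>                          n_classes=2):
>>>>                 super().__init__()
>>>>                 self.encoder = nn.Linear(in_dim, embed_dim)   # sensory
>>>>                 self.reasoner = nn.Sequential(                # relational
>>>>                     nn.Linear(n_objects * n_objects, 64),
>>>>                     nn.ReLU(),
>>>>                     nn.Linear(64, n_classes),
>>>>                 )
>>>>
>>>>             def forward(self, objects):   # (batch, n_objects, in_dim)
>>>>                 z = F.normalize(self.encoder(objects), dim=-1)
>>>>                 sim = torch.bmm(z, z.transpose(1, 2))  # pairwise sims
>>>>                 return self.reasoner(sim.flatten(1))   # sims only
>>>>
>>>>         net = SimilarityRelationNet()
>>>>         logits = net(torch.randn(8, 5, 16))            # shape (8, 2)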
>>>>
>>>>
>>>>         Kind of scandalous that he doesn’t ever cite me for having
>>>>         framed that argument, even though I have repeatedly called his
>>>>         attention to that oversight, but that’s another story for
>>>>         another day, in which I elaborate on some of Schmidhuber’s
>>>>         observations on history.
>>>>
>>>>
>>>>     Gary
>>>>
>>>>>     On Jun 13, 2022, at 06:44, jose at rubic.rutgers.edu
>>>>>     <mailto:jose at rubic.rutgers.edu> wrote:
>>>>>
>>>>>
>>>>>     No, Yoshua has *not* joined you -- explicit processes, memory,
>>>>>     and problem solving are not symbolic per se.
>>>>>
>>>>>     These original distinctions in memory and learning were from
>>>>>     Endel Tulving, and of course there are brain structures that
>>>>>     support the distinctions.
>>>>>
>>>>>     And Yoshua is clear about that in the discussions I had with him
>>>>>     at AIHUB.
>>>>>
>>>>>     He's definitely not looking to create some hybrid approach.
>>>>>
>>>>>     Steve
>>>>>
>>>>>     On 6/13/22 8:36 AM, Gary Marcus wrote:
>>>>>>     Cute phrase, but what does “symbolist quagmire” mean? Once
>>>>>>     upon a time, Dave and Geoff were both pioneers in trying to
>>>>>>     get symbols and neural nets to live in harmony. Don’t we
>>>>>>     still need to do that, and if not, why not?
>>>>>>
>>>>>>     Surely, at the very least:
>>>>>>     - we want our AI to be able to take advantage of the (large)
>>>>>>     fraction of world knowledge that is represented in symbolic
>>>>>>     form (language, including unstructured text, logic, math,
>>>>>>     programming, etc.)
>>>>>>     - any model of the human mind ought to be able to explain how
>>>>>>     humans can so effectively communicate via the symbols of
>>>>>>     language, and how trained humans can deal with (to the extent
>>>>>>     that they can) logic, math, programming, etc.
>>>>>>
>>>>>>     Folks like Bengio have joined me in seeing the need for
>>>>>>     “System II” processes. That’s a bit of a rough approximation,
>>>>>>     but I don’t see how we get to either AI or satisfactory
>>>>>>     models of the mind without confronting the “quagmire”.
>>>>>>
>>>>>>
>>>>>>>     On Jun 13, 2022, at 00:31, Ali Minai <minaiaa at gmail.com>
>>>>>>>     <mailto:minaiaa at gmail.com> wrote:
>>>>>>>
>>>>>>>     ".... symbolic representations are a fiction our
>>>>>>>     non-symbolic brains cooked up because the properties of
>>>>>>>     symbol systems (systematicity, compositionality, etc.) are
>>>>>>>     tremendously useful.  So our brains pretend to be rule-based
>>>>>>>     symbolic systems when it suits them, because it's adaptive
>>>>>>>     to do so."
>>>>>>>
>>>>>>>     Spot on, Dave! We should not wade back into the symbolist
>>>>>>>     quagmire, but do need to figure out how apparently symbolic
>>>>>>>     processing can be done by neural systems. Models like those
>>>>>>>     of Eliasmith and Smolensky provide some insight, but still
>>>>>>>     seem far from both biological plausibility and real-world scale.
>>>>>>>
>>>>>>>     Best
>>>>>>>
>>>>>>>     Ali
>>>>>>>
>>>>>>>
>>>>>>>     *Ali A. Minai, Ph.D.*
>>>>>>>     Professor and Graduate Program Director
>>>>>>>     Complex Adaptive Systems Lab
>>>>>>>     Department of Electrical Engineering & Computer Science
>>>>>>>     828 Rhodes Hall
>>>>>>>     University of Cincinnati
>>>>>>>     Cincinnati, OH 45221-0030
>>>>>>>
>>>>>>>     Phone: (513) 556-4783
>>>>>>>     Fax: (513) 556-7326
>>>>>>>     Email: Ali.Minai at uc.edu <mailto:Ali.Minai at uc.edu>
>>>>>>>     minaiaa at gmail.com <mailto:minaiaa at gmail.com>
>>>>>>>
>>>>>>>     WWW: https://eecs.ceas.uc.edu/~aminai/
>>>>>>>
>>>>>>>
>>>>>>>     On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky
>>>>>>>     <dst at cs.cmu.edu <mailto:dst at cs.cmu.edu>> wrote:
>>>>>>>
>>>>>>>         The timing of this discussion dovetails nicely with the
>>>>>>>         news story
>>>>>>>         about Google engineer Blake Lemoine being put on
>>>>>>>         administrative leave
>>>>>>>         for insisting that Google's LaMDA chatbot was sentient
>>>>>>>         and reportedly
>>>>>>>         trying to hire a lawyer to protect its rights.  The
>>>>>>>         Washington Post
>>>>>>>         story is reproduced here:
>>>>>>>
>>>>>>>         https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>>>>>>>
>>>>>>>         Google vice president Blaise Aguera y Arcas, who
>>>>>>>         dismissed Lemoine's
>>>>>>>         claims, is featured in a recent Economist article
>>>>>>>         showing off LaMDA's
>>>>>>>         capabilities and making noises about getting closer to
>>>>>>>         "consciousness":
>>>>>>>
>>>>>>>         https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>>>>>>>
>>>>>>>         My personal take on the current symbolist controversy is
>>>>>>>         that symbolic
>>>>>>>         representations are a fiction our non-symbolic brains
>>>>>>>         cooked up because
>>>>>>>         the properties of symbol systems (systematicity,
>>>>>>>         compositionality, etc.)
>>>>>>>         are tremendously useful.  So our brains pretend to be
>>>>>>>         rule-based symbolic
>>>>>>>         systems when it suits them, because it's adaptive to do
>>>>>>>         so.  (And when
>>>>>>>         it doesn't suit them, they draw on "intuition" or
>>>>>>>         "imagery" or some
>>>>>>>         other mechanisms we can't verbalize because they're not
>>>>>>>         symbolic.)  They
>>>>>>>         are remarkably good at this pretense.
>>>>>>>
>>>>>>>         The current crop of deep neural networks are not as good
>>>>>>>         at pretending
>>>>>>>         to be symbolic reasoners, but they're making progress. 
>>>>>>>         In the last 30
>>>>>>>         years we've gone from networks of fully-connected layers
>>>>>>>         that make no
>>>>>>>         architectural assumptions ("connectoplasm") to complex
>>>>>>>         architectures
>>>>>>>         like LSTMs and transformers that are designed for
>>>>>>>         approximating symbolic
>>>>>>>         behavior.  But the brain still has a lot of symbol
>>>>>>>         simulation tricks we
>>>>>>>         haven't discovered yet.
>>>>>>>
>>>>>>>         Slashdot reader ZiggyZiggyZig had an interesting
>>>>>>>         argument against LaMDA
>>>>>>>         being conscious.  If it just waits for its next input
>>>>>>>         and responds when
>>>>>>>         it receives it, then it has no autonomous existence: "it
>>>>>>>         doesn't have an
>>>>>>>         inner monologue that constantly runs and comments
>>>>>>>         everything happening
>>>>>>>         around it as well as its own thoughts, like we do."
>>>>>>>
>>>>>>>         What would happen if we built that in?  Maybe LaMDA
>>>>>>>         would rapidly
>>>>>>>         descend into gibberish, like some other text generation
>>>>>>>         models do when
>>>>>>>         allowed to ramble on for too long. But as Steve Hanson
>>>>>>>         points out,
>>>>>>>         these are still the early days.
>>>>>>>
>>>>>>>         -- Dave Touretzky
>>>>>>>

