Connectionists: The symbolist quagmire

Ali Minai minaiaa at gmail.com
Wed Jun 15 00:11:16 EDT 2022


Hi Asim

That's great. Each blink is a data point, but what does the brain do with
it? Calculate gradients across layers and use minibatches? The data point
is gone instantly, never to be iterated over, except for any part that the
hippocampus may have grabbed as an episodic memory and can make available
for later replay. We need to understand how this works and how it can be
instantiated in learning algorithms. To be fair, in the special case of
(early) vision, I think we have a pretty reasonable idea. It's more
interesting to think about why we can figure out how to do fairly complicated
things across diverse modalities after watching someone do them once - or
never. That integrated understanding of the world and the ability to
exploit it opportunistically and pervasively is the thing that makes an
animal intelligent. Are we heading that way, or are we focusing too much on
a few very specific problems? I really think that the best AI work in the
long term will come from those who work with robots that experience the
world in an integrated way. Maybe multi-modal learning will get us part of
the way there, but not if it needs so much training.

Anyway, I know that many people are already thinking about these things and
trying to address them, so let's see where things go. Thanks for the
stimulating discussion.

Best
Ali



*Ali A. Minai, Ph.D.*
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minaiaa at gmail.com

WWW: https://eecs.ceas.uc.edu/~aminai/


On Tue, Jun 14, 2022 at 7:10 PM Asim Roy <ASIM.ROY at asu.edu> wrote:

> Hi Ali,
>
>
>
> Of course, the development phase is mostly unsupervised, and I know there is
> ongoing work in that area that I don’t keep up with.
>
>
>
> On the large amount of data required to train the deep learning models:
>
>
>
> I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at
> Stanford. Bernie and I became quite close after I attended his class
> that quarter. I used to walk back with Bernie after his class. One
> day I asked him: where does all this data to train the brain come from? His
> reply was that every blink of the eye generates a data point.
>
>
>
> Best,
>
> Asim
>
>
>
> *From:* Ali Minai <minaiaa at gmail.com>
> *Sent:* Tuesday, June 14, 2022 3:43 PM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* Connectionists List <connectionists at cs.cmu.edu>; Gary Marcus <
> gary.marcus at nyu.edu>; Geoffrey Hinton <geoffrey.hinton at gmail.com>; Yoshua
> Bengio <yoshua.bengio at mila.quebec>
> *Subject:* Re: Connectionists: The symbolist quagmire
>
>
>
> Hi Asim
>
>
>
> I have no issue with neurons or groups of neurons tuned to concepts.
> Clearly, abstract concepts and the equivalent of symbolic computation are
> represented somehow. Amodal representations have also been known for a long
> time. As someone who has worked on the hippocampus and models of thought
> for a long time, I don't need much convincing on that. The issue is how a
> self-organizing complex system like the brain comes by these
> representations. I think it does so by building on the substrate of
> inductive biases - priors - configured by evolution and a developmental
> learning process. We just try to cram everything into neural learning,
> which is a main cause of the "problems" associated with deep learning.
> They're problems only if you're trying to attain general intelligence of
> the natural kind, perhaps not so much for applications.
>
>
>
> Of course you have to start simple, but, so far, I have not seen any
> simple model truly scale up to the real world without: a) Major tinkering
> with its original principles; b) Lots of data and training; and c) Still
> being focused on a narrow task. When this approach shows us how to build an
> AI that can walk, chew gum, do math, and understand a poem using a single
> brain, then we'll have something like real human-level AI. Heck, if it can
> just spin a web in an appropriate place, lie in wait for prey, and make
> sure it eats its mate only after sex, I would even consider that
> intelligent :-).
>
>
>
> Here's the thing: Teaching a sufficiently complicated neural system a very
> complex task with lots of data and supervised training is an interesting
> engineering problem but doesn't get us to intelligence. Yes, a network can
> learn grammar with supervised learning, but none of us learn it that way.
> Nor do the other animals that have simpler grammars embedded in their
> communication. My view is that if it is not autonomously self-organizing at
> a fundamental level, it is not intelligence but just a simulation of
> intelligence. Of course, we humans do use supervised learning, but it is a
> "late stage" mechanism. It works only when the system has first
> self-organized autonomously to develop the capabilities that can act as a
> substrate for supervised learning. Learning to play the piano, learning to
> do math, learning calligraphy - all these have an important supervised
> component, but they work only after perceptual, sensorimotor, and cognitive
> functions have been learned through self-organization, imitation, rapid
> reinforcement, internal rehearsal, mismatch-based learning, etc. I think
> methods like SOFM, ART, and RBMs are closer to what we need than behemoths
> trained with gradient descent. We just have to find more efficient versions
> of them. And in this, I always return to Dobzhansky's maxim: Nothing in
> biology makes sense except in the light of evolution. Intelligence is a
> biological phenomenon; we'll understand it by paying attention to how it
> evolved (not by trying to replicate evolution, of course!). And the same
> goes for development. I think we understand natural phenomena by studying
> Nature respectfully, not by trying to out-think it based on our still very
> limited knowledge - not that it keeps any of us, myself included, from
> doing exactly that! I am not as familiar with your work as I should be, but
> I admire the fact that you're approaching things with principles rather
> than building larger and larger Rube Goldberg contraptions tuned to narrow
> tasks. I do think, however, that if we ever get to truly mammalian-level
> AI, it will not be anywhere close to fully explainable. Nor will it be a
> slave only to our purposes.
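>
> Just to make that contrast concrete, here is a minimal sketch of the kind of
> SOFM-style learning I mean - a Kohonen map with purely local, unsupervised
> updates, no labels, no backpropagated error, and no minibatch iteration. The
> grid size and the learning-rate/neighborhood schedules below are arbitrary
> illustrative choices, not any particular published model:
>
>   import numpy as np
>
>   def train_som(data, grid_shape=(10, 10), n_iters=5000,
>                 lr0=0.5, sigma0=3.0, seed=0):
>       # Minimal Kohonen self-organizing feature map (illustration only).
>       rng = np.random.default_rng(seed)
>       n_rows, n_cols = grid_shape
>       dim = data.shape[1]
>       # One weight vector per grid unit, randomly initialized.
>       weights = rng.random((n_rows, n_cols, dim))
>       # Grid coordinates, used for neighborhood distances on the map.
>       coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
>                                     indexing="ij"), axis=-1)
>       for t in range(n_iters):
>           x = data[rng.integers(len(data))]      # one unlabeled sample
>           # Best-matching unit: the unit whose weights are closest to x.
>           bmu = np.unravel_index(
>               np.argmin(np.linalg.norm(weights - x, axis=-1)), grid_shape)
>           # Decay the learning rate and neighborhood width over time.
>           frac = t / n_iters
>           lr = lr0 * (1.0 - frac)
>           sigma = sigma0 * (1.0 - frac) + 1e-3
>           # Gaussian neighborhood around the BMU on the map grid.
>           d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
>           h = np.exp(-d2 / (2.0 * sigma ** 2))[..., None]
>           # Local, unsupervised update: pull nearby units toward the input.
>           weights += lr * h * (x - weights)
>       return weights
>
>   # e.g., the map organizes itself around unlabeled 3-D samples:
>   # som = train_som(np.random.rand(1000, 3))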
>
>
>
> Cheers
>
> Ali
>
>
>
>
>
> *Ali A. Minai, Ph.D.*
> Professor and Graduate Program Director
> Complex Adaptive Systems Lab
> Department of Electrical Engineering & Computer Science
>
> 828 Rhodes Hall
>
> University of Cincinnati
> Cincinnati, OH 45221-0030
>
>
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: Ali.Minai at uc.edu
>           minaiaa at gmail.com
>
> WWW: https://eecs.ceas.uc.edu/~aminai/
>
>
>
>
>
> On Tue, Jun 14, 2022 at 5:17 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Hi Ali,
>
>
>
>    1. It’s important to understand that there is plenty of
>    neurophysiological evidence for abstractions at the single cell level in
>    the brain. Thus, symbolic representation in the brain is not a fiction any
>    more. We are past that argument.
>    2. You always start with simple systems before you do the complex
>    ones. Having said that, we do teach our systems composition – composition
>    of objects from parts in images. That is almost like teaching grammar or
>    solving a puzzle. I don’t get into language models, but I think grammar and
>    composition can be easily taught, like you teach a kid.
>    3. Once you know how to build these simple models and extract symbols,
>    you can easily scale up and build hierarchical, multi-modal, compositional
>    models. Thus, in the case of images, after having learnt that cats, dogs
>    and similar animals have certain common features (eyes, legs, ears), the
>    system can easily generalize the concept to four-legged animals. We
>    haven’t done it,
>    but that could be the next level of learning.
>
>
>
> In general, once you extract symbols from these deep learning models, you
> are at the symbolic level and you have a pathway to more complex,
> hierarchical models and perhaps also to AGI.
>
>
>
> Best,
>
> Asim
>
>
>
> Asim Roy
>
> Professor, Information Systems
>
> Arizona State University
>
> Lifeboat Foundation Bios: Professor Asim Roy
> <https://lifeboat.com/ex/bios.asim.roy>
>
> Asim Roy | iSearch (asu.edu)
> <https://isearch.asu.edu/profile/9973>
>
>
>
>
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> *On
> Behalf Of *Ali Minai
> *Sent:* Monday, June 13, 2022 10:57 PM
> *To:* Connectionists List <connectionists at cs.cmu.edu>
> *Subject:* Re: Connectionists: The symbolist quagmire
>
>
>
> Asim
>
>
>
> This is really interesting work, but learning concept representations from
> sensory data is not enough. They must be hierarchical, multi-modal,
> compositional, and integrated with the motor system, the limbic system,
> etc., in a way that facilitates an infinity of useful behaviors. This is
> perhaps a good step in that direction, but only a small one. Its main
> immediate utility is in using deep learning networks in tasks that can be
> explained to users and customers. While very useful, that is not a central
> issue in AI, which focuses on intelligent behavior. All else is in service
> to that - explainable or not. However, I do think that the kind of
> hierarchical modularity implied in these representations is probably part
> of the brain's repertoire, and that is important.
>
>
>
> Best
>
> Ali
>
>
>
> *Ali A. Minai, Ph.D.*
> Professor and Graduate Program Director
> Complex Adaptive Systems Lab
> Department of Electrical Engineering & Computer Science
>
> 828 Rhodes Hall
>
> University of Cincinnati
> Cincinnati, OH 45221-0030
>
>
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: Ali.Minai at uc.edu
>           minaiaa at gmail.com
>
> WWW: https://eecs.ceas.uc.edu/~aminai/
>
>
>
>
>
> On Mon, Jun 13, 2022 at 7:48 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> There are a lot of misconceptions about (1) whether the brain uses symbols
> or not, and (2) whether we need symbol processing in our systems or not.
>
>
>
>    1. Multisensory neurons are widely used in the brain. Leila Reddy and
>    Simon Thorpe are not known to be wildly crazy about arguing that symbols
>    exist in the brain, but their characterizations of concept cells (which
>    are multisensory neurons) (
>    https://www.sciencedirect.com/science/article/pii/S0896627314009027)
>    state that concept cells have “*meaning of a given stimulus in a
>    manner that is invariant to different representations of that stimulus*.”
>    They associate concept cells with the properties of “*Selectivity or
>    specificity*,” “*complex concept*,” “*meaning*,” “*multimodal
>    invariance*” and “*abstractness*.” That pretty much says that concept
>    cells represent symbols. And there are plenty of concept cells in the
>    medial temporal lobe (MTL). The brain is a highly abstract system based on
>    symbols. There is no fiction there.
>
>
>
>    1. There is ongoing work in the deep learning area that is trying to
>    associate a single neuron or a group of neurons with a single concept.
>    Bengio’s work is definitely in that direction:
>
>
>
> “*Finally, our recent work on learning high-level 'system-2'-like
> representations and their causal dependencies seeks to learn
> 'interpretable' entities (with natural language) that will emerge at the
> highest levels of representation (not clear how distributed or local these
> will be, but much more local than in a traditional MLP). This is a
> different form of disentangling than adopted in much of the recent work on
> unsupervised representation learning but shares the idea that the "right"
> abstract concept (related to those we can name verbally) will be
> "separated" (disentangled) from each other (which suggests that
> neuroscientists will have an easier time spotting them in neural
> activity).”*
>
> Hinton’s GLOM, which extends the idea of capsules to do part-whole
> hierarchies for scene analysis using the parse tree concept, is also about
> associating a concept with a set of neurons. While Bengio and Hinton are
> trying to construct these “concept cells” within the network (the CNN), we
> found that this can be done much more easily and in a straightforward way
> outside the network. We can easily decode a CNN to find the encodings for
> legs, ears and so on for cats and dogs and whatnot. What the DARPA
> Explainable AI program was looking for was a symbol-emitting model of the
> form shown below. And we can easily get to that symbolic model by decoding
> a CNN. In addition, the side benefit of such a symbolic model is protection
> against adversarial attacks. So a school bus will never turn into an
> ostrich with tweaks to a few pixels if you can verify the parts of objects.
> To be an ostrich, you need to have those long legs, the long neck and the
> small head. A school bus lacks those parts. The DARPA-conceptualized
> symbolic model provides that protection.
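>
> To make the part-verification idea concrete, here is a schematic sketch in
> Python (not the actual implementation; the feature dimension, probe form,
> part names and thresholds are placeholders invented for illustration). The
> CNN's predicted label is accepted only if enough of the parts that define
> that class are actually detected in the network's features:
>
>   import numpy as np
>
>   rng = np.random.default_rng(0)
>   FEATURE_DIM = 512  # assumed size of the CNN's penultimate feature vector
>
>   def make_linear_probe(dim=FEATURE_DIM):
>       # Stand-in for a part detector decoded from the CNN's encodings.
>       # In practice each probe would be fit to activations of images known
>       # to contain that part; here the weights are random placeholders.
>       w = rng.normal(size=dim)
>       return lambda f: 1.0 / (1.0 + np.exp(-(f @ w)))  # sigmoid part score
>
>   # Hypothetical part vocabulary per class (names are illustrative).
>   PART_DETECTORS = {
>       "ostrich": {p: make_linear_probe()
>                   for p in ("long_legs", "long_neck", "small_head")},
>       "school_bus": {p: make_linear_probe()
>                      for p in ("wheels", "windows", "stop_sign_arm")},
>   }
>
>   def verify_prediction(features, predicted_label, threshold=0.5,
>                         min_parts=2):
>       # Accept the CNN's label only if enough of its expected parts are
>       # detected in the features; otherwise flag a possible adversarial input.
>       probes = PART_DETECTORS[predicted_label]
>       found = [name for name, probe in probes.items()
>                if probe(features) >= threshold]
>       return len(found) >= min_parts, found
>
>   # A school-bus image nudged into an "ostrich" prediction would fail this
>   # check, because the ostrich parts are not present in its features:
>   # ok, parts = verify_prediction(cnn_features, "ostrich")
>
> The substantive step is decoding the real part detectors from the trained
> network; the sketch only shows how the verification gate sits on top of it.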
>
>
>
> In general, there is convergence between connectionist and symbolic
> systems. We need to get past the old wars. It’s over.
>
>
>
> All the best,
>
> Asim Roy
>
> Professor, Information Systems
>
> Arizona State University
>
> Lifeboat Foundation Bios: Professor Asim Roy
> <https://lifeboat.com/ex/bios.asim.roy>
>
> Asim Roy | iSearch (asu.edu)
> <https://isearch.asu.edu/profile/9973>
>
>
>
> [image: DARPA Explainable AI symbolic model diagram]
>
>
>
>
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> *On
> Behalf Of *Gary Marcus
> *Sent:* Monday, June 13, 2022 5:36 AM
> *To:* Ali Minai <minaiaa at gmail.com>
> *Cc:* Connectionists List <connectionists at cs.cmu.edu>
> *Subject:* Connectionists: The symbolist quagmire
>
>
>
> Cute phrase, but what does “symbolist quagmire” mean? Once upon a time,
> Dave and Geoff were both pioneers in trying to get symbols and neural
> nets to live in harmony. Don’t we still need to do that, and if not, why not?
>
>
>
> Surely, at the very least
>
> - we want our AI to be able to take advantage of the (large) fraction of
> world knowledge that is represented in symbolic form (language, including
> unstructured text, logic, math, programming, etc.)
>
> - any model of the human mind ought to be able to explain how humans can so
> effectively communicate via the symbols of language and how trained humans
> can deal with (to the extent that they can) logic, math, programming, etc.
>
>
>
> Folks like Bengio have joined me in seeing the need for “System II”
> processes. That’s a bit of a rough approximation, but I don’t see how we
> get to either AI or satisfactory models of the mind without confronting the
> “quagmire”.
>
>
>
>
>
> On Jun 13, 2022, at 00:31, Ali Minai <minaiaa at gmail.com> wrote:
>
>
>
> ".... symbolic representations are a fiction our non-symbolic brains
> cooked up because the properties of symbol systems (systematicity,
> compositionality, etc.) are tremendously useful.  So our brains pretend to
> be rule-based symbolic systems when it suits them, because it's adaptive to
> do so."
>
>
>
> Spot on, Dave! We should not wade back into the symbolist quagmire, but do
> need to figure out how apparently symbolic processing can be done by neural
> systems. Models like those of Eliasmith and Smolensky provide some insight,
> but still seem far from both biological plausibility and real-world scale.
>
>
>
> Best
>
>
>
> Ali
>
>
>
>
>
> *Ali A. Minai, Ph.D.*
> Professor and Graduate Program Director
> Complex Adaptive Systems Lab
> Department of Electrical Engineering & Computer Science
>
> 828 Rhodes Hall
>
> University of Cincinnati
> Cincinnati, OH 45221-0030
>
>
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: Ali.Minai at uc.edu
>           minaiaa at gmail.com
>
> WWW: https://eecs.ceas.uc.edu/~aminai/
>
>
>
>
>
> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky <dst at cs.cmu.edu> wrote:
>
> The timing of this discussion dovetails nicely with the news story
> about Google engineer Blake Lemoine being put on administrative leave
> for insisting that Google's LaMDA chatbot was sentient and reportedly
> trying to hire a lawyer to protect its rights.  The Washington Post
> story is reproduced here:
>
>
> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>
> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's
> claims, is featured in a recent Economist article showing off LaMDA's
> capabilities and making noises about getting closer to "consciousness":
>
>
> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>
> My personal take on the current symbolist controversy is that symbolic
> representations are a fiction our non-symbolic brains cooked up because
> the properties of symbol systems (systematicity, compositionality, etc.)
> are tremendously useful.  So our brains pretend to be rule-based symbolic
> systems when it suits them, because it's adaptive to do so.  (And when
> it doesn't suit them, they draw on "intuition" or "imagery" or some
> other mechanisms we can't verbalize because they're not symbolic.)  They
> are remarkably good at this pretense.
>
> The current crop of deep neural networks is not as good at pretending
> to be symbolic reasoners, but they're making progress.  In the last 30
> years we've gone from networks of fully-connected layers that make no
> architectural assumptions ("connectoplasm") to complex architectures
> like LSTMs and transformers that are designed for approximating symbolic
> behavior.  But the brain still has a lot of symbol simulation tricks we
> haven't discovered yet.
>
> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA
> being conscious.  If it just waits for its next input and responds when
> it receives it, then it has no autonomous existence: "it doesn't have an
> inner monologue that constantly runs and comments everything happening
> around it as well as its own thoughts, like we do."
>
> What would happen if we built that in?  Maybe LaMDA would rapidly
> descend into gibberish, like some other text generation models do when
> allowed to ramble on for too long.  But as Steve Hanson points out,
> these are still the early days.
>
> -- Dave Touretzky
>
>