Connectionists: The symbolist quagmire

Ali Minai minaiaa at gmail.com
Tue Jun 14 01:57:24 EDT 2022


Asim

This is really interesting work, but learning concept representations from
sensory data is not enough. The representations must be hierarchical,
multi-modal, compositional, and integrated with the motor system, the
limbic system, etc., in a way that facilitates an infinity of useful
behaviors. This is perhaps a good step in that direction, but only a small
one. Its main immediate utility is in applying deep learning networks to
tasks whose results can be explained to users and customers. While very
useful, that is not a central
issue in AI, which focuses on intelligent behavior. All else is in service
to that - explainable or not. However, I do think that the kind of
hierarchical modularity implied in these representations is probably part
of the brain's repertoire, and that is important.

Best
Ali

*Ali A. Minai, Ph.D.*
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minaiaa at gmail.com

WWW: https://eecs.ceas.uc.edu/~aminai/


On Mon, Jun 13, 2022 at 7:48 PM Asim Roy <ASIM.ROY at asu.edu> wrote:

> There’s a lot of misconceptions about (1) whether the brain uses symbols
> or not, and (2) whether we need symbol processing in our systems or not.
>
>
>
>    1. Multisensory neurons are widely used in the brain. Leila Reddy and
>    Simon Thorpe are not known as zealous advocates of the claim that
>    symbols exist in the brain, but their characterization of concept cells
>    (which are multisensory neurons,
>    https://www.sciencedirect.com/science/article/pii/S0896627314009027#!)
>    states that concept cells encode the “meaning of a given stimulus in a
>    manner that is invariant to different representations of that stimulus.”
>    They associate concept cells with the properties of “Selectivity or
>    specificity,” “complex concept,” “meaning,” “multimodal invariance” and
>    “abstractness.” That pretty much says that concept cells represent
>    symbols. And there are plenty of concept cells in the medial temporal
>    lobe (MTL). The brain is a highly abstract system based on symbols.
>    There is no fiction there.
>
>
>
>    2. There is ongoing work in the deep learning area that is trying to
>    associate a single neuron or a group of neurons with a single concept.
>    Bengio’s work is definitely in that direction:
>
>
>
> “Finally, our recent work on learning high-level 'system-2'-like
> representations and their causal dependencies seeks to learn
> 'interpretable' entities (with natural language) that will emerge at the
> highest levels of representation (not clear how distributed or local these
> will be, but much more local than in a traditional MLP). This is a
> different form of disentangling than adopted in much of the recent work on
> unsupervised representation learning but shares the idea that the "right"
> abstract concept (related to those we can name verbally) will be
> "separated" (disentangled) from each other (which suggests that
> neuroscientists will have an easier time spotting them in neural
> activity).”
>
> Hinton’s GLOM, which extends the idea of capsules to do part-whole
> hierarchies for scene analysis using the parse-tree concept, is also about
> associating a concept with a set of neurons. While Bengio and Hinton are
> trying to construct these “concept cells” within the network (the CNN), we
> found that this can be done much more easily, and in a straightforward
> way, outside the network. We can easily decode a CNN to find the encodings
> for legs, ears and so on for cats, dogs and whatnot. What the DARPA
> Explainable AI program was looking for was a symbol-emitting model of the
> form shown below, and we can easily get to that symbolic model by decoding
> a CNN. A side benefit of such a symbolic model is protection against
> adversarial attacks: a school bus will never turn into an ostrich with the
> tweak of a few pixels if you can verify the parts of objects. To be an
> ostrich, you need to have the long legs, the long neck and the small head.
> A school bus lacks those parts. The DARPA-conceptualized symbolic model
> provides that protection.
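>
> As a rough sketch of that part-verification idea (illustrative only: the
> probe weights, part names and thresholds below are hypothetical stand-ins,
> not our actual decoding procedure), one can imagine linear probes fit to a
> CNN's penultimate features, with a class label accepted only when the
> parts that class requires are actually detected:
>
>     import numpy as np
>
>     # Hypothetical linear probes, one per named part, assumed to have been
>     # fit beforehand on the CNN's penultimate feature vector (dim 512 here).
>     part_probes = {
>         "long_legs": np.random.randn(512),
>         "long_neck": np.random.randn(512),
>         "small_head": np.random.randn(512),
>     }
>
>     # Parts a class must exhibit before its label is accepted.
>     required_parts = {"ostrich": ["long_legs", "long_neck", "small_head"]}
>
>     def detect_parts(features, probes, threshold=0.0):
>         """Return the set of parts whose probe fires on this feature vector."""
>         return {name for name, w in probes.items() if float(w @ features) > threshold}
>
>     def verified_label(cnn_label, features):
>         """Accept the CNN's label only if its expected parts are present."""
>         needed = required_parts.get(cnn_label, [])
>         present = detect_parts(features, part_probes)
>         return cnn_label if all(p in present for p in needed) else "rejected"
>
>     # An adversarially perturbed school bus labeled "ostrich" is rejected
>     # unless its features also activate the ostrich parts.
>     features = np.random.randn(512)   # stand-in for real penultimate features
>     print(verified_label("ostrich", features))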
>
>
>
> In general, there is convergence between connectionist and symbolic
> systems. We need to get past the old wars. It’s over.
>
>
>
> All the best,
>
> Asim Roy
>
> Professor, Information Systems
>
> Arizona State University
>
> Lifeboat Foundation Bios: Professor Asim Roy
> <https://lifeboat.com/ex/bios.asim.roy>
>
> Asim Roy | iSearch (asu.edu)
> <https://isearch.asu.edu/profile/9973>
>
>
>
> [image: Timeline Description automatically generated]
>
>
>
>
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> *On
> Behalf Of *Gary Marcus
> *Sent:* Monday, June 13, 2022 5:36 AM
> *To:* Ali Minai <minaiaa at gmail.com>
> *Cc:* Connectionists List <connectionists at cs.cmu.edu>
> *Subject:* Connectionists: The symbolist quagmire
>
>
>
> Cute phrase, but what does “symbolist quagmire” mean? Once upon a time,
> Dave and Geoff were both pioneers in trying to get symbols and neural
> nets to live in harmony. Don’t we still need to do that, and if not, why not?
>
>
>
> Surely, at the very least
>
> - we want our AI to be able to take advantage of the (large) fraction of
> world knowledge that is represented in symbolic form (language, including
> unstructured text, logic, math, programming, etc.)
>
> - any model of the human mind ought to be able to explain how humans can
> so effectively communicate via the symbols of language and how trained
> humans can deal with (to the extent that they can) logic, math,
> programming, etc.
>
>
>
> Folks like Bengio have joined me in seeing the need for “System II”
> processes. That’s a bit of a rough approximation, but I don’t see how we
> get to either AI or satisfactory models of the mind without confronting
> the “quagmire.”
>
>
>
>
>
> On Jun 13, 2022, at 00:31, Ali Minai <minaiaa at gmail.com> wrote:
>
> 
>
> ".... symbolic representations are a fiction our non-symbolic brains
> cooked up because the properties of symbol systems (systematicity,
> compositionality, etc.) are tremendously useful.  So our brains pretend to
> be rule-based symbolic systems when it suits them, because it's adaptive to
> do so."
>
>
>
> Spot on, Dave! We should not wade back into the symbolist quagmire, but do
> need to figure out how apparently symbolic processing can be done by neural
> systems. Models like those of Eliasmith and Smolensky provide some insight,
> but still seem far from both biological plausibility and real-world scale.
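>
> For anyone who has not looked at those models, here is a minimal sketch of
> one vector-symbolic scheme they relate to: binding by circular convolution,
> as in holographic reduced representations, which Eliasmith's semantic
> pointers build on (Smolensky's tensor-product binding is a close relative).
> The dimensionality and item names are illustrative only; the real models
> add learning, cleanup memories and, in Eliasmith's case, spiking dynamics:
>
>     import numpy as np
>
>     d = 1024
>     rng = np.random.default_rng(0)
>
>     def vec():
>         """Random unit vector standing in for a learned distributed code."""
>         v = rng.normal(size=d)
>         return v / np.linalg.norm(v)
>
>     def bind(role, filler):
>         """Circular convolution binds a role vector to a filler vector."""
>         return np.fft.irfft(np.fft.rfft(role) * np.fft.rfft(filler), n=d)
>
>     def unbind(trace, role):
>         """Circular correlation approximately recovers the filler for a role."""
>         return np.fft.irfft(np.conj(np.fft.rfft(role)) * np.fft.rfft(trace), n=d)
>
>     AGENT, ACTION = vec(), vec()   # roles
>     dog, chase = vec(), vec()      # fillers
>
>     # A structured "proposition" is just a superposition of bound pairs.
>     sentence = bind(AGENT, dog) + bind(ACTION, chase)
>
>     # Querying the trace with a role gives back a noisy copy of its filler.
>     guess = unbind(sentence, AGENT)
>     print("similarity to dog:  ", float(guess @ dog))    # high
>     print("similarity to chase:", float(guess @ chase))  # near zero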
>
>
>
> Best
>
>
>
> Ali
>
>
>
>
>
> *Ali A. Minai, Ph.D.*
> Professor and Graduate Program Director
> Complex Adaptive Systems Lab
> Department of Electrical Engineering & Computer Science
>
> 828 Rhodes Hall
>
> University of Cincinnati
> Cincinnati, OH 45221-0030
>
>
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: Ali.Minai at uc.edu
>           minaiaa at gmail.com
>
> WWW: https://eecs.ceas.uc.edu/~aminai/
>
>
>
>
>
> On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky <dst at cs.cmu.edu> wrote:
>
> The timing of this discussion dovetails nicely with the news story
> about Google engineer Blake Lemoine being put on administrative leave
> for insisting that Google's LaMDA chatbot was sentient and reportedly
> trying to hire a lawyer to protect its rights.  The Washington Post
> story is reproduced here:
>
>
> https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1
>
> Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's
> claims, is featured in a recent Economist article showing off LaMDA's
> capabilities and making noises about getting closer to "consciousness":
>
>
> https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
>
> My personal take on the current symbolist controversy is that symbolic
> representations are a fiction our non-symbolic brains cooked up because
> the properties of symbol systems (systematicity, compositionality, etc.)
> are tremendously useful.  So our brains pretend to be rule-based symbolic
> systems when it suits them, because it's adaptive to do so.  (And when
> it doesn't suit them, they draw on "intuition" or "imagery" or some
> other mechanisms we can't verbalize because they're not symbolic.)  They
> are remarkably good at this pretense.
>
> The current crop of deep neural networks are not as good at pretending
> to be symbolic reasoners, but they're making progress.  In the last 30
> years we've gone from networks of fully-connected layers that make no
> architectural assumptions ("connectoplasm") to complex architectures
> like LSTMs and transformers that are designed for approximating symbolic
> behavior.  But the brain still has a lot of symbol simulation tricks we
> haven't discovered yet.
>
> Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA
> being conscious.  If it just waits for its next input and responds when
> it receives it, then it has no autonomous existence: "it doesn't have an
> inner monologue that constantly runs and comments everything happening
> around it as well as its own thoughts, like we do."
>
> What would happen if we built that in?  Maybe LaMDA would rapidly
> descend into gibberish, like some other text generation models do when
> allowed to ramble on for too long.  But as Steve Hanson points out,
> these are still the early days.
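>
> Purely as a thought experiment, "building that in" might look like an
> outer loop that keeps prompting a text generator with its own recent
> thoughts plus whatever input happens to arrive, rather than waiting
> passively for the next query. The generate() stub below is a placeholder
> for any language model; nothing here reflects how LaMDA is actually
> implemented:
>
>     from collections import deque
>
>     def generate(prompt):
>         """Placeholder for a text-generation model; returns a continuation."""
>         return "thinking about: " + prompt[-40:]
>
>     def inner_monologue(external_inputs, steps=10, memory_size=5):
>         """Self-prompting loop: the model keeps commenting on its own recent
>         thoughts and on any new observations, instead of only replying when
>         spoken to."""
>         inputs = deque(external_inputs)
>         monologue = deque(["(silence)"], maxlen=memory_size)
>         for _ in range(steps):
>             observation = inputs.popleft() if inputs else "(nothing new)"
>             prompt = " | ".join(monologue) + " | observed: " + observation
>             thought = generate(prompt)
>             monologue.append(thought)
>             yield thought
>
>     for thought in inner_monologue(["a user says hello"]):
>         print(thought)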
>
> -- Dave Touretzky
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 259567 bytes
Desc: not available
URL: <http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20220614/7767359c/attachment.png>

