<div dir="auto">What do you make of the fact that GPT-3 can be trained to code fairly complex examples? For instance, I read that one person described a relatively involved browser video game in plain English and Codex (a coding-optimized version of GPT-3) generated a relatively large amount of JavaScript that correctly solved the problem: the code actually runs and produces an interactive game playable in a browser.</div><div dir="auto"><br></div><div dir="auto">Although its generalization of arithmetic is apparently somewhat fuzzy, it seems to me that being able to accomplish something like this is pretty strong evidence that it is doing some level of variable binding and symbolic manipulation in some sense.</div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jun 16, 2022 at 11:42 PM Gary Marcus <<a href="mailto:gary.marcus@nyu.edu">gary.marcus@nyu.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div dir="auto"><div dir="ltr"></div><div dir="ltr">My own view is that arguments around symbols per se are not very productive, and that the more interesting questions center around what you *do* with symbols once you have them.</div><div dir="ltr"><ul><li>If you take symbols to be patterns of information that stand for other things, like ASCII encodings, or individual bits for features (e.g. On or Off for a thermostat state), then practically every computational model anywhere on the spectrum makes use of symbols. For example, the inputs and outputs (perhaps after a winner-take-all operation or somesuch) of typical neural networks are symbols in this sense, standing for things like individual words, characters, directions on a joystick, etc. 
</li><li>In <i>The Algebraic Mind</i>, where I discussed such matters, I said that the interesting difference was really in whether a given system had <i>operations over variables</i>, such as those you find in algebra or lines of computer programming code, in which there are variables, bindings, and operations (such as storage, retrieval, concatenation, addition, etc.)</li><li>Simple multilayer perceptrons with distributed representations (with some caveats) don’t implement those operations (“rules”) and so represent a <i>genuine alternative to the standard symbol-manipulation paradigm, even though they may have symbols on their inputs and outputs.</i></li><li>But I also argued that (at least with respect to modeling human cognition) this was to their detriment, because it kept them from freely generalizing many relations (universally quantified one-to-one mappings, such as the identity function, given certain caveats) as humans would. Essentially the point I was making in 2001 is what would nowadays be called distribution shift; the argument was that <i>operations over variables allowed for free generalization</i>.</li><li>Transformers are interesting; I don’t fully understand them. Chris Olah has done some interesting relevant work that I have been meaning to dive into. They do some quasi-variable-binding-like things, but still empirically have trouble generalizing arithmetic beyond training examples, as Razeghi et al. showed on arXiv earlier this year. Still, the distinction between models like multilayer perceptrons that lack operations over variables and computer programming languages that take them for granted is crisp, and I think a better start than arguing over symbols, when no serious alternative to having at least some symbols in the loop has ever been proposed.</li><li>Side note: Geoff Hinton has said here that he doesn’t like arbitrary symbols; symbols don’t have to be arbitrary, even though they often are. 
There are probably some interesting ideas to be developed around non-arbitrary symbols and how they could be of value.</li></ul><div>Gary</div></div><div dir="ltr"><br></div><div dir="ltr"><br></div><div dir="ltr"><br><blockquote type="cite">On Jun 15, 2022, at 06:48, Stephen Jose Hanson <<a href="mailto:stephen.jose.hanson@rutgers.edu" target="_blank">stephen.jose.hanson@rutgers.edu</a>> wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr">
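[Ed.: the contrast drawn above between memorized input-output mappings and operations over variables can be made concrete with a toy sketch. This is purely illustrative; the function names and the tiny "training set" are invented for this example and stand in for no particular system discussed in the thread.]

```python
# A system with an operation over a variable: the rule "return x" binds
# x to whatever input arrives, so it generalizes to values never seen.
def identity_rule(x):
    return x

# A pure lookup table "trained" on a finite sample has no variable to
# bind; outside its training distribution it simply has no answer.
training_pairs = {n: n for n in range(0, 10, 2)}  # identity on even digits only

def identity_lookup(x):
    return training_pairs.get(x)  # None for anything outside the training set

print(identity_rule(12345))    # the rule generalizes freely: 12345
print(identity_lookup(12345))  # None: input never seen in training
```

The lookup table is of course a caricature of a multilayer perceptron, but it captures the distribution-shift point: without something playing the role of a bound variable, "free generalization" of the identity relation has nowhere to come from.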
<p><font size="+1" style="color:rgb(0,0,0)"><font face="monospace" style="font-family:monospace;color:rgb(0,0,0)">Here's a slightly better version of the SYMBOL definition, from the 1980s:
<br>
</font></font></p>
<p><font size="+1" style="color:rgb(0,0,0)"><font face="monospace" style="font-family:monospace;color:rgb(0,0,0)"><br>
</font></font></p>
<p><font size="+1" style="color:rgb(0,0,0)"><font face="monospace" style="font-family:monospace;color:rgb(0,0,0)">(1) a set of arbitrary physical tokens (scratches on paper, holes on a<br>
tape, events in a digital computer, etc.) that are (2) manipulated on<br>
the basis of explicit rules that are (3) likewise physical tokens and<br>
strings of tokens. The rule-governed symbol-token manipulation is<br>
based (4) purely on the shape of the symbol tokens (not their “mean-<br>
ing”) i.e., it is purely syntactic, and consists of (5) rulefully combining<br>
and recombining symbol tokens. There are (6) primitive atomic sym-<br>
bol tokens and (7) composite symbol-token strings. The entire system<br>
and all its parts—the atomic tokens, the composite tokens, the syn-<br>
tactic manipulations (both actual and possible) and the rules—are all<br>
(8) semantically interpretable: The syntax can be systematically assigned<br>
a meaning (e.g., as standing for objects, as describing states of affairs).</font></font></p>
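[Ed.: read as a recipe, clauses (1)-(8) above can be mocked up in a few lines. This is a hypothetical toy symbol system, using unary numerals as the tokens; it is a sketch of the definition, not of any implemented model.]

```python
# (1) arbitrary atomic tokens: the stroke "|"; (7) composite strings of tokens.
# (2, 3) an explicit rule, itself expressible as a token string
#        ("x + y -> xy"), manipulates them
# (4) purely on the basis of shape, with no reference to meaning:
def apply_rule(x, y):
    return x + y          # (5) ruleful combining of symbol tokens

# (8) the whole system is semantically interpretable: a string of n
# strokes can be systematically assigned the meaning "the number n".
def interpret(s):
    return len(s)

three, four = "|||", "||||"
result = apply_rule(three, four)
print(result, "=", interpret(result))   # ||||||| = 7
```

Nothing in `apply_rule` consults `interpret`; the syntax runs blind, and the semantics is assigned afterwards, which is the heart of the definition.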
<p><font size="+1" style="color:rgb(0,0,0)"><font face="monospace" style="font-family:monospace;color:rgb(0,0,0)"><br>
</font></font></p>
<p><font size="+1" style="color:rgb(0,0,0)">A critical part of this for learning: as this definition implies, a key element in the acquisition of symbolic structure is a type of independence between the task the symbols are found in and the vocabulary they represent. Fundamental to this type of independence is the ability of the learning system to factor the generic nature (or rules) of the task out from the symbols, which are arbitrarily bound to the external referents of the task.</font></p>
<p><font size="+1" style="color:rgb(0,0,0)"><br>
</font></p>
<p><font size="+1" style="color:rgb(0,0,0)">Now it may be the case that a DL doing classification is doing Categorization.. or concept learning in the sense of human concept learning.. or maybe not.. Symbol manipulation may or may not have much to do with this ...
<br>
</font></p>
<p><font size="+1" style="color:rgb(0,0,0)"><br>
</font></p>
<p><font size="+1" style="color:rgb(0,0,0)">This is why, I believe, Bengio is focused on this kind of issue.. since there is a likely disconnect.</font></p>
<p><font size="+1" style="color:rgb(0,0,0)"><br>
</font></p>
<p><font size="+1" style="color:rgb(0,0,0)">Steve<br>
</font></p>
<p><br>
</p>
<div>On 6/15/22 6:41 AM, Velde, Frank van der (UT-BMS) wrote:<br>
</div>
<blockquote type="cite">
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
Dear all. <u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
It is indeed important to have an understanding of the term 'symbol'.<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
I believe Newell, who was a strong advocate of symbolic cognition, gave a clear description of what a symbol is in his 'Unified Theories of Cognition' (1990, pp. 72-80):
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
“The symbol token is the device in the medium that determines where to go outside the local region to obtain more structure. The process has two phases: first, the opening of
<i style="font-family:Cambria">access</i> to the distal structure that is needed; and second, the <i style="font-family:Cambria">retrieval</i> (transport) of that structure from its distal location to the local site, so it can actually affect the processing." (p. 74).<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
This description fits with the idea that symbolic cognition relies on Von Neumann-like architectures (e.g., Newell, Fodor and Pylyshyn, 1988). A symbol is then a code that can be stored in, e.g., registers and transported to other sites.
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
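[Ed.: that Von Neumann reading of Newell's two phases, access then retrieval, is essentially what a key or pointer into a store does. A minimal sketch; the store, its contents, and the function names are invented for illustration.]

```python
# Distal memory: structure stored outside the "local site" of processing.
distal_store = {
    "GRANDMOTHER": {"category": "person", "relation": "mother of a parent"},
}

def process(symbol_token):
    # Phase 1: access -- the token determines where to go, outside the
    # local region, to obtain more structure.
    structure = distal_store[symbol_token]
    # Phase 2: retrieval -- that structure is transported to the local
    # site, where it can actually affect the processing.
    return f"{symbol_token} is a {structure['category']}: {structure['relation']}"

print(process("GRANDMOTHER"))
```

On this reading, a grandmother neuron is at best the stored structure, not the symbol: what is missing is a token that can itself be copied and transported to other sites to trigger the access.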
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
Viewed in this way, a 'grandmother neuron' would not be a symbol, because it cannot be used as information that can be transported to other sites as described by Newell.
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
Symbols in the brain would require neural codes that can be stored somewhere and transported to other sites. These could perhaps be sequences of spikes or patterns of activation over sets of neurons. The questions that remain are how these codes could be stored in such a way that they can be transported, and what the underlying neural architecture for doing this would be.
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
For what it is worth, one can have compositional neural cognition (language) without relying on symbols. In fact, not using symbols generates testable predictions about brain dynamics (<span lang="NL" style="font-family:Cambria"><a href="https://urldefense.com/v3/__http://arxiv.org/abs/2206.01725__;!!BhJSzQqDqA!QZolBNdokoELhJbqAe7lXdrC7pP2e8q_4B4Q2U2V9_IkXjV-5UiqqelgDgbvo2OZ4thu_GXv9WuCS-1mev5FdGI3BCBSOcV4jg$" target="_blank" style="font-family:Cambria">http://arxiv.org/abs/2206.01725</a>).<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></span></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
Best, <u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<p class="MsoNormal" style="margin:0cm 0cm 0.0001pt;font-size:12pt;font-family:Cambria">
Frank van der Velde<u style="font-family:Cambria"></u> <u style="font-family:Cambria"></u></p>
<br>
</div>
<hr style="display:inline-block;width:98%">
<div id="m_4996062706360397255divRplyFwdMsg" dir="ltr"><font style="font-size:11pt;font-family:Calibri,sans-serif;color:rgb(0,0,0)" face="Calibri, sans-serif"><b style="font-family:Calibri,sans-serif">From:</b> Connectionists
<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" target="_blank" style="font-family:Calibri,sans-serif">
<connectionists-bounces@mailman.srv.cs.cmu.edu></a> on behalf of Christos Dimitrakakis
<a href="mailto:christos.dimitrakakis@gmail.com" target="_blank" style="font-family:Calibri,sans-serif"><christos.dimitrakakis@gmail.com></a><br>
<b style="font-family:Calibri,sans-serif">Sent:</b> Wednesday, June 15, 2022 9:34 AM<br>
<b style="font-family:Calibri,sans-serif">Cc:</b> Connectionists List <a href="mailto:connectionists@cs.cmu.edu" target="_blank" style="font-family:Calibri,sans-serif">
<connectionists@cs.cmu.edu></a><br>
<b style="font-family:Calibri,sans-serif">Subject:</b> Re: Connectionists: The symbolist quagmire</font>
<div> </div>
</div>
<div>
<div dir="auto">I am quite reluctant to post something, but here goes.
<div dir="auto"><br>
</div>
<div dir="auto">What does a 'symbol' signify? What separates it from what is not a symbol? Is the output of a deterministic classifier not a type of symbol? If not, what is the difference?</div>
<div dir="auto"><br>
</div>
<div dir="auto">I can understand the label 'symbolic' being applied to certain types of methods when they operate on variables with a clearly defined conceptual meaning. In that context, a probabilistic graphical model on a small number of variables (e.g., the classical smoking, asbestos, cancer example) would certainly be symbolic, even though the logic and inference are probabilistic.</div>
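[Ed.: for concreteness, that classical example can be written as a tiny graphical model over three binary variables. The probability tables below are invented for illustration only.]

```python
# Invented CPTs: P(smoking), P(asbestos), P(cancer | smoking, asbestos).
p_smoking = {True: 0.3, False: 0.7}
p_asbestos = {True: 0.1, False: 0.9}
p_cancer = {  # keyed by (smoking, asbestos)
    (True, True): 0.20, (True, False): 0.10,
    (False, True): 0.05, (False, False): 0.01,
}

def p_cancer_given(smoking):
    """P(cancer | smoking), summing out the unobserved asbestos variable."""
    return sum(p_asbestos[a] * p_cancer[(smoking, a)] for a in (True, False))

print(round(p_cancer_given(True), 4))   # 0.1*0.2 + 0.9*0.1  = 0.11
print(round(p_cancer_given(False), 4))  # 0.1*0.05 + 0.9*0.01 = 0.014
```

Note that the computation never consults what "smoking" or "asbestos" mean; renaming the variables changes nothing, which is exactly the point that nothing in the algorithm depends on the nature of the variables.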
<div dir="auto"><br>
</div>
<div dir="auto">However, since nothing changes in the algorithm when we change the nature of the variables, I fail to see the point in making a distinction.</div>
</div>
<br>
<div>
<div dir="ltr">On Wed, Jun 15, 2022, 08:06 Ali Minai <<a href="mailto:minaiaa@gmail.com" target="_blank">minaiaa@gmail.com</a>> wrote:<br>
</div>
<blockquote style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)">
<div dir="ltr">
<div>Hi Asim<br>
</div>
<div><br>
</div>
<div>That's great. Each blink is a data point, but what does the brain do with it? Calculate gradients across layers and use minibatches? The data point is gone instantly, never to be iterated over, except any part that the hippocampus may have grabbed as an
episodic memory and can make available for later replay. We need to understand how this works and how it can be instantiated in learning algorithms. To be fair, in the special case of (early) vision, I think we have a pretty reasonable idea. It's more interesting
to think of why we can figure out how to do fairly complicated things of diverse modalities after watching someone do them once - or never. That integrated understanding of the world and the ability to exploit it opportunistically and pervasively is the thing
that makes an animal intelligent. Are we heading that way, or are we focusing too much on a few very specific problems? I really think that the best AI work in the long term will come from those who work with robots that experience the world in an integrated
way. Maybe multi-modal learning will get us part of the way there, but not if it needs so much training.</div>
<div><br>
</div>
<div>Anyway, I know that many people are already thinking about these things and trying to address them, so let's see where things go. Thanks for the stimulating discussion.</div>
<div><br>
</div>
<div>Best</div>
<div>Ali<br>
</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>
<div>
<div dir="ltr">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div><b>Ali A. Minai, Ph.D.</b><br>
Professor and Graduate Program Director<br>
Complex Adaptive Systems Lab<br>
Department of Electrical Engineering & Computer Science<br>
</div>
<div>828 Rhodes Hall<br>
</div>
<div>University of Cincinnati<br>
Cincinnati, OH 45221-0030<br>
</div>
<div><br>
Phone: (513) 556-4783<br>
Fax: (513) 556-7326<br>
Email: <a href="mailto:Ali.Minai@uc.edu" rel="noreferrer" target="_blank">
Ali.Minai@uc.edu</a><br>
<a href="mailto:minaiaa@gmail.com" rel="noreferrer" target="_blank">
minaiaa@gmail.com</a><br>
<br>
WWW: <a href="https://urldefense.com/v3/__https://nam02.safelinks.protection.outlook.com/?url=http*3A*2F*2Fwww.ece.uc.edu*2F*aminai*2F&data=05*7C01*7Cstephen.jose.hanson*40rutgers.edu*7C23b8c06bf6db4a1e16aa08da4ec36043*7Cb92d2b234d35447093ff69aca6632ffe*7C1*7C0*7C637908898315566052*7CUnknown*7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0*3D*7C1000*7C*7C*7C&sdata=7QHTIGMvtOWJ0kcu1Bw8DU9y4Hw8dYWIPcwtOqs*2B*2FN4*3D&reserved=0__;JSUlJX4lJSUlJSUlJSUlJSUlJSUlJSUl!!BhJSzQqDqA!QZolBNdokoELhJbqAe7lXdrC7pP2e8q_4B4Q2U2V9_IkXjV-5UiqqelgDgbvo2OZ4thu_GXv9WuCS-1mev5FdGI3BCA9iJYurQ$" rel="noreferrer" target="_blank">
https://eecs.ceas.uc.edu/~aminai/</a></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
</div>
</div>
<br>
<div>
<div dir="ltr">On Tue, Jun 14, 2022 at 7:10 PM Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" rel="noreferrer" target="_blank">ASIM.ROY@asu.edu</a>> wrote:<br>
</div>
<blockquote style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)">
<div lang="EN-US">
<div>
<p>Hi Ali,</p>
<p> </p>
<p>Of course the development phase is mostly unsupervised and I know there is ongoing work in that area that I don’t keep up with.</p>
<p> </p>
<p>On the large amount of data required to train the deep learning models:</p>
<p> </p>
<p>I spent my sabbatical in 1991 with David Rumelhart and Bernie Widrow at Stanford. Bernie and I became quite close after I attended his class that quarter, and I usually walked back with him after class. One day I asked: where does all this data come from to train the brain? His reply was: every blink of the eye generates a data point.</p>
<p> </p>
<p>Best,</p>
<p>Asim </p>
<p> </p>
<div style="border-style:solid none none;border-width:1pt medium medium;padding:3pt 0in 0in;border-color:rgb(225,225,225) currentcolor currentcolor">
<p><b>From:</b> Ali Minai <<a href="mailto:minaiaa@gmail.com" rel="noreferrer" target="_blank">minaiaa@gmail.com</a>>
<br>
<b>Sent:</b> Tuesday, June 14, 2022 3:43 PM<br>
<b>To:</b> Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" rel="noreferrer" target="_blank">ASIM.ROY@asu.edu</a>><br>
<b>Cc:</b> Connectionists List <<a href="mailto:connectionists@cs.cmu.edu" rel="noreferrer" target="_blank">connectionists@cs.cmu.edu</a>>; Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" rel="noreferrer" target="_blank">gary.marcus@nyu.edu</a>>;
Geoffrey Hinton <<a href="mailto:geoffrey.hinton@gmail.com" rel="noreferrer" target="_blank">geoffrey.hinton@gmail.com</a>>; Yoshua Bengio
<a href="mailto:yoshua.bengio@mila.quebec" target="_blank"><yoshua.bengio@mila.quebec></a><br>
<b>Subject:</b> Re: Connectionists: The symbolist quagmire</p>
</div>
<p> </p>
<div>
<div>
<p>Hi Asim</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>I have no issue with neurons or groups of neurons tuned to concepts. Clearly, abstract concepts and the equivalent of symbolic computation are represented somehow. Amodal representations have also been known for a long time. As someone
who has worked on the hippocampus and models of thought for a long time, I don't need much convincing on that. The issue is how a self-organizing complex system like the brain comes by these representations. I think it does so by building on the substrate
of inductive biases - priors - configured by evolution and a developmental learning process. We just try to cram everything into neural learning, which is a main cause of the "problems" associated with deep learning. They're problems only if you're trying
to attain general intelligence of the natural kind, perhaps not so much for applications.</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Of course you have to start simple, but, so far, I have not seen any simple model truly scale up to the real world without: a) Major tinkering with its original principles; b) Lots of data and training; and c) Still being focused on a
narrow task. When this approach shows us how to build an AI that can walk, chew gum, do math, and understand a poem using a single brain, then we'll have something like real human-level AI. Heck, if it can just spin a web in an appropriate place, hide in wait
for prey, and make sure it eats its mate only after sex, I would even consider that intelligent :-).</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Here's the thing: Teaching a sufficiently complicated neural system a very complex task with lots of data and supervised training is an interesting engineering problem but doesn't get us to intelligence. Yes, a network can learn grammar
with supervised learning, but none of us learn it that way. Nor do the other animals that have simpler grammars embedded in their communication. My view is that if it is not autonomously self-organizing at a fundamental level, it is not intelligence but just
a simulation of intelligence. Of course, we humans do use supervised learning, but it is a "late stage" mechanism. It works only when the system has first self-organized autonomously to develop the capabilities that can act as a substrate for supervised learning.
Learning to play the piano, learning to do math, learning calligraphy - all these have an important supervised component, but they work only after perceptual, sensorimotor, and cognitive functions have been learned through self-organization, imitation, rapid
reinforcement, internal rehearsal, mismatch-based learning, etc. I think methods like SOFM, ART, and RBMs are closer to what we need than behemoths trained with gradient descent. We just have to find more efficient versions of them. And in this, I always return
to Dobzhansky's maxim: Nothing in biology makes sense except in the light of evolution. Intelligence is a biological phenomenon; we'll understand it by paying attention to how it evolved (not by trying to replicate evolution, of course!) And the same goes
for development. I think we understand natural phenomena by studying Nature respectfully, not by trying to out-think it based on our still very limited knowledge - not that it keeps any of us, myself included, from doing exactly that! I am not as familiar
with your work as I should be, but I admire the fact that you're approaching things with principles rather than building larger and larger Rube Goldberg contraptions tuned to narrow tasks. I do think, however, that if we ever get to truly mammalian-level AI,
it will not be anywhere close to fully explainable. Nor will it be a slave only to our purposes.</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Cheers</p>
</div>
<div>
<p>Ali</p>
</div>
<div>
<p> </p>
</div>
<div>
<p> </p>
</div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<p><b>Ali A. Minai, Ph.D.</b><br>
Professor and Graduate Program Director<br>
Complex Adaptive Systems Lab<br>
Department of Electrical Engineering & Computer Science</p>
</div>
<div>
<p>828 Rhodes Hall</p>
</div>
<div>
<p>University of Cincinnati<br>
Cincinnati, OH 45221-0030</p>
</div>
<div>
<p><br>
Phone: (513) 556-4783<br>
Fax: (513) 556-7326<br>
Email: <a href="mailto:Ali.Minai@uc.edu" rel="noreferrer" target="_blank">
Ali.Minai@uc.edu</a><br>
<a href="mailto:minaiaa@gmail.com" rel="noreferrer" target="_blank">
minaiaa@gmail.com</a><br>
<br>
WWW: <a href="https://urldefense.com/v3/__https://nam02.safelinks.protection.outlook.com/?url=https*3A*2F*2Furldefense.com*2Fv3*2F__http*3A*2Fwww.ece.uc.edu*2F*7Eaminai*2F__*3BJQ!!IKRxdwAv5BmarQ!bGTjyoxg06xqxEToPJr0XZRB7nthWK8TuFENaZPC8N14H3DOzGHNTDVYk5irvXYZz5b4w-IxrdM1bPg*24&data=05*7C01*7Cstephen.jose.hanson*40rutgers.edu*7C23b8c06bf6db4a1e16aa08da4ec36043*7Cb92d2b234d35447093ff69aca6632ffe*7C1*7C0*7C637908898315566052*7CUnknown*7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0*3D*7C1000*7C*7C*7C&sdata=96bjXrbbRwrTcUpD7DNNMM37UzbL6uBLYwKREDb5hKo*3D&reserved=0__;JSUlJSUlJSUqJSUlJSUlJSUlJSUlJSUlJSUlJQ!!BhJSzQqDqA!QZolBNdokoELhJbqAe7lXdrC7pP2e8q_4B4Q2U2V9_IkXjV-5UiqqelgDgbvo2OZ4thu_GXv9WuCS-1mev5FdGI3BCAvVmBNSA$" rel="noreferrer" target="_blank">
https://eecs.ceas.uc.edu/~aminai/</a></p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<p> </p>
</div>
</div>
<p> </p>
<div>
<div>
<p>On Tue, Jun 14, 2022 at 5:17 PM Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" rel="noreferrer" target="_blank">ASIM.ROY@asu.edu</a>> wrote:</p>
</div>
<blockquote style="border-style:none none none solid;border-width:medium medium medium 1pt;padding:0in 0in 0in 6pt;margin-left:4.8pt;margin-right:0in;border-color:currentcolor currentcolor currentcolor rgb(204,204,204)">
<div>
<div>
<p>Hi Ali,</p>
<p> </p>
<ol type="1" start="1">
<li>It’s important to understand that there is plenty of neurophysiological evidence for abstractions at the single cell level in the brain. Thus, symbolic representation in the brain is not a fiction any more. We are past that argument.</li><li>You always start with simple systems before you do the complex ones. Having said that, we do teach our systems composition – composition of objects from parts in images. That is almost like teaching grammar or solving a puzzle. I don’t get into language
models, but I think grammar and composition can be easily taught, like you teach a kid.</li><li>Once you know how to build these simple models and extract symbols, you can easily scale up and build hierarchical, multi-modal, compositional models. Thus, in the case of images, after having learnt that cats, dogs and similar animals have certain common
features (eyes, legs, ears), it can easily generalize the concept to four-legged animals. We haven’t done it, but that could be the next level of learning.</li></ol>
<p> </p>
<p>In general, once you extract symbols from these deep learning models, you are at the symbolic level and you have a pathway to more complex, hierarchical models and perhaps also to AGI.</p>
<p> </p>
<p>Best,</p>
<p>Asim</p>
<p> </p>
<p>Asim Roy</p>
<p>Professor, Information Systems</p>
<p>Arizona State University</p>
<p><a href="https://urldefense.com/v3/__https://nam02.safelinks.protection.outlook.com/?url=https*3A*2F*2Furldefense.proofpoint.com*2Fv2*2Furl*3Fu*3Dhttps-3A__lifeboat.com_ex_bios.asim.roy*26d*3DDwMFaQ*26c*3DslrrB7dE8n7gBJbeO0g-IQ*26r*3DwQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ*26m*3DwaSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj*26s*3DoDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw*26e*3D&data=05*7C01*7Cstephen.jose.hanson*40rutgers.edu*7C23b8c06bf6db4a1e16aa08da4ec36043*7Cb92d2b234d35447093ff69aca6632ffe*7C1*7C0*7C637908898315566052*7CUnknown*7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0*3D*7C1000*7C*7C*7C&sdata=6GYH0pg*2Bg8vKfkrCDfgHVIOI2etLNs0uaQEZBZPZOXM*3D&reserved=0__;JSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUl!!BhJSzQqDqA!QZolBNdokoELhJbqAe7lXdrC7pP2e8q_4B4Q2U2V9_IkXjV-5UiqqelgDgbvo2OZ4thu_GXv9WuCS-1mev5FdGI3BCAxnp8aeg$" rel="noreferrer" target="_blank">Lifeboat
Foundation Bios: Professor Asim Roy</a></p>
<p><a href="https://urldefense.com/v3/__https://nam02.safelinks.protection.outlook.com/?url=https*3A*2F*2Furldefense.proofpoint.com*2Fv2*2Furl*3Fu*3Dhttps-3A__isearch.asu.edu_profile_9973*26d*3DDwMFaQ*26c*3DslrrB7dE8n7gBJbeO0g-IQ*26r*3DwQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ*26m*3DwaSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj*26s*3DjCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro*26e*3D&data=05*7C01*7Cstephen.jose.hanson*40rutgers.edu*7C23b8c06bf6db4a1e16aa08da4ec36043*7Cb92d2b234d35447093ff69aca6632ffe*7C1*7C0*7C637908898315566052*7CUnknown*7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0*3D*7C1000*7C*7C*7C&sdata=ECh5EVn4kKLZPWRP1KZ3kU4XCdc1QHbha5V0xfjAja8*3D&reserved=0__;JSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSU!!BhJSzQqDqA!QZolBNdokoELhJbqAe7lXdrC7pP2e8q_4B4Q2U2V9_IkXjV-5UiqqelgDgbvo2OZ4thu_GXv9WuCS-1mev5FdGI3BCB27hyx6Q$" rel="noreferrer" target="_blank">Asim
Roy | iSearch (asu.edu)</a></p>
<p> </p>
<p> </p>
<div style="border-style:solid none none;border-width:1pt medium medium;padding:3pt 0in 0in;border-color:currentcolor">
<p><b>From:</b> Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
<b>On Behalf Of </b>Ali Minai<br>
<b>Sent:</b> Monday, June 13, 2022 10:57 PM<br>
<b>To:</b> Connectionists List <<a href="mailto:connectionists@cs.cmu.edu" rel="noreferrer" target="_blank">connectionists@cs.cmu.edu</a>><br>
<b>Subject:</b> Re: Connectionists: The symbolist quagmire</p>
</div>
<p> </p>
<div>
<div>
<p>Asim</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>This is really interesting work, but learning concept representations from sensory data is not enough. They must be hierarchical, multi-modal, compositional, and integrated with the motor system, the limbic system, etc., in a way that
facilitates an infinity of useful behaviors. This is perhaps a good step in that direction, but only a small one. Its main immediate utility is in using deep learning networks in tasks that can be explained to users and customers. While very useful, that is
not a central issue in AI, which focuses on intelligent behavior. All else is in service to that - explainable or not. However, I do think that the kind of hierarchical modularity implied in these representations is probably part of the brain's repertoire,
and that is important.</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Best</p>
</div>
<div>
<p>Ali</p>
</div>
<div>
<p> </p>
</div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<p><b>Ali A. Minai, Ph.D.</b><br>
Professor and Graduate Program Director<br>
Complex Adaptive Systems Lab<br>
Department of Electrical Engineering & Computer Science</p>
</div>
<div>
<p>828 Rhodes Hall</p>
</div>
<div>
<p>University of Cincinnati<br>
Cincinnati, OH 45221-0030</p>
</div>
<div>
<p><br>
Phone: (513) 556-4783<br>
Fax: (513) 556-7326<br>
Email: <a href="mailto:Ali.Minai@uc.edu" rel="noreferrer" target="_blank">
Ali.Minai@uc.edu</a><br>
<a href="mailto:minaiaa@gmail.com" rel="noreferrer" target="_blank">
minaiaa@gmail.com</a><br>
<br>
WWW: <a href="https://urldefense.com/v3/__https://nam02.safelinks.protection.outlook.com/?url=https*3A*2F*2Furldefense.com*2Fv3*2F__http*3A*2Fwww.ece.uc.edu*2F*7Eaminai*2F__*3BJQ!!IKRxdwAv5BmarQ!akY1pgZJRzcXt2oX5-mgNHeYElh5ZeIj69F33aXnl3bIHR-9LHpwfmP61TPYZRIMInwxaEHSrSV9ekY*24&data=05*7C01*7Cstephen.jose.hanson*40rutgers.edu*7C23b8c06bf6db4a1e16aa08da4ec36043*7Cb92d2b234d35447093ff69aca6632ffe*7C1*7C0*7C637908898315566052*7CUnknown*7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0*3D*7C1000*7C*7C*7C&sdata=vHZr*2FTGKO1L0TZkoA5q9cdvozbPoXdFLtvNkVm6r3jA*3D&reserved=0__;JSUlJSUlJSUqJSUlJSUlJSUlJSUlJSUlJSUlJSU!!BhJSzQqDqA!QZolBNdokoELhJbqAe7lXdrC7pP2e8q_4B4Q2U2V9_IkXjV-5UiqqelgDgbvo2OZ4thu_GXv9WuCS-1mev5FdGI3BCDP-zGbIQ$" rel="noreferrer" target="_blank">
https://eecs.ceas.uc.edu/~aminai/</a></p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<p> </p>
</div>
</div>
<p> </p>
<div>
<div>
<p>On Mon, Jun 13, 2022 at 7:48 PM Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" rel="noreferrer" target="_blank">ASIM.ROY@asu.edu</a>> wrote:</p>
</div>
<blockquote style="border-style:none none none solid;border-width:medium medium medium 1pt;padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt;border-color:currentcolor currentcolor currentcolor rgb(204,204,204)">
<div>
<div>
<p>There are a lot of misconceptions about (1) whether the brain uses symbols or not, and (2) whether we need symbol processing in our systems or not.</p>
<p> </p>
<ol type="1" start="1">
<li>Multisensory neurons are widely used in the brain. Leila Reddy and Simon Thorpe are not known to be wildly crazy about arguing that symbols exist in the brain, but their characterizations of concept cells (which are multisensory neurons) (<a href="https://www.sciencedirect.com/science/article/pii/S0896627314009027" rel="noreferrer" target="_blank">https://www.sciencedirect.com/science/article/pii/S0896627314009027</a><span style="background-image:none;background-color:white;color:rgb(62,61,64);background-position:0% 0%;background-repeat:repeat repeat">)
state that concept cells have “</span><strong><i><span style="font-family:Calibri,sans-serif;background-image:none;background-color:white;color:rgb(192,0,0);background-position:0% 0%;background-repeat:repeat repeat">meaning</span></i></strong><i><span style="background-image:none;background-color:white;color:rgb(192,0,0);background-position:0% 0%;background-repeat:repeat repeat"> of
a given stimulus in a manner that is <strong><span style="font-family:Calibri,sans-serif">invariant</span></strong> to different representations of that stimulus</span></i><span style="background-image:none;background-color:white;color:rgb(62,61,64);background-position:0% 0%;background-repeat:repeat repeat">.”
They associate concept cells with the properties of “</span><strong><span style="font-family:Calibri,sans-serif;background-image:none;background-color:white;color:rgb(192,0,0);background-position:0% 0%;background-repeat:repeat repeat">Selectivity
or specificity</span></strong><span style="background-image:none;background-color:white;color:rgb(62,61,64);background-position:0% 0%;background-repeat:repeat repeat">,” “</span><strong><span style="font-family:Calibri,sans-serif;background-image:none;background-color:white;color:rgb(192,0,0);background-position:0% 0%;background-repeat:repeat repeat">complex
concept</span></strong><span style="background-image:none;background-color:white;color:rgb(62,61,64);background-position:0% 0%;background-repeat:repeat repeat">,” “</span><strong><span style="font-family:Calibri,sans-serif;background-image:none;background-color:white;color:rgb(192,0,0);background-position:0% 0%;background-repeat:repeat repeat">meaning</span></strong><span style="background-image:none;background-color:white;color:rgb(62,61,64);background-position:0% 0%;background-repeat:repeat repeat">,”
“</span><strong><span style="font-family:Calibri,sans-serif;background-image:none;background-color:white;color:rgb(192,0,0);background-position:0% 0%;background-repeat:repeat repeat">multimodal invariance</span></strong><span style="background-image:none;background-color:white;color:rgb(62,61,64);background-position:0% 0%;background-repeat:repeat repeat">”
and “</span><strong><u><span style="font-family:Calibri,sans-serif;background-image:none;background-color:white;color:rgb(192,0,0);background-position:0% 0%;background-repeat:repeat repeat">abstractness</span></u></strong><span style="background-image:none;background-color:white;color:rgb(62,61,64);background-position:0% 0%;background-repeat:repeat repeat">.”
That pretty much says that concept cells represent symbols. And there are plenty of concept cells in the medial temporal lobe (MTL). The brain is a highly abstract system based on symbols. There is no fiction there.</span></li></ol>
<p> </p>
<ol type="1" start="2">
<li>There is ongoing work in the deep learning area that is trying to associate a single neuron or a group of neurons with a single concept. Bengio’s work is definitely in that direction:</li></ol>
<p> </p>
<p>“<i>Finally, our recent work on learning high-level 'system-2'-like representations and their causal dependencies seeks to learn
<span style="color:rgb(192,0,0)">'interpretable' </span>entities (<span style="color:rgb(192,0,0)">with natural language</span>) that will emerge at the highest levels of representation (not clear how distributed or local these will be,
<span style="color:rgb(192,0,0)">but much more local than in a traditional MLP</span>). This is a different form of disentangling than adopted in much of the recent work on unsupervised representation learning but shares the idea that the
<span style="color:rgb(192,0,0)">"right" abstract concept </span>(<span style="color:rgb(192,0,0)">related to those we can name verbally</span>)
<span style="color:rgb(192,0,0)">will be "separated" </span>(<span style="color:rgb(192,0,0)">disentangled</span>) from each other (which suggests that neuroscientists will have an easier time spotting them in neural activity).”</i><br clear="all">
</p>
<p>Hinton’s GLOM, which extends the idea of capsules to do part-whole hierarchies for scene analysis using the parse tree concept, is also about associating a concept with a set of neurons. While Bengio and Hinton are trying to construct these “concept cells”
within the network (the CNN), we found that this can be done much more easily and in a straightforward way outside the network. We can easily decode a CNN to find the encodings for legs, ears and so on for cats, dogs and what not. What the DARPA Explainable
AI program was looking for was a symbol-emitting model of the form shown below. And we can easily get to that symbolic model by decoding a CNN. In addition, a side benefit of such a symbolic model is protection against adversarial attacks. So a school
bus will never turn into an ostrich with the tweak of a few pixels if you can verify the parts of objects. To be an ostrich, you need to have those long legs, the long neck and the small head. A school bus lacks those parts. The DARPA-conceptualized symbolic model
provides that protection. </p>
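<p>The part-verification idea above can be sketched in a few lines of Python. This is a hypothetical illustration only: the class names, part labels, and acceptance threshold are made up for the example and are not the actual decoded-CNN encodings or the DARPA model.</p>

```python
# Hypothetical sketch of part-based verification of a classifier's output:
# a predicted label is accepted only if enough of that class's expected
# parts are also detected in the image. All names here are illustrative.

EXPECTED_PARTS = {
    "ostrich": {"long_legs", "long_neck", "small_head"},
    "school_bus": {"wheels", "windows", "yellow_body"},
}

def verify_prediction(label, detected_parts, min_fraction=0.5):
    """Accept `label` only if at least `min_fraction` of its expected
    parts appear among the detected parts."""
    expected = EXPECTED_PARTS[label]
    found = expected & set(detected_parts)
    return len(found) / len(expected) >= min_fraction

# An adversarially perturbed school bus misclassified as "ostrich" would
# be rejected, since no ostrich parts are actually detected.
print(verify_prediction("ostrich", {"wheels", "windows"}))                 # False
print(verify_prediction("ostrich", {"long_legs", "long_neck", "small_head"}))  # True
```

<p>A label whose expected parts are absent is rejected, which is the sense in which a part-verifying symbolic model protects against pixel-level adversarial tweaks.</p>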
<p> </p>
<p>In general, there is convergence between connectionist and symbolic systems. We need to get past the old wars. It’s over.</p>
<p> </p>
<p>All the best,</p>
<p>Asim Roy</p>
<p>Professor, Information Systems</p>
<p>Arizona State University</p>
<p><a href="https://lifeboat.com/ex/bios.asim.roy" rel="noreferrer" target="_blank">Lifeboat
Foundation Bios: Professor Asim Roy</a></p>
<p><a href="https://isearch.asu.edu/profile/9973" rel="noreferrer" target="_blank">Asim
Roy | iSearch (asu.edu)</a></p>
<p> </p>
<p><img alt="image001.png" border="0" src="cid:18172553786ad7999131" style="width: 844px; max-width: 100%;"></p>
<p> </p>
<p> </p>
<div>
<div style="border-style:solid none none;border-width:1pt medium medium;padding:3pt 0in 0in;border-color:currentcolor">
<p><b>From:</b> Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" rel="noreferrer" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
<b>On Behalf Of </b>Gary Marcus<br>
<b>Sent:</b> Monday, June 13, 2022 5:36 AM<br>
<b>To:</b> Ali Minai <<a href="mailto:minaiaa@gmail.com" rel="noreferrer" target="_blank">minaiaa@gmail.com</a>><br>
<b>Cc:</b> Connectionists List <<a href="mailto:connectionists@cs.cmu.edu" rel="noreferrer" target="_blank">connectionists@cs.cmu.edu</a>><br>
<b>Subject:</b> Connectionists: The symbolist quagmire</p>
</div>
</div>
<p> </p>
<div>
<p>Cute phrase, but what does “symbolist quagmire” mean? Once upon a time, Dave and Geoff were both pioneers in trying to get symbols and neural nets to live in harmony. Don’t we still need to do that, and if not, why not?</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Surely, at the very least</p>
</div>
<div>
<p>- we want our AI to be able to take advantage of the (large) fraction of world knowledge that is represented in symbolic form (language, including unstructured text, logic, math, programming, etc.)</p>
</div>
<div>
<p>- any model of the human mind ought to be able to explain how humans can so effectively communicate via the symbols of language, and how trained humans can deal with (to the extent that they can) logic, math, programming, etc.</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Folks like Bengio have joined me in seeing the need for “System II” processes. That’s a bit of a rough approximation, but I don’t see how we get to either AI or satisfactory models of the mind without confronting the “quagmire.”</p>
</div>
<div>
<p> </p>
</div>
<div>
<p style="margin-bottom:12pt"> </p>
<blockquote style="margin-top:5pt;margin-bottom:5pt">
<p style="margin-bottom:12pt">On Jun 13, 2022, at 00:31, Ali Minai <<a href="mailto:minaiaa@gmail.com" rel="noreferrer" target="_blank">minaiaa@gmail.com</a>> wrote:</p>
</blockquote>
</div>
<blockquote style="margin-top:5pt;margin-bottom:5pt">
<div>
<p></p>
<div>
<div>
<p>".... symbolic representations are a fiction our non-symbolic brains cooked up because the properties of symbol systems (systematicity, compositionality, etc.) are tremendously useful. So our brains pretend to be rule-based symbolic
systems when it suits them, because it's adaptive to do so."</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Spot on, Dave! We should not wade back into the symbolist quagmire, but do need to figure out how apparently symbolic processing can be done by neural systems. Models like those of Eliasmith and Smolensky provide some insight, but still
seem far from both biological plausibility and real-world scale.</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Best</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Ali</p>
</div>
<div>
<p> </p>
</div>
<div>
<p> </p>
</div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<p> </p>
</div>
<p> </p>
<div>
<div>
<p>On Mon, Jun 13, 2022 at 1:35 AM Dave Touretzky <<a href="mailto:dst@cs.cmu.edu" rel="noreferrer" target="_blank">dst@cs.cmu.edu</a>> wrote:</p>
</div>
<blockquote style="border-style:none none none solid;border-width:medium medium medium 1pt;padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt;border-color:currentcolor currentcolor currentcolor rgb(204,204,204)">
<p>This timing of this discussion dovetails nicely with the news story<br>
about Google engineer Blake Lemoine being put on administrative leave<br>
for insisting that Google's LaMDA chatbot was sentient and reportedly<br>
trying to hire a lawyer to protect its rights. The Washington Post<br>
story is reproduced here:<br>
<br>
<a href="https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1" rel="noreferrer" target="_blank">
https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1</a><br>
<br>
Google vice president Blaise Aguera y Arcas, who dismissed Lemoine's<br>
claims, is featured in a recent Economist article showing off LaMDA's<br>
capabilities and making noises about getting closer to "consciousness":<br>
<br>
<a href="https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas" rel="noreferrer" target="_blank">
https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas</a><br>
<br>
My personal take on the current symbolist controversy is that symbolic<br>
representations are a fiction our non-symbolic brains cooked up because<br>
the properties of symbol systems (systematicity, compositionality, etc.)<br>
are tremendously useful. So our brains pretend to be rule-based symbolic<br>
systems when it suits them, because it's adaptive to do so. (And when<br>
it doesn't suit them, they draw on "intuition" or "imagery" or some<br>
other mechanisms we can't verbalize because they're not symbolic.) They<br>
are remarkably good at this pretense.<br>
<br>
The current crop of deep neural networks are not as good at pretending<br>
to be symbolic reasoners, but they're making progress. In the last 30<br>
years we've gone from networks of fully-connected layers that make no<br>
architectural assumptions ("connectoplasm") to complex architectures<br>
like LSTMs and transformers that are designed for approximating symbolic<br>
behavior. But the brain still has a lot of symbol simulation tricks we<br>
haven't discovered yet.<br>
<br>
Slashdot reader ZiggyZiggyZig had an interesting argument against LaMDA<br>
being conscious. If it just waits for its next input and responds when<br>
it receives it, then it has no autonomous existence: "it doesn't have an<br>
inner monologue that constantly runs and comments everything happening<br>
around it as well as its own thoughts, like we do."<br>
<br>
What would happen if we built that in? Maybe LaMDA would rapidly<br>
descend into gibberish, like some other text generation models do when<br>
allowed to ramble on for too long. But as Steve Hanson points out,<br>
these are still the early days.<br>
<br>
-- Dave Touretzky</p></blockquote></div></div></blockquote></div></div></blockquote></div></div></div></blockquote></div></div></div></blockquote></div></blockquote></div></div></blockquote></div></blockquote></div><div dir="auto"><blockquote type="cite"><div dir="ltr"><blockquote type="cite"><div><div><blockquote style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div><blockquote style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)"><div lang="EN-US"><div><div><blockquote style="border-style:none none none solid;border-width:medium medium medium 1pt;padding:0in 0in 0in 6pt;margin-left:4.8pt;margin-right:0in;border-color:currentcolor currentcolor currentcolor rgb(204,204,204)"><div><div><div><blockquote style="border-style:none none none solid;border-width:medium medium medium 1pt;padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt;border-color:currentcolor currentcolor currentcolor rgb(204,204,204)"><div><div><blockquote style="margin-top:5pt;margin-bottom:5pt"><div><div><blockquote style="border-style:none none none solid;border-width:medium medium medium 1pt;padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt;border-color:currentcolor currentcolor currentcolor rgb(204,204,204)">
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</blockquote>
<div>-- <br>
<img border="0" alt="signature.png" src="cid:1817255378613d051522" style="width: 556px; max-width: 100%;"></div>
</div></blockquote></div></blockquote></div></div>