<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#ecca99">
<p><font size="+1"><font face="monospace">Ali, agreed with all, very
nicely stated. One thing, though.. I started out 45 years
ago studying animal behavior for exactly the reasons you
outline below --thought it might be possible to bootstrap up..
but connectionism in the 80s seemed to suggest there were
common elements in computational analysis and models that
were not so restricted by species-specific behavior.. but more
in terms of brain complexity.. and here we are 30 years
later.. with AI finally coming into focus.. as neural blobs.</font></font></p>
<p><font size="+1"><font face="monospace">Not clear what happens
next. I am pretty sure it won't be the symbolic quagmire
again. <br>
</font></font></p>
<p><font size="+1"><font face="monospace">Steve<br>
</font></font></p>
<div class="moz-cite-prefix">On 6/13/22 2:22 PM, Ali Minai wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CABG3s4sfJPi6knGR8aF_4LMmpSS2E+3Hy78gr-8H8fxK=eq3cw@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div>Gary and Steve</div>
<div><br>
</div>
<div>My use of the phrase "symbolic quagmire" referred only to
the explicitly symbolic AI models that dominated from the 60s
through the 80s. It was not meant to diminish the importance
of understanding symbolic processing and how a distributed,
self-organizing system like the brain does it. That is
absolutely crucial - as long as we let the systems be
brain-like, and not force-fit them into our abstract views of
symbolic processing (not saying that anyone here is doing
that, but some others are).</div>
<div><br>
</div>
<div>My own - frankly biased and entirely intuitive - opinion is
that once we have a sufficiently brain-like system with the
kind of hierarchical modularity we see in the brain, and
sufficiently brain-like learning mechanisms in all their
aspects (base of evolutionary inductive biases, realized
initially through unsupervised learning, fast RL on top of
these coupled with development, then - later - supervised
learning in a more mature system, learning through internal
rehearsal, learning by prediction mismatch/resonance, use of
coordination modes/synergies, etc., etc.), processing that we
can interpret as symbolic and compositional will emerge
naturally. To this end, we can try to infer neural mechanisms
underlying this from experiments and theory (as Bengio seems
to be doing), but I have a feeling that it will be hard if we
focus only on humans and human-level processing. First, it's
very hard to do controlled experiments at the required
resolution, and second, this is the most complex instance. As
Prof. Smith says in the companion thread, we should ask if
animals too do what we would regard as symbolic processing,
and I think that a case can be made that they do, albeit at a
much simpler level. I have long been fascinated by the data
suggesting that birds - perhaps even fish - have the concept
of numerical order, and even something like a number line. If
we could understand how those simpler brains do it, it might
be easier to bootstrap up to more complex instances.</div>
<div><br>
</div>
<div>Ultimately we'll understand higher cognition by
understanding how it evolved from less complex cognition. For
example, several people have suggested that abstract
representations might be much higher-dimensional cortical
analogs of the 2-dimensional hippocampal place representations
(2-d in rats - maybe higher-d in primates). That would be
consistent with the fact that so much of our abstract
reasoning uses spatial and directional metaphors. Re. System I
and System II, with all due respect to Kahneman, that is
surely a simplification. If we were to look phylogenetically,
we would see the layered emergence of more and more complex
minds all the way from the Cambrian to now. The binary I and
II division should be replaced by a sequence of systems,
though, as with everything in evolution, there are a few major
punctuations of transformational "enabling technologies", such
as the bilaterian architecture at the start of the Cambrian,
the vertebrate architecture, the hippocampus, and the cortex.
<br>
</div>
<div><br>
</div>
<div>Truly hybrid systems - neural networks working in tandem
with explicitly symbolic systems - might be a short-term route
to addressing specific tasks, but will not give us fundamental
insight. That is exactly the kind of "error" that Gary has so
correctly attributed to much of current machine learning. I
realize that reductionistic analysis and modeling is the
standard way we understand systems scientifically, but complex
systems are resistant to such analysis.</div>
<div><br>
</div>
<div>Best</div>
<div>Ali</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>
<div>
<div dir="ltr" data-smartmail="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div><b>Ali A. Minai, Ph.D.</b><br>
Professor and Graduate Program
Director<br>
Complex Adaptive Systems Lab<br>
Department of Electrical
Engineering & Computer Science<br>
</div>
<div>828 Rhodes Hall<br>
</div>
<div>University of Cincinnati<br>
Cincinnati, OH 45221-0030<br>
</div>
<div><br>
Phone: (513) 556-4783<br>
Fax: (513) 556-7326<br>
Email: <a
href="mailto:Ali.Minai@uc.edu"
target="_blank"
moz-do-not-send="true">Ali.Minai@uc.edu</a><br>
<a
href="mailto:minaiaa@gmail.com"
target="_blank"
moz-do-not-send="true">minaiaa@gmail.com</a><br>
<br>
WWW: <a
href="http://www.ece.uc.edu/%7Eaminai/"
target="_blank"
moz-do-not-send="true">https://eecs.ceas.uc.edu/~aminai/</a></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Mon, Jun 13, 2022 at 1:37
PM <<a href="mailto:jose@rubic.rutgers.edu" target="_blank"
moz-do-not-send="true">jose@rubic.rutgers.edu</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#ecca99">
<p><font size="+1"><font face="monospace">Well. your
conclusion is based on some hearsay and a talk he
gave, I talked with him directly and we discussed what</font></font></p>
<p><font size="+1"><font face="monospace">you are calling
System II, which just means explicit memory/learning to
me and him.. he has no intention of incorporating
anything like symbols or</font></font></p>
<p><font size="+1"><font face="monospace">hybrid
Neural/Symbol systems.. he does intend to model
conscious symbol manipulation, more in the way Dave T.
outlined.<br>
</font></font></p>
<p><font size="+1"><font face="monospace">AND, I'm sure if
he was seeing this.. he would say... "Steve's right".</font></font></p>
<p><font size="+1"><font face="monospace">Steve</font></font><br>
</p>
<div>On 6/13/22 1:10 PM, Gary Marcus wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">I don’t think i need to read your
conversation to have serious doubts about your
conclusion, but feel free to reprise the arguments here.
</div>
<div dir="ltr"><br>
<blockquote type="cite">On Jun 13, 2022, at 08:44, <a
href="mailto:jose@rubic.rutgers.edu" target="_blank"
moz-do-not-send="true">jose@rubic.rutgers.edu</a>
wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<p><font size="+1"><font face="monospace">We prefer
the explicit/implicit cognitive psych refs., but
System II is not symbolic.</font></font></p>
<p><font size="+1"><font face="monospace">See the
AIHUB conversation about this.. we discuss this
specifically.</font></font></p>
<p><font size="+1"><font face="monospace"><br>
</font></font></p>
<p><font size="+1"><font face="monospace">Steve</font></font></p>
<p><br>
</p>
<div>On 6/13/22 10:00 AM, Gary Marcus wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Please reread my sentence and reread
his recent work. Bengio has absolutely joined in
calling for System II processes. A sample is his
2019 NeurIPS keynote: <a
href="https://urldefense.com/v3/__https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/__;!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVOgyztpc$"
target="_blank" moz-do-not-send="true">https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/</a></div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Whether he wants to call it a hybrid
approach is his business but he certainly sees
that traditional approaches are not covering
things like causality and abstract generalization.
Maybe he will find a new way, but he recognizes
what has not been covered with existing ways. </div>
<div dir="ltr"><br>
</div>
<div dir="ltr">And he is emphasizing both
relationships and out-of-distribution learning,
just as I have been for a long time. From his most
recent arXiv a few days ago, the first two
sentences of which sound almost exactly like what
I have been saying for years:</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">
<div>[Submitted on 9 Jun 2022]</div>
<h1 style="line-height:27.9936px;margin:0.25em 0px
12px 20px;font-family:"Lucida
Grande",Helvetica,Arial,sans-serif;font-size:1.8em">On
Neural Architecture Inductive Biases for
Relational Tasks</h1>
<div><a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Kerg*2C*G__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEV3gZmAsw$"
style="text-decoration:none;font-size:medium"
target="_blank" moz-do-not-send="true">Giancarlo
Kerg</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Mittal*2C*S__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVLC65Ftc$"
style="text-decoration:none;font-size:medium"
target="_blank" moz-do-not-send="true">Sarthak
Mittal</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Rolnick*2C*D__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVsXExRpc$"
style="text-decoration:none;font-size:medium"
target="_blank" moz-do-not-send="true">David
Rolnick</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Bengio*2C*Y__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVTTRf_9g$"
style="text-decoration:none;font-size:medium"
target="_blank" moz-do-not-send="true">Yoshua
Bengio</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Richards*2C*B__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVnyKkuNY$"
style="text-decoration:none;font-size:medium"
target="_blank" moz-do-not-send="true">Blake
Richards</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Lajoie*2C*G__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVa03VLYM$"
style="text-decoration:none;font-size:medium"
target="_blank" moz-do-not-send="true">Guillaume
Lajoie</a></div>
<blockquote
style="line-height:1.55;font-size:1.05em;margin-bottom:21.6px;background-color:white;border-left-width:0px;padding:0px;font-family:"Lucida
Grande",Helvetica,Arial,sans-serif">Current
deep learning approaches have shown good
in-distribution generalization performance, but
struggle with out-of-distribution
generalization. This is especially true in the
case of tasks involving abstract relations like
recognizing rules in sequences, as we find in
many intelligence tests. Recent work has
explored how forcing relational representations
to remain distinct from sensory representations,
as it seems to be the case in the brain, can
help artificial systems. Building on this work,
we further explore and formalize the advantages
afforded by 'partitioned' representations of
relations and sensory details, and how this
inductive bias can help recompose learned
relational structure in newly encountered
settings. We introduce a simple architecture
based on similarity scores which we name
Compositional Relational Network (CoRelNet).
Using this model, we investigate a series of
inductive biases that ensure abstract relations
are learned and represented distinctly from
sensory data, and explore their effects on
out-of-distribution generalization for a series
of relational psychophysics tasks. We find that
simple architectural choices can outperform
existing models in out-of-distribution
generalization. Together, these results show
that partitioning relational representations
from other information streams may be a simple
way to augment existing network architectures'
robustness when performing out-of-distribution
relational computations.</blockquote>
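<div><br>
</div>
<div dir="ltr">A minimal sketch of the "similarity scores" idea
described in this abstract (illustrative only, not the authors'
CoRelNet code; the class name, layer sizes, and the toy task are
assumptions): each object is encoded, relations are computed purely
as pairwise similarities among the encodings, and the downstream
decision is read off those similarities alone, keeping relational
and sensory information partitioned.</div>
<pre>
# Illustrative sketch of a similarity-score relational module
# (assumed names/shapes; not the authors' CoRelNet implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityRelationNet(nn.Module):
    """Toy 'partitioned' relational model: the decoder sees only the
    pairwise similarity matrix, never the raw sensory embeddings."""
    def __init__(self, in_dim, embed_dim, n_objects, n_classes):
        super().__init__()
        self.encoder = nn.Linear(in_dim, embed_dim)                 # sensory pathway
        self.decoder = nn.Linear(n_objects * n_objects, n_classes)  # relational pathway

    def forward(self, x):                            # x: (batch, n_objects, in_dim)
        z = F.normalize(self.encoder(x), dim=-1)     # encode and L2-normalize objects
        rel = torch.bmm(z, z.transpose(1, 2))        # pairwise similarity scores
        rel = torch.softmax(rel, dim=-1)             # attention-style normalization
        return self.decoder(rel.flatten(1))          # decision from relations only

# Example: a same/different-style judgment over 3 objects
model = SimilarityRelationNet(in_dim=32, embed_dim=16, n_objects=3, n_classes=2)
logits = model(torch.randn(4, 3, 32))
print(logits.shape)   # torch.Size([4, 2])
</pre>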
<blockquote
style="line-height:1.55;font-size:1.05em;margin-bottom:21.6px;background-color:white;border-left-width:0px;padding:0px;font-family:"Lucida
Grande",Helvetica,Arial,sans-serif"><br>
</blockquote>
<blockquote
style="line-height:1.55;font-size:1.05em;margin-bottom:21.6px;background-color:white;border-left-width:0px;padding:0px;font-family:"Lucida
Grande",Helvetica,Arial,sans-serif">Kind of
scandalous that he doesn’t ever cite me for
having framed that argument, even if I have
repeatedly called his attention to that
oversight, but that’s another story for another day,
in which I elaborate on some of Schmidhuber’s
observations on history.</blockquote>
</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Gary</div>
<div dir="ltr"><br>
<blockquote type="cite">On Jun 13, 2022, at 06:44,
<a href="mailto:jose@rubic.rutgers.edu"
target="_blank" moz-do-not-send="true">jose@rubic.rutgers.edu</a>
wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<p><font size="+1"><font face="monospace">No
Yoshua has *not* joined you ---Explicit
processes, memory, problem solving. .are
not Symbolic per se. <br>
</font></font></p>
<p><font size="+1"><font face="monospace">These
original distinctions in memory and
learning were from Endel Tulving and of
course there are brain structures that
support the distinctions.<br>
</font></font></p>
<p><font size="+1"><font face="monospace">and
Yoshua is clear about that in discussions
I had with him in AIHUB<br>
</font></font></p>
<p><font size="+1"><font face="monospace">He's
definitely not looking to create some
hybrid approach..</font></font></p>
<p><font size="+1"><font face="monospace">Steve</font></font><br>
</p>
<div>On 6/13/22 8:36 AM, Gary Marcus wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Cute phrase, but what does
“symbolist quagmire” mean? Once upon a time,
Dave and Geoff were both pioneers in trying
to get symbols and neural nets to live
in harmony. Don’t we still need to do that, and
if not, why not?</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Surely, at the very least</div>
<div dir="ltr">- we want our AI to be able to
take advantage of the (large) fraction of
world knowledge that is represented in
symbolic form (language, including
unstructured text, logic, math, programming,
etc.)</div>
<div dir="ltr">- any model of the human mind
ought be able to explain how humans can so
effectively communicate via the symbols of
language and how trained humans can deal
with (to the extent that can) logic, math,
programming, etc</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Folks like Bengio have joined
me in seeing the need for “System II”
processes. That’s a bit of a rough
approximation, but I don’t see how we get to
either AI or satisfactory models of the mind
without confronting the “quagmire”.</div>
<div dir="ltr"><br>
</div>
<div dir="ltr"><br>
<blockquote type="cite">On Jun 13, 2022, at
00:31, Ali Minai <a
href="mailto:minaiaa@gmail.com"
target="_blank" moz-do-not-send="true"><minaiaa@gmail.com></a>
wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div>".... symbolic representations are
a fiction our non-symbolic brains
cooked up because the properties of
symbol systems (systematicity,
compositionality, etc.) are
tremendously useful. So our brains
pretend to be rule-based symbolic
systems when it suits them, because
it's adaptive to do so."</div>
<div><br>
</div>
<div>Spot on, Dave! We should not wade
back into the symbolist quagmire, but
do need to figure out how apparently
symbolic processing can be done by
neural systems. Models like those of
Eliasmith and Smolensky provide some
insight, but still seem far from both
biological plausibility and real-world
scale.</div>
<div><br>
</div>
<div>Best</div>
<div><br>
</div>
<div>Ali<br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>
<div dir="ltr">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div><b>Ali A.
Minai, Ph.D.</b><br>
Professor and
Graduate
Program
Director<br>
Complex
Adaptive
Systems Lab<br>
Department of
Electrical
Engineering
& Computer
Science<br>
</div>
<div>828
Rhodes Hall<br>
</div>
<div>University
of Cincinnati<br>
Cincinnati, OH
45221-0030<br>
</div>
<div><br>
Phone: (513)
556-4783<br>
Fax: (513)
556-7326<br>
Email: <a
href="mailto:Ali.Minai@uc.edu"
target="_blank" moz-do-not-send="true">Ali.Minai@uc.edu</a><br>
<a
href="mailto:minaiaa@gmail.com" target="_blank" moz-do-not-send="true">minaiaa@gmail.com</a><br>
<br>
WWW: <a
href="https://urldefense.com/v3/__http://www.ece.uc.edu/*7Eaminai/__;JQ!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXYkS9WrFA$"
target="_blank" moz-do-not-send="true">https://eecs.ceas.uc.edu/~aminai/</a></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On
Mon, Jun 13, 2022 at 1:35 AM Dave
Touretzky <<a
href="mailto:dst@cs.cmu.edu"
target="_blank"
moz-do-not-send="true">dst@cs.cmu.edu</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">This
timing of this discussion dovetails
nicely with the news story<br>
about Google engineer Blake Lemoine
being put on administrative leave<br>
for insisting that Google's LaMDA
chatbot was sentient and reportedly<br>
trying to hire a lawyer to protect its
rights. The Washington Post<br>
story is reproduced here:<br>
<br>
<a
href="https://urldefense.com/v3/__https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1__;!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXapZaIeUg$"
rel="noreferrer" target="_blank"
moz-do-not-send="true">https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1</a><br>
<br>
Google vice president Blaise Aguera y
Arcas, who dismissed Lemoine's<br>
claims, is featured in a recent
Economist article showing off LaMDA's<br>
capabilities and making noises about
getting closer to "consciousness":<br>
<br>
<a
href="https://urldefense.com/v3/__https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas__;!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXbgg32qHQ$"
rel="noreferrer" target="_blank"
moz-do-not-send="true">https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas</a><br>
<br>
My personal take on the current
symbolist controversy is that symbolic<br>
representations are a fiction our
non-symbolic brains cooked up because<br>
the properties of symbol systems
(systematicity, compositionality,
etc.)<br>
are tremendously useful. So our
brains pretend to be rule-based
symbolic<br>
systems when it suits them, because
it's adaptive to do so. (And when<br>
it doesn't suit them, they draw on
"intuition" or "imagery" or some<br>
other mechanisms we can't verbalize
because they're not symbolic.) They<br>
are remarkably good at this pretense.<br>
<br>
The current crop of deep neural
networks are not as good at pretending<br>
to be symbolic reasoners, but they're
making progress. In the last 30<br>
years we've gone from networks of
fully-connected layers that make no<br>
architectural assumptions
("connectoplasm") to complex
architectures<br>
like LSTMs and transformers that are
designed for approximating symbolic<br>
behavior. But the brain still has a
lot of symbol simulation tricks we<br>
haven't discovered yet.<br>
<br>
Slashdot reader ZiggyZiggyZig had an
interesting argument against LaMDA<br>
being conscious. If it just waits for
its next input and responds when<br>
it receives it, then it has no
autonomous existence: "it doesn't have
an<br>
inner monologue that constantly runs
and comments everything happening<br>
around it as well as its own thoughts,
like we do."<br>
<br>
What would happen if we built that
in? Maybe LaMDA would rapidly<br>
descend into gibberish, like some
other text generation models do when<br>
allowed to ramble on for too long.
But as Steve Hanson points out,<br>
these are still the early days.<br>
<br>
-- Dave Touretzky<br>
</blockquote>
</div>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</body>
</html>