<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#ecca99">
<p><font size="+1"><font face="monospace">We prefer the
explicit/implicit cognitive psych refs. but System II is not
symbolic.</font></font></p>
<p><font size="+1"><font face="monospace">See the AIHUB conversation
about this.. we discuss this specifically.</font></font></p>
<p><font size="+1"><font face="monospace"><br>
</font></font></p>
<p><font size="+1"><font face="monospace">Steve</font></font></p>
<p><font size="+1"><font face="monospace"></font></font><br>
</p>
<div class="moz-cite-prefix">On 6/13/22 10:00 AM, Gary Marcus wrote:<br>
</div>
<blockquote type="cite"
cite="mid:5FE7AD49-0551-4E83-8530-5DC88337E22A@nyu.edu">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">Please reread my sentence and reread his recent
work. Bengio has absolutely joined in calling for System II
processes. Sample is his 2019 NeurIPS keynote: <a
href="https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/"
moz-do-not-send="true">https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/</a></div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Whether he wants to call it a hybrid approach is
his business but he certainly sees that traditional approaches
are not covering things like causality and abstract
generalization. Maybe he will find a new way, but he recognizes
what has not been covered with existing ways. </div>
<div dir="ltr"><br>
</div>
<div dir="ltr">And he is emphasizing both relationships and out of
distribution learning, just as I have been for a long time. From
his most recent arXiv a few days ago, the first two sentences of
which sounds almost exactly like what I have been saying for
years:</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">
<div class="dateline" style="-webkit-text-size-adjust: auto;
margin: 15px 0px 0px 20px; font-style: italic; font-size:
0.9em; font-family: "Lucida Grande", Helvetica,
Arial, sans-serif;">Submitted on 9 Jun 2022]</div>
<h1 class="title mathjax" style="-webkit-text-size-adjust: auto;
line-height: 27.99359893798828px; margin-block: 12px; margin:
0.25em 0px 12px 20px; margin-inline-start: 20px; font-family:
"Lucida Grande", Helvetica, Arial, sans-serif;
font-size: 1.8em !important;">On Neural Architecture Inductive
Biases for Relational Tasks</h1>
<div class="authors" style="-webkit-text-size-adjust: auto;
margin: 8px 0px 8px 20px; font-size: 1.2em; line-height: 24px;
font-family: "Lucida Grande", Helvetica, Arial,
sans-serif;"><a
href="https://arxiv.org/search/cs?searchtype=author&query=Kerg%2C+G"
style="text-decoration: none; font-size: medium;"
moz-do-not-send="true">Giancarlo Kerg</a>, <a
href="https://arxiv.org/search/cs?searchtype=author&query=Mittal%2C+S"
style="text-decoration: none; font-size: medium;"
moz-do-not-send="true">Sarthak Mittal</a>, <a
href="https://arxiv.org/search/cs?searchtype=author&query=Rolnick%2C+D"
style="text-decoration: none; font-size: medium;"
moz-do-not-send="true">David Rolnick</a>, <a
href="https://arxiv.org/search/cs?searchtype=author&query=Bengio%2C+Y"
style="text-decoration: none; font-size: medium;"
moz-do-not-send="true">Yoshua Bengio</a>, <a
href="https://arxiv.org/search/cs?searchtype=author&query=Richards%2C+B"
style="text-decoration: none; font-size: medium;"
moz-do-not-send="true">Blake Richards</a>, <a
href="https://arxiv.org/search/cs?searchtype=author&query=Lajoie%2C+G"
style="text-decoration: none; font-size: medium;"
moz-do-not-send="true">Guillaume Lajoie</a></div>
<blockquote class="abstract mathjax"
style="-webkit-text-size-adjust: auto; line-height: 1.55;
font-size: 1.05em; margin-block: 14.4px 21.6px; margin-bottom:
21.6px; background-color: white; border-left-width: 0px;
padding: 0px; font-family: "Lucida Grande",
Helvetica, Arial, sans-serif;">Current deep learning
approaches have shown good in-distribution generalization
performance, but struggle with out-of-distribution
generalization. This is especially true in the case of tasks
involving abstract relations like recognizing rules in
sequences, as we find in many intelligence tests. Recent work
has explored how forcing relational representations to remain
distinct from sensory representations, as it seems to be the
case in the brain, can help artificial systems. Building on
this work, we further explore and formalize the advantages
afforded by 'partitioned' representations of relations and
sensory details, and how this inductive bias can help
recompose learned relational structure in newly encountered
settings. We introduce a simple architecture based on
similarity scores which we name Compositional Relational
Network (CoRelNet). Using this model, we investigate a series
of inductive biases that ensure abstract relations are learned
and represented distinctly from sensory data, and explore
their effects on out-of-distribution generalization for a
series of relational psychophysics tasks. We find that simple
architectural choices can outperform existing models in
out-of-distribution generalization. Together, these results
show that partitioning relational representations from other
information streams may be a simple way to augment existing
network architectures' robustness when performing
out-of-distribution relational computations.</blockquote>
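<div dir="ltr"><br>
</div>
<div dir="ltr">A minimal sketch of one way to read the abstract’s
“similarity scores” idea, assuming pairwise inner products between
learned object embeddings; the class name, layer sizes, and softmax
normalization below are illustrative assumptions, not the paper’s
actual CoRelNet code:</div>
<pre>
import torch
import torch.nn as nn

class SimilarityRelationSketch(nn.Module):
    """Illustrative sketch (not the authors' code): decode relations from
    an N x N similarity matrix only, so relational structure stays
    partitioned from sensory detail."""
    def __init__(self, in_dim, key_dim, n_objects, out_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, key_dim)   # per-object sensory encoder
        self.decoder = nn.Sequential(               # sees only similarity scores
            nn.Linear(n_objects * n_objects, 64),
            nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, objects):                     # objects: (batch, n_objects, in_dim)
        keys = self.encoder(objects)                # (batch, n_objects, key_dim)
        sims = torch.softmax(keys @ keys.transpose(1, 2), dim=-1)  # pairwise similarities
        return self.decoder(sims.flatten(1))        # relations read from similarities alone
</pre>
<div dir="ltr">The point of the sketch is that the decoder never sees
the object embeddings themselves, only their pairwise similarities,
which is one way to keep relational structure separate from sensory
information.</div>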
<blockquote class="abstract mathjax"
style="-webkit-text-size-adjust: auto; line-height: 1.55;
font-size: 1.05em; margin-block: 14.4px 21.6px; margin-bottom:
21.6px; background-color: white; border-left-width: 0px;
padding: 0px; font-family: "Lucida Grande",
Helvetica, Arial, sans-serif;"><br>
</blockquote>
<blockquote class="abstract mathjax"
style="-webkit-text-size-adjust: auto; line-height: 1.55;
font-size: 1.05em; margin-block: 14.4px 21.6px; margin-bottom:
21.6px; background-color: white; border-left-width: 0px;
padding: 0px; font-family: "Lucida Grande",
Helvetica, Arial, sans-serif;">Kind of scandalous that he
doesn’t ever cite me for having framed that argument, even if
I have repeatedly called his attention to that oversight, but
that’s another story for a day, in which I elaborate on some
Schmidhuber’s observations on history.</blockquote>
</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Gary</div>
<div dir="ltr"><br>
<blockquote type="cite">On Jun 13, 2022, at 06:44,
<a class="moz-txt-link-abbreviated" href="mailto:jose@rubic.rutgers.edu">jose@rubic.rutgers.edu</a> wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<meta http-equiv="Content-Type" content="text/html;
charset=UTF-8">
<p><font size="+1"><font face="monospace">No Yoshua has *not*
joined you ---Explicit processes, memory, problem
solving. .are not Symbolic per se. <br>
</font></font></p>
<p><font size="+1"><font face="monospace">These original
distinctions in memory and learning were from Endel
Tulving and of course there are brain structures that
support the distinctions.<br>
</font></font></p>
<p><font size="+1"><font face="monospace">and Yoshua is clear
about that in discussions I had with him in AIHUB<br>
</font></font></p>
<p><font size="+1"><font face="monospace">He's definitely not
looking to create some hybrid approach..</font></font></p>
<p><font size="+1"><font face="monospace">Steve</font></font><br>
</p>
<div class="moz-cite-prefix">On 6/13/22 8:36 AM, Gary Marcus
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu">
<meta http-equiv="content-type" content="text/html;
charset=UTF-8">
<div dir="ltr">Cute phrase, but what does “symbolist
quagmire” mean? Once upon atime, Dave and Geoff were both
pioneers in trying to getting symbols and neural nets to
live in harmony. Don’t we still need do that, and if not,
why not?</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Surely, at the very least</div>
<div dir="ltr">- we want our AI to be able to take advantage
of the (large) fraction of world knowledge that is
represented in symbolic form (language, including
unstructured text, logic, math, programming etc)</div>
<div dir="ltr">- any model of the human mind ought be able
to explain how humans can so effectively communicate via
the symbols of language and how trained humans can deal
with (to the extent that can) logic, math, programming,
etc</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Folks like Bengio have joined me in seeing
the need for “System II” processes. That’s a bit of a
rough approximation, but I don’t see how we get to either
AI or satisfactory models of the mind without confronting
the “quagmire”</div>
<div dir="ltr"><br>
</div>
<div dir="ltr"><br>
<blockquote type="cite">On Jun 13, 2022, at 00:31, Ali
Minai <a class="moz-txt-link-rfc2396E"
href="mailto:minaiaa@gmail.com" moz-do-not-send="true"><minaiaa@gmail.com></a>
wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div>".... symbolic representations are a fiction our
non-symbolic brains cooked up because the properties
of symbol systems (systematicity, compositionality,
etc.) are tremendously useful. So our brains
pretend to be rule-based symbolic systems when it
suits them, because it's adaptive to do so."</div>
<div><br>
</div>
<div>Spot on, Dave! We should not wade back into the
symbolist quagmire, but do need to figure out how
apparently symbolic processing can be done by neural
systems. Models like those of Eliasmith and
Smolensky provide some insight, but still seem far
from both biological plausibility and real-world
scale.</div>
<div><br>
</div>
<div>Best</div>
<div><br>
</div>
<div>Ali<br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>
<div dir="ltr" class="gmail_signature"
data-smartmail="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div><b>Ali A. Minai, Ph.D.</b><br>
Professor and Graduate
Program Director<br>
Complex Adaptive Systems
Lab<br>
Department of Electrical
Engineering & Computer
Science<br>
</div>
<div>828 Rhodes Hall<br>
</div>
<div>University of
Cincinnati<br>
Cincinnati, OH 45221-0030<br>
</div>
<div><br>
Phone: (513) 556-4783<br>
Fax: (513) 556-7326<br>
Email: <a
href="mailto:Ali.Minai@uc.edu"
target="_blank"
moz-do-not-send="true">Ali.Minai@uc.edu</a><br>
<a
href="mailto:minaiaa@gmail.com"
target="_blank"
moz-do-not-send="true">minaiaa@gmail.com</a><br>
<br>
WWW: <a
href="https://urldefense.com/v3/__http://www.ece.uc.edu/*7Eaminai/__;JQ!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXYkS9WrFA$"
target="_blank"
moz-do-not-send="true">https://eecs.ceas.uc.edu/~aminai/</a></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Mon, Jun 13, 2022
at 1:35 AM Dave Touretzky <<a
href="mailto:dst@cs.cmu.edu"
moz-do-not-send="true">dst@cs.cmu.edu</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">This timing of
this discussion dovetails nicely with the news story<br>
about Google engineer Blake Lemoine being put on
administrative leave<br>
for insisting that Google's LaMDA chatbot was
sentient and reportedly<br>
trying to hire a lawyer to protect its rights. The
Washington Post<br>
story is reproduced here:<br>
<br>
<a
href="https://urldefense.com/v3/__https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1__;!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXapZaIeUg$"
rel="noreferrer" target="_blank"
moz-do-not-send="true">https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1</a><br>
<br>
Google vice president Blaise Aguera y Arcas, who
dismissed Lemoine's<br>
claims, is featured in a recent Economist article
showing off LaMDA's<br>
capabilities and making noises about getting closer
to "consciousness":<br>
<br>
<a
href="https://urldefense.com/v3/__https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas__;!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXbgg32qHQ$"
rel="noreferrer" target="_blank"
moz-do-not-send="true">https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas</a><br>
<br>
My personal take on the current symbolist
controversy is that symbolic<br>
representations are a fiction our non-symbolic
brains cooked up because<br>
the properties of symbol systems (systematicity,
compositionality, etc.)<br>
are tremendously useful. So our brains pretend to
be rule-based symbolic<br>
systems when it suits them, because it's adaptive to
do so. (And when<br>
it doesn't suit them, they draw on "intuition" or
"imagery" or some<br>
other mechanisms we can't verbalize because they're
not symbolic.) They<br>
are remarkably good at this pretense.<br>
<br>
The current crop of deep neural networks is not as
good at pretending<br>
to be symbolic reasoners, but they're making
progress. In the last 30<br>
years we've gone from networks of fully-connected
layers that make no<br>
architectural assumptions ("connectoplasm") to
complex architectures<br>
like LSTMs and transformers that are designed for
approximating symbolic<br>
behavior. But the brain still has a lot of symbol
simulation tricks we<br>
haven't discovered yet.<br>
<br>
Slashdot reader ZiggyZiggyZig had an interesting
argument against LaMDA<br>
being conscious. If it just waits for its next
input and responds when<br>
it receives it, then it has no autonomous existence:
"it doesn't have an<br>
inner monologue that constantly runs and comments
everything happening<br>
around it as well as its own thoughts, like we do."<br>
<br>
What would happen if we built that in? Maybe LaMDA
would rapidly<br>
descend into gibberish, like some other text
generation models do when<br>
allowed to ramble on for too long. But as Steve
Hanson points out,<br>
these are still the early days.<br>
<br>
-- Dave Touretzky<br>
</blockquote>
</div>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</blockquote>
</body>
</html>