<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body text="#000000" bgcolor="#ecca99">
    <p><font size="+1"><font face="monospace">Nope.  But lets take this
          offline as one of us is confused.</font></font><br>
    </p>
    <div class="moz-cite-prefix">On 6/13/22 1:58 PM, Gary Marcus wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:10485004-EEC1-429D-9123-5F1075AB7444@nyu.edu">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">I think you are conflating Bengio’s views with
        Kahneman’s</div>
      <div dir="ltr"><br>
      </div>
      <div dir="ltr">Bengio wants to have a System I, which he thinks is
        not the same as System II. He doesn’t want System II to be
        symbol-based, but he does want to do many things that symbols
        have historically done. That is an ambition, and we can see how
        it goes. My impression is he is on a road towards recapitulating
        a lot of historically symbolic tools, such as key-value pairs
        and operations that work over their pairs. We will see where he
        gets to; it’s an interesting projects.</div>
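      <div dir="ltr"><br>
      </div>
      <div dir="ltr">To make the key-value point concrete, here is a toy
        contrast between a symbolic table lookup and its “soft” neural
        counterpart, dot-product attention over stored key/value vectors.
        This is only a sketch of the analogy; the names and numbers are
        illustrative, not anyone’s actual model.</div>
      <div dir="ltr"><br>
      </div>
      <pre>
# Illustrative only: a symbolic dictionary lookup vs. a "soft" neural
# counterpart (dot-product attention over stored key-value pairs).
import torch

# Symbolic version: exact key match retrieves exactly one value.
table = {"capital_of_france": "Paris"}
print(table["capital_of_france"])

# Neural version: keys, values, and queries are vectors; retrieval is a
# similarity-weighted blend of the stored values.
keys   = torch.randn(5, 8)                         # 5 stored keys, dim 8
values = torch.randn(5, 8)                         # the matching values
query  = keys[2] + 0.1 * torch.randn(8)            # a noisy probe for entry 2
weights   = torch.softmax(query @ keys.T, dim=-1)  # match score for each key
retrieved = weights @ values                       # soft lookup result
print(weights)  # most of the mass should sit on entry 2
      </pre>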
      <div dir="ltr"><br>
      </div>
      <div dir="ltr">Kahneman coined the terms; I prefer to call them
        Reflexive and Deliberative. In my view deliberation of that sort
        requires symbols. For what it’s worth Kahneman was enormously
        sympathetic (both publicly and in an email) to my paper the Next
        Decade in AI, in which I argued that one needed a neurosymbolic
        system with rich knowledge, and reasoning over detailed
        cognitive models. </div>
      <div dir="ltr"><br>
      </div>
      <div dir="ltr">It’s all an empirical question as to what can be
        done. </div>
      <div dir="ltr"><br>
      </div>
      <div dir="ltr">I guess “he” refers below to Bengio, but not to
        Kahneman who originated the System I/II distinction. Danny is
        open about how these things cache out, and would also be the
        first to tell you that the distinction is just a rough one, in
        any event.</div>
      <div dir="ltr"><br>
      </div>
      <div dir="ltr">Gary</div>
      <div dir="ltr"><br>
        <blockquote type="cite">On Jun 13, 2022, at 10:37,
          <a class="moz-txt-link-abbreviated" href="mailto:jose@rubic.rutgers.edu">jose@rubic.rutgers.edu</a> wrote:<br>
          <br>
        </blockquote>
      </div>
      <blockquote type="cite">
        <div dir="ltr">
          <meta http-equiv="Content-Type" content="text/html;
            charset=UTF-8">
          <p><font size="+1"><font face="monospace">Well. your
                conclusion is based on some hearsay and a talk he gave,
                I talked with him directly and we discussed what</font></font></p>
          <p><font size="+1"><font face="monospace">you are calling
                SystemII which just means explicit memory/learning to me
                and him.. he has no intention of incorporating anything
                like symbols or</font></font></p>
          <p><font size="+1"><font face="monospace">hybrid Neural/Symbol
                systems..    he does intend on modeling conscious symbol
                manipulation. more in the way Dave T. outlined.<br>
              </font></font></p>
          <p><font size="+1"><font face="monospace">AND, I'm sure if he
                was seeing this.. he would say... "Steve's right".</font></font></p>
          <p><font size="+1"><font face="monospace">Steve</font></font><br>
          </p>
          <div class="moz-cite-prefix">On 6/13/22 1:10 PM, Gary Marcus
            wrote:<br>
          </div>
          <blockquote type="cite"
            cite="mid:73794971-57E3-42E8-9465-2E669B8E951C@nyu.edu">
            <meta http-equiv="content-type" content="text/html;
              charset=UTF-8">
            <div dir="ltr">I don’t think i need to read your
              conversation to have serious doubts about your conclusion,
              but feel free to reprise the arguments here.   </div>
            <div dir="ltr"><br>
              <blockquote type="cite">On Jun 13, 2022, at 08:44, <a
                  class="moz-txt-link-abbreviated"
                  href="mailto:jose@rubic.rutgers.edu"
                  moz-do-not-send="true">jose@rubic.rutgers.edu</a>
                wrote:<br>
                <br>
              </blockquote>
            </div>
            <blockquote type="cite">
              <div dir="ltr">
                <meta http-equiv="Content-Type" content="text/html;
                  charset=UTF-8">
                <p><font size="+1"><font face="monospace">We prefer the
                      explicit/implicit cognitive psych refs. but System
                      II is not symbolic.</font></font></p>
                <p><font size="+1"><font face="monospace">See the AIHUB
                      conversation about this.. we discuss this
                      specifically.</font></font></p>
                <p><font size="+1"><font face="monospace"><br>
                    </font></font></p>
                <p><font size="+1"><font face="monospace">Steve</font></font></p>
                <p><br>
                </p>
                <div class="moz-cite-prefix">On 6/13/22 10:00 AM, Gary
                  Marcus wrote:<br>
                </div>
                <blockquote type="cite"
                  cite="mid:5FE7AD49-0551-4E83-8530-5DC88337E22A@nyu.edu">
                  <meta http-equiv="content-type" content="text/html;
                    charset=UTF-8">
                  <div dir="ltr">Please reread my sentence and reread
                    his recent work. Bengio has absolutely joined in
                    calling for System II processes. A sample is his 2019
                    NeurIPS keynote: <a
href="https://urldefense.com/v3/__https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/__;!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVOgyztpc$"
                      moz-do-not-send="true">https://www.newworldai.com/system-1-deep-learning-system-2-deep-learning-yoshua-bengio/</a></div>
                  <div dir="ltr"><br>
                  </div>
                  <div dir="ltr">Whether he wants to call it a hybrid
                    approach is his business but he certainly sees that
                    traditional approaches are not covering things like
                    causality and abstract generalization. Maybe he will
                    find a new way, but he recognizes what has not been
                    covered with existing ways. </div>
                  <div dir="ltr"><br>
                  </div>
                  <div dir="ltr">And he is emphasizing both
                    relationships and out of distribution learning, just
                    as I have been for a long time. From his most recent
                    arXiv a few days ago, the first two sentences of
                    which sounds almost exactly like what I have been
                    saying for years:</div>
                  <div dir="ltr"><br>
                  </div>
                  <div dir="ltr">
                    <div class="dateline"
                      style="-webkit-text-size-adjust: auto; margin:
                      15px 0px 0px 20px; font-style: italic; font-size:
                      0.9em; font-family: "Lucida Grande",
                      Helvetica, Arial, sans-serif;">[Submitted on 9 Jun
                      2022]</div>
                    <h1 class="title mathjax"
                      style="-webkit-text-size-adjust: auto;
                      line-height: 27.99359893798828px; margin-block:
                      12px; margin: 0.25em 0px 12px 20px;
                      margin-inline-start: 20px; font-family:
                      "Lucida Grande", Helvetica, Arial,
                      sans-serif; font-size: 1.8em !important;">On
                      Neural Architecture Inductive Biases for
                      Relational Tasks</h1>
                    <div class="authors"
                      style="-webkit-text-size-adjust: auto; margin: 8px
                      0px 8px 20px; font-size: 1.2em; line-height: 24px;
                      font-family: "Lucida Grande", Helvetica,
                      Arial, sans-serif;"><a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Kerg*2C*G__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEV3gZmAsw$"
                        style="text-decoration: none; font-size:
                        medium;" moz-do-not-send="true">Giancarlo Kerg</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Mittal*2C*S__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVLC65Ftc$"
                        style="text-decoration: none; font-size:
                        medium;" moz-do-not-send="true">Sarthak Mittal</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Rolnick*2C*D__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVsXExRpc$"
                        style="text-decoration: none; font-size:
                        medium;" moz-do-not-send="true">David Rolnick</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Bengio*2C*Y__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVTTRf_9g$"
                        style="text-decoration: none; font-size:
                        medium;" moz-do-not-send="true">Yoshua Bengio</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Richards*2C*B__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVnyKkuNY$"
                        style="text-decoration: none; font-size:
                        medium;" moz-do-not-send="true">Blake Richards</a>, <a
href="https://urldefense.com/v3/__https://arxiv.org/search/cs?searchtype=author&query=Lajoie*2C*G__;JSs!!BhJSzQqDqA!XG4zEf0hOZijhGBf_sFhhbkQzKlArmTaaBCbKV2h_BBa3TSeO_Be99dqthIiW9gcQf1n4qpT0YBNFXEVa03VLYM$"
                        style="text-decoration: none; font-size:
                        medium;" moz-do-not-send="true">Guillaume Lajoie</a></div>
                    <blockquote class="abstract mathjax"
                      style="-webkit-text-size-adjust: auto;
                      line-height: 1.55; font-size: 1.05em;
                      margin-block: 14.4px 21.6px; margin-bottom:
                      21.6px; background-color: white;
                      border-left-width: 0px; padding: 0px; font-family:
                      "Lucida Grande", Helvetica, Arial,
                      sans-serif;">Current deep learning approaches have
                      shown good in-distribution generalization
                      performance, but struggle with out-of-distribution
                      generalization. This is especially true in the
                      case of tasks involving abstract relations like
                      recognizing rules in sequences, as we find in many
                      intelligence tests. Recent work has explored how
                      forcing relational representations to remain
                      distinct from sensory representations, as it seems
                      to be the case in the brain, can help artificial
                      systems. Building on this work, we further explore
                      and formalize the advantages afforded by
                      'partitioned' representations of relations and
                      sensory details, and how this inductive bias can
                      help recompose learned relational structure in
                      newly encountered settings. We introduce a simple
                      architecture based on similarity scores which we
                      name Compositional Relational Network (CoRelNet).
                      Using this model, we investigate a series of
                      inductive biases that ensure abstract relations
                      are learned and represented distinctly from
                      sensory data, and explore their effects on
                      out-of-distribution generalization for a series of
                      relational psychophysics tasks. We find that
                      simple architectural choices can outperform
                      existing models in out-of-distribution
                      generalization. Together, these results show that
                      partitioning relational representations from other
                      information streams may be a simple way to augment
                      existing network architectures' robustness when
                      performing out-of-distribution relational
                      computations.</blockquote>
                    <blockquote class="abstract mathjax"
                      style="-webkit-text-size-adjust: auto;
                      line-height: 1.55; font-size: 1.05em;
                      margin-block: 14.4px 21.6px; margin-bottom:
                      21.6px; background-color: white;
                      border-left-width: 0px; padding: 0px; font-family:
                      "Lucida Grande", Helvetica, Arial,
                      sans-serif;"><br>
                    </blockquote>
                    <blockquote class="abstract mathjax"
                      style="-webkit-text-size-adjust: auto;
                      line-height: 1.55; font-size: 1.05em;
                      margin-block: 14.4px 21.6px; margin-bottom:
                      21.6px; background-color: white;
                      border-left-width: 0px; padding: 0px; font-family:
                      "Lucida Grande", Helvetica, Arial,
                      sans-serif;">Kind of scandalous that he doesn’t
                      ever cite me for having framed that argument, even
                      if I have repeatedly called his attention to that
                      oversight, but that’s another story for a day, in
                      which I elaborate on some Schmidhuber’s
                      observations on history.</blockquote>
                  </div>
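                  <div dir="ltr"><br>
                  </div>
                  <div dir="ltr">For concreteness, here is a minimal sketch of
                    the similarity-score idea as I read the abstract: encode
                    each object, compute an object-by-object similarity
                    matrix, and let the task decoder see only that matrix, so
                    that relational structure stays partitioned from sensory
                    detail. The class name, dimensions, and decoder are my own
                    illustrative choices, not the authors’ code.</div>
                  <div dir="ltr"><br>
                  </div>
                  <pre>
# A rough sketch of a similarity-score relational model in the spirit of
# CoRelNet as described in the abstract above; all details here (names,
# sizes, the MLP decoder) are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class SimilarityRelationalNet(nn.Module):
    def __init__(self, input_dim, embed_dim, num_objects, num_classes):
        super().__init__()
        # Sensory encoder: maps each object to an embedding.
        self.encoder = nn.Linear(input_dim, embed_dim)
        # The decoder sees ONLY the object-object similarity matrix,
        # keeping relational structure separate from sensory detail.
        self.decoder = nn.Sequential(
            nn.Linear(num_objects * num_objects, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        # x: (batch, num_objects, input_dim)
        z = self.encoder(x)                                   # (batch, n, d)
        sim = torch.softmax(z @ z.transpose(1, 2), dim=-1)    # (batch, n, n) similarity scores
        return self.decoder(sim.flatten(start_dim=1))         # predict from relations only

# Toy usage: 4 objects of dimension 16, a 2-way relational judgment.
model = SimilarityRelationalNet(input_dim=16, embed_dim=32, num_objects=4, num_classes=2)
out = model(torch.randn(8, 4, 16))   # shape (8, 2)
                  </pre>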
                  <div dir="ltr"><br>
                  </div>
                  <div dir="ltr">Gary</div>
                  <div dir="ltr"><br>
                    <blockquote type="cite">On Jun 13, 2022, at 06:44, <a
                        class="moz-txt-link-abbreviated"
                        href="mailto:jose@rubic.rutgers.edu"
                        moz-do-not-send="true">jose@rubic.rutgers.edu</a>
                      wrote:<br>
                      <br>
                    </blockquote>
                  </div>
                  <blockquote type="cite">
                    <div dir="ltr">
                      <meta http-equiv="Content-Type"
                        content="text/html; charset=UTF-8">
                      <p><font size="+1"><font face="monospace">No
                            Yoshua has *not* joined you ---Explicit
                            processes, memory, problem solving. .are not
                            Symbolic per se.   <br>
                          </font></font></p>
                      <p><font size="+1"><font face="monospace">These
                            original distinctions in memory and learning
                            were  from Endel Tulving and of course there
                            are brain structures that support the
                            distinctions.<br>
                          </font></font></p>
                      <p><font size="+1"><font face="monospace">and
                            Yoshua is clear about that in discussions I
                            had with him in AIHUB<br>
                          </font></font></p>
                      <p><font size="+1"><font face="monospace">He's
                            definitely not looking to create some hybrid
                            approach..</font></font></p>
                      <p><font size="+1"><font face="monospace">Steve</font></font><br>
                      </p>
                      <div class="moz-cite-prefix">On 6/13/22 8:36 AM,
                        Gary Marcus wrote:<br>
                      </div>
                      <blockquote type="cite"
                        cite="mid:5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu">
                        <meta http-equiv="content-type"
                          content="text/html; charset=UTF-8">
                        <div dir="ltr">Cute phrase, but what does
                          “symbolist quagmire” mean? Once upon  atime,
                          Dave and Geoff were both pioneers in trying to
                          getting symbols and neural nets to live in
                          harmony. Don’t we still need do that, and if
                          not, why not?</div>
                        <div dir="ltr"><br>
                        </div>
                        <div dir="ltr">Surely, at the very least</div>
                        <div dir="ltr">- we want our AI to be able to
                          take advantage of the (large) fraction of
                          world knowledge that is represented in
                          symbolic form (language, including
                          unstructured text, logic, math, programming
                          etc)</div>
                        <div dir="ltr">- any model of the human mind
                          ought be able to explain how humans can so
                          effectively communicate via the symbols of
                          language and how trained humans can deal with
                          (to the extent that can) logic, math,
                          programming, etc</div>
                        <div dir="ltr"><br>
                        </div>
                        <div dir="ltr">Folks like Bengio have joined me
                          in seeing the need for “System II” processes.
                          That’s a bit of a rough approximation, but I
                          don’t see how we get to either AI or
                          satisfactory models of the mind without
                          confronting the “quagmire”</div>
                        <div dir="ltr"><br>
                        </div>
                        <div dir="ltr"><br>
                          <blockquote type="cite">On Jun 13, 2022, at
                            00:31, Ali Minai <a
                              class="moz-txt-link-rfc2396E"
                              href="mailto:minaiaa@gmail.com"
                              moz-do-not-send="true"><minaiaa@gmail.com></a>
                            wrote:<br>
                            <br>
                          </blockquote>
                        </div>
                        <blockquote type="cite">
                          <div dir="ltr">
                            <div dir="ltr">
                              <div>".... symbolic representations are a
                                fiction our non-symbolic brains cooked
                                up because the properties of symbol
                                systems (systematicity,
                                compositionality, etc.) are tremendously
                                useful.  So our brains pretend to be
                                rule-based symbolic systems when it
                                suits them, because it's adaptive to do
                                so."</div>
                              <div><br>
                              </div>
                              <div>Spot on, Dave! We should not wade
                                back into the symbolist quagmire, but do
                                need to figure out how apparently
                                symbolic processing can be done by
                                neural systems. Models like those of
                                Eliasmith and Smolensky provide some
                                insight, but still seem far from both
                                biological plausibility and real-world
                                scale.</div>
                              <div><br>
                              </div>
                              <div>Best</div>
                              <div><br>
                              </div>
                              <div>Ali<br>
                              </div>
                              <div><br>
                              </div>
                              <div><br>
                              </div>
                              <div>
                                <div dir="ltr" class="gmail_signature"
                                  data-smartmail="gmail_signature">
                                  <div dir="ltr">
                                    <div>
                                      <div dir="ltr">
                                        <div>
                                          <div dir="ltr">
                                            <div>
                                              <div dir="ltr">
                                                <div>
                                                  <div dir="ltr">
                                                    <div>
                                                      <div dir="ltr">
                                                        <div><b>Ali A.
                                                          Minai, Ph.D.</b><br>
                                                          Professor and
                                                          Graduate
                                                          Program
                                                          Director<br>
                                                          Complex
                                                          Adaptive
                                                          Systems Lab<br>
                                                          Department of
                                                          Electrical
                                                          Engineering
                                                          & Computer
                                                          Science<br>
                                                        </div>
                                                        <div>828 Rhodes
                                                          Hall<br>
                                                        </div>
                                                        <div>University
                                                          of Cincinnati<br>
                                                          Cincinnati, OH
                                                          45221-0030<br>
                                                        </div>
                                                        <div><br>
                                                          Phone: (513)
                                                          556-4783<br>
                                                          Fax: (513)
                                                          556-7326<br>
                                                          Email: <a
                                                          href="mailto:Ali.Minai@uc.edu"
target="_blank" moz-do-not-send="true">Ali.Minai@uc.edu</a><br>
                                                                    <a
href="mailto:minaiaa@gmail.com" target="_blank" moz-do-not-send="true">minaiaa@gmail.com</a><br>
                                                          <br>
                                                          WWW: <a
href="https://urldefense.com/v3/__http://www.ece.uc.edu/*7Eaminai/__;JQ!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXYkS9WrFA$"
target="_blank" moz-do-not-send="true">https://eecs.ceas.uc.edu/~aminai/</a></div>
                                                      </div>
                                                    </div>
                                                  </div>
                                                </div>
                                              </div>
                                            </div>
                                          </div>
                                        </div>
                                      </div>
                                    </div>
                                  </div>
                                </div>
                              </div>
                              <br>
                            </div>
                            <br>
                            <div class="gmail_quote">
                              <div dir="ltr" class="gmail_attr">On Mon,
                                Jun 13, 2022 at 1:35 AM Dave Touretzky
                                <<a href="mailto:dst@cs.cmu.edu"
                                  moz-do-not-send="true">dst@cs.cmu.edu</a>>
                                wrote:<br>
                              </div>
                              <blockquote class="gmail_quote"
                                style="margin:0px 0px 0px
                                0.8ex;border-left:1px solid
                                rgb(204,204,204);padding-left:1ex">The
                                timing of this discussion dovetails
                                nicely with the news story<br>
                                about Google engineer Blake Lemoine
                                being put on administrative leave<br>
                                for insisting that Google's LaMDA
                                chatbot was sentient and reportedly<br>
                                trying to hire a lawyer to protect its
                                rights.  The Washington Post<br>
                                story is reproduced here:<br>
                                <br>
                                  <a
href="https://urldefense.com/v3/__https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1__;!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXapZaIeUg$"
                                  rel="noreferrer" target="_blank"
                                  moz-do-not-send="true">https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1</a><br>
                                <br>
                                Google vice president Blaise Aguera y
                                Arcas, who dismissed Lemoine's<br>
                                claims, is featured in a recent
                                Economist article showing off LaMDA's<br>
                                capabilities and making noises about
                                getting closer to "consciousness":<br>
                                <br>
                                  <a
href="https://urldefense.com/v3/__https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas__;!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXbgg32qHQ$"
                                  rel="noreferrer" target="_blank"
                                  moz-do-not-send="true">https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas</a><br>
                                <br>
                                My personal take on the current
                                symbolist controversy is that symbolic<br>
                                representations are a fiction our
                                non-symbolic brains cooked up because<br>
                                the properties of symbol systems
                                (systematicity, compositionality, etc.)<br>
                                are tremendously useful.  So our brains
                                pretend to be rule-based symbolic<br>
                                systems when it suits them, because it's
                                adaptive to do so.  (And when<br>
                                it doesn't suit them, they draw on
                                "intuition" or "imagery" or some<br>
                                other mechanisms we can't verbalize
                                because they're not symbolic.)  They<br>
                                are remarkably good at this pretense.<br>
                                <br>
                                The current crop of deep neural networks
                                are not as good at pretending<br>
                                to be symbolic reasoners, but they're
                                making progress.  In the last 30<br>
                                years we've gone from networks of
                                fully-connected layers that make no<br>
                                architectural assumptions
                                ("connectoplasm") to complex
                                architectures<br>
                                like LSTMs and transformers that are
                                designed for approximating symbolic<br>
                                behavior.  But the brain still has a lot
                                of symbol simulation tricks we<br>
                                haven't discovered yet.<br>
                                <br>
                                Slashdot reader ZiggyZiggyZig had an
                                interesting argument against LaMDA<br>
                                being conscious.  If it just waits for
                                its next input and responds when<br>
                                it receives it, then it has no
                                autonomous existence: "it doesn't have
                                an<br>
                                inner monologue that constantly runs and
                                comments everything happening<br>
                                around it as well as its own thoughts,
                                like we do."<br>
                                <br>
                                What would happen if we built that in? 
                                Maybe LaMDA would rapidly<br>
                                 descend into gibberish, like some other
                                text generation models do when<br>
                                allowed to ramble on for too long.  But
                                as Steve Hanson points out,<br>
                                these are still the early days.<br>
                                <br>
                                -- Dave Touretzky<br>
                              </blockquote>
                            </div>
                          </div>
                        </blockquote>
                      </blockquote>
                    </div>
                  </blockquote>
                </blockquote>
              </div>
            </blockquote>
          </blockquote>
        </div>
      </blockquote>
    </blockquote>
  </body>
</html>