<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body text="#000000" bgcolor="#ecca99">
    <p><font size="+1"><font face="monospace">No Yoshua has *not* joined
          you ---Explicit processes, memory, problem solving. .are not
          Symbolic per se.   <br>
        </font></font></p>
    <p><font size="+1"><font face="monospace">These original
          distinctions in memory and learning came from Endel Tulving,
          and of course there are brain structures that support the
          distinctions.<br>
        </font></font></p>
    <p><font size="+1"><font face="monospace">and Yoshua is clear about
          that in discussions I had with him in AIHUB<br>
        </font></font></p>
    <p><font size="+1"><font face="monospace">He's definitely not
          looking to create some hybrid approach.</font></font></p>
    <p><font size="+1"><font face="monospace">Steve</font></font><br>
    </p>
    <div class="moz-cite-prefix">On 6/13/22 8:36 AM, Gary Marcus wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:5B9E3497-5C1A-450B-A311-12C3122FDCC7@nyu.edu">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">Cute phrase, but what does “symbolist quagmire”
        mean? Once upon a time, Dave and Geoff were both pioneers in
        trying to get symbols and neural nets to live in harmony.
        Don’t we still need to do that, and if not, why not?</div>
      <div dir="ltr"><br>
      </div>
      <div dir="ltr">Surely, at the very least</div>
      <div dir="ltr">- we want our AI to be able to take advantage of
        the (large) fraction of world knowledge that is represented in
        symbolic form (language, including unstructured text, logic,
        math, programming, etc.)</div>
      <div dir="ltr">- any model of the human mind ought be able to
        explain how humans can so effectively communicate via the
        symbols of language and how trained humans can deal with (to the
        extent that can) logic, math, programming, etc</div>
      <div dir="ltr"><br>
      </div>
      <div dir="ltr">Folks like Bengio have joined me in seeing the need
        for “System II” processes. That’s a bit of a rough
        approximation, but I don’t see how we get to either AI or
        satisfactory models of the mind without confronting the
        “quagmire.”</div>
      <div dir="ltr"><br>
      </div>
      <div dir="ltr"><br>
        <blockquote type="cite">On Jun 13, 2022, at 00:31, Ali Minai
          <a class="moz-txt-link-rfc2396E" href="mailto:minaiaa@gmail.com"><minaiaa@gmail.com></a> wrote:<br>
          <br>
        </blockquote>
      </div>
      <blockquote type="cite">
        <div dir="ltr">
          <div dir="ltr">
            <div>"....
              symbolic representations are a fiction our non-symbolic
              brains cooked up because the properties of symbol systems
              (systematicity, compositionality, etc.) are tremendously
              useful.  So our brains pretend to be rule-based symbolic
              systems when it suits them, because it's adaptive to do
              so."</div>
            <div><br>
            </div>
            <div>Spot on, Dave! We should not wade back into the
              symbolist quagmire, but do need to figure out how
              apparently symbolic processing can be done by neural
              systems. Models like those of Eliasmith and Smolensky
              provide some insight, but still seem far from both
              biological plausibility and real-world scale.</div>
            <div><br>
            </div>
            <div>Best</div>
            <div><br>
            </div>
            <div>Ali<br>
            </div>
            <div><br>
            </div>
            <div><br>
            </div>
            <div>
              <div dir="ltr" class="gmail_signature"
                data-smartmail="gmail_signature">
                <div dir="ltr">
                  <div>
                    <div dir="ltr">
                      <div>
                        <div dir="ltr">
                          <div>
                            <div dir="ltr">
                              <div>
                                <div dir="ltr">
                                  <div>
                                    <div dir="ltr">
                                      <div><b>Ali A. Minai, Ph.D.</b><br>
                                        Professor and Graduate Program
                                        Director<br>
                                        Complex Adaptive Systems Lab<br>
                                        Department of Electrical
                                        Engineering & Computer
                                        Science<br>
                                      </div>
                                      <div>828 Rhodes Hall<br>
                                      </div>
                                      <div>University of Cincinnati<br>
                                        Cincinnati, OH 45221-0030<br>
                                      </div>
                                      <div><br>
                                        Phone: (513) 556-4783<br>
                                        Fax: (513) 556-7326<br>
                                        Email: <a
                                          href="mailto:Ali.Minai@uc.edu"
                                          target="_blank"
                                          moz-do-not-send="true">Ali.Minai@uc.edu</a><br>
                                                  <a
                                          href="mailto:minaiaa@gmail.com"
                                          target="_blank"
                                          moz-do-not-send="true">minaiaa@gmail.com</a><br>
                                        <br>
                                        WWW: <a
href="https://urldefense.com/v3/__http://www.ece.uc.edu/*7Eaminai/__;JQ!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXYkS9WrFA$"
                                          target="_blank"
                                          moz-do-not-send="true">https://eecs.ceas.uc.edu/~aminai/</a></div>
                                    </div>
                                  </div>
                                </div>
                              </div>
                            </div>
                          </div>
                        </div>
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </div>
            <br>
          </div>
          <br>
          <div class="gmail_quote">
            <div dir="ltr" class="gmail_attr">On Mon, Jun 13, 2022 at
              1:35 AM Dave Touretzky <<a href="mailto:dst@cs.cmu.edu"
                moz-do-not-send="true">dst@cs.cmu.edu</a>> wrote:<br>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
              0.8ex;border-left:1px solid
              rgb(204,204,204);padding-left:1ex">The timing of this
              discussion dovetails nicely with the news story<br>
              about Google engineer Blake Lemoine being put on
              administrative leave<br>
              for insisting that Google's LaMDA chatbot was sentient and
              reportedly<br>
              trying to hire a lawyer to protect its rights.  The
              Washington Post<br>
              story is reproduced here:<br>
              <br>
                <a
href="https://urldefense.com/v3/__https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1__;!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXapZaIeUg$"
                rel="noreferrer" target="_blank" moz-do-not-send="true">https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1</a><br>
              <br>
              Google vice president Blaise Aguera y Arcas, who dismissed
              Lemoine's<br>
              claims, is featured in a recent Economist article showing
              off LaMDA's<br>
              capabilities and making noises about getting closer to
              "consciousness":<br>
              <br>
                <a
href="https://urldefense.com/v3/__https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas__;!!BhJSzQqDqA!UCEp_V8mv7wMFGacqyo0e5J8KbCnjHTDVRykqi1DQgMu87m5dBCpbcV6s4bv6xkTdlkwJmvlIXbgg32qHQ$"
                rel="noreferrer" target="_blank" moz-do-not-send="true">https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas</a><br>
              <br>
              My personal take on the current symbolist controversy is
              that symbolic<br>
              representations are a fiction our non-symbolic brains
              cooked up because<br>
              the properties of symbol systems (systematicity,
              compositionality, etc.)<br>
              are tremendously useful.  So our brains pretend to be
              rule-based symbolic<br>
              systems when it suits them, because it's adaptive to do
              so.  (And when<br>
              it doesn't suit them, they draw on "intuition" or
              "imagery" or some<br>
              other mechanisms we can't verbalize because they're not
              symbolic.)  They<br>
              are remarkably good at this pretense.<br>
              <br>
              The current crop of deep neural networks is not as good
              at pretending<br>
              to be symbolic reasoners, but they're making progress.  In
              the last 30<br>
              years we've gone from networks of fully-connected layers
              that make no<br>
              architectural assumptions ("connectoplasm") to complex
              architectures<br>
              like LSTMs and transformers that are designed for
              approximating symbolic<br>
              behavior.  But the brain still has a lot of symbol
              simulation tricks we<br>
              haven't discovered yet.<br>
              <br>
              Slashdot reader ZiggyZiggyZig had an interesting argument
              against LaMDA<br>
              being conscious.  If it just waits for its next input and
              responds when<br>
              it receives it, then it has no autonomous existence: "it
              doesn't have an<br>
              inner monologue that constantly runs and comments
              everything happening<br>
              around it as well as its own thoughts, like we do."<br>
              <br>
              What would happen if we built that in?  Maybe LaMDA would
              rapidly<br>
              descend into gibberish, like some other text generation
              models do when<br>
              allowed to ramble on for too long.  But as Steve Hanson
              points out,<br>
              these are still the early days.<br>
              <br>
              -- Dave Touretzky<br>
            </blockquote>
          </div>
        </div>
      </blockquote>
    </blockquote>
  </body>
</html>