<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body text="#000000" bgcolor="#ecca99">
    <p><font size="+1" face="monospace">I agree David, Ali, this is s
        succinct way of putting the neuroscience/cognitive problem.<br>
      </font></p>
    <p><font size="+1" face="monospace">It also underlies the very
        reason why "hybrid" systems or approaches in the end makes no
        sense.<br>
      </font></p>
    <p><font size="+1">I think, on the other hand, the rush to
        consciousness of transformers and the laMDA (Lemoine's "friend"
        in his computer) is a also a need to capture symbol processing
        just through claims of human-like performance without the 
        serious toil this will take in the future.</font></p>
    <p><font size="+1">Again, I think a relevant project here  would be
        to attempt to replicate with DL-rnn, Yang and Piatiadosi's PNAS
        language learning system--which is a completely symbolic-- and
        very general over the Chomsky-Miller grammer classes.   Let me
        know, happy to collaborate on something like this.<br>
      </font></p>
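    <p><font size="+1">To make that concrete, here is only a toy sketch,
        not the Yang-Piantadosi model itself: train a small recurrent net
        on next-symbol prediction over strings drawn from one formal
        language at a time, then move up the grammar classes. All names,
        sizes, and the training loop below are illustrative (PyTorch
        assumed):</font></p>
    <pre>
# Toy sketch (not Yang and Piantadosi's symbolic learner): train a small
# LSTM on next-symbol prediction over strings from a context-free language
# (a^n b^n), one of the simplest probes along the Chomsky hierarchy.
import random
import torch
import torch.nn as nn

VOCAB = {"a": 0, "b": 1, "end": 2}

def sample_anbn(max_n=8):
    """Sample one string of the form a^n b^n, with an end marker."""
    n = random.randint(1, max_n)
    return ["a"] * n + ["b"] * n + ["end"]

class NextSymbolLSTM(nn.Module):
    def __init__(self, vocab_size=3, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h)

model = NextSymbolLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    seq = [VOCAB[s] for s in sample_anbn()]
    x = torch.tensor([seq[:-1]])          # input: all symbols but the last
    y = torch.tensor([seq[1:]])           # target: all symbols but the first
    loss = loss_fn(model(x).view(-1, 3), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step}  loss {loss.item():.3f}")
    </pre>
    <p><font size="+1">The interesting comparison would then be whether a
        single such network handles the regular, context-free, and
        context-sensitive cases as uniformly as the symbolic learner does,
        and whether it generalizes to longer strings than it was trained
        on.</font></p>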
    <p><font size="+1">Best</font></p>
    <p><font size="+1">Steve<br>
      </font></p>
    <p><font size="+1"></font><br>
    </p>
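    <p><font size="+1">On Dave's point below about an inner monologue:
        purely as a toy sketch of what "building that in" might look
        like, one could let the system keep generating and feed its own
        output back in as context instead of waiting for a user turn.
        The generate() and observe() functions here are stand-ins, since
        LaMDA's actual interface is not public:</font></p>
    <pre>
# Toy self-prompting loop: the model comments on a rolling window of its
# own recent output plus any new observations, rather than waiting for
# input. generate() and observe() are placeholders, not a real model API.
import collections
import itertools
import time

def generate(context):
    """Stand-in for a language-model call; returns one 'thought'."""
    return f"(thought about: {context[-1] if context else 'nothing yet'})"

def observe():
    """Stand-in for perceptual input; returns None when nothing happens."""
    return None

context = collections.deque(maxlen=50)   # rolling window of recent text

for step in itertools.count():
    event = observe()
    if event is not None:
        context.append(f"observed: {event}")
    thought = generate(list(context))    # comment on the current context
    context.append(thought)
    print(thought)
    time.sleep(1.0)                      # pace the monologue
    if step == 4:                        # stop this demo after a few turns
        break
    </pre>
    <p><font size="+1">Whether such a loop drifts into gibberish, as Dave
        suggests it might, presumably depends on how the context window is
        pruned and what gets fed back in.</font></p>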
    <div class="moz-cite-prefix">On 6/13/22 2:31 AM, Ali Minai wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CABG3s4tP+7+fC215Map996ptQb3YKPk_z+KQGQoQ+bLRAyYrhw@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">
        <div>"....
          symbolic representations are a fiction our non-symbolic brains
          cooked up because the properties of symbol systems
          (systematicity, compositionality, etc.) are tremendously
          useful.  So our brains pretend to be rule-based symbolic
          systems when it suits them, because it's adaptive to do so."</div>
        <div><br>
        </div>
        <div>Spot on, Dave! We should not wade back into the symbolist
          quagmire, but do need to figure out how apparently symbolic
          processing can be done by neural systems. Models like those of
          Eliasmith and Smolensky provide some insight, but still seem
          far from both biological plausibility and real-world scale.</div>
        <div><br>
        </div>
        <div>Best</div>
        <div><br>
        </div>
        <div>Ali<br>
        </div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>
          <div dir="ltr" class="gmail_signature"
            data-smartmail="gmail_signature">
            <div dir="ltr">
              <div><b>Ali A. Minai, Ph.D.</b><br>
                Professor and Graduate Program Director<br>
                Complex Adaptive Systems Lab<br>
                Department of Electrical Engineering &amp; Computer Science<br>
              </div>
              <div>828 Rhodes Hall<br>
              </div>
              <div>University of Cincinnati<br>
                Cincinnati, OH 45221-0030<br>
              </div>
              <div><br>
                Phone: (513) 556-4783<br>
                Fax: (513) 556-7326<br>
                Email: <a href="mailto:Ali.Minai@uc.edu" target="_blank"
                  moz-do-not-send="true">Ali.Minai@uc.edu</a><br>
                <a href="mailto:minaiaa@gmail.com" target="_blank"
                  moz-do-not-send="true">minaiaa@gmail.com</a><br>
                <br>
                WWW: <a href="http://www.ece.uc.edu/%7Eaminai/"
                  target="_blank"
                  moz-do-not-send="true">https://eecs.ceas.uc.edu/~aminai/</a></div>
            </div>
          </div>
        </div>
        <br>
      </div>
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Mon, Jun 13, 2022 at 1:35
          AM Dave Touretzky <<a href="mailto:dst@cs.cmu.edu"
            moz-do-not-send="true">dst@cs.cmu.edu</a>> wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px
          0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">This
          timing of this discussion dovetails nicely with the news story<br>
          about Google engineer Blake Lemoine being put on
          administrative leave<br>
          for insisting that Google's LaMDA chatbot was sentient and
          reportedly<br>
          trying to hire a lawyer to protect its rights.  The Washington
          Post<br>
          story is reproduced here:<br>
          <br>
            <a
href="https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1"
            rel="noreferrer" target="_blank" moz-do-not-send="true">https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1</a><br>
          <br>
          Google vice president Blaise Aguera y Arcas, who dismissed
          Lemoine's<br>
          claims, is featured in a recent Economist article showing off
          LaMDA's<br>
          capabilities and making noises about getting closer to
          "consciousness":<br>
          <br>
            <a
href="https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas"
            rel="noreferrer" target="_blank" moz-do-not-send="true">https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas</a><br>
          <br>
          My personal take on the current symbolist controversy is that
          symbolic<br>
          representations are a fiction our non-symbolic brains cooked
          up because<br>
          the properties of symbol systems (systematicity,
          compositionality, etc.)<br>
          are tremendously useful.  So our brains pretend to be
          rule-based symbolic<br>
          systems when it suits them, because it's adaptive to do so. 
          (And when<br>
          it doesn't suit them, they draw on "intuition" or "imagery" or
          some<br>
          other mechanisms we can't verbalize because they're not
          symbolic.)  They<br>
          are remarkably good at this pretense.<br>
          <br>
          The current crop of deep neural networks are not as good at
          pretending<br>
          to be symbolic reasoners, but they're making progress.  In the
          last 30<br>
          years we've gone from networks of fully-connected layers that
          make no<br>
          architectural assumptions ("connectoplasm") to complex
          architectures<br>
          like LSTMs and transformers that are designed for
          approximating symbolic<br>
          behavior.  But the brain still has a lot of symbol simulation
          tricks we<br>
          haven't discovered yet.<br>
          <br>
          Slashdot reader ZiggyZiggyZig had an interesting argument
          against LaMDA<br>
          being conscious.  If it just waits for its next input and
          responds when<br>
          it receives it, then it has no autonomous existence: "it
          doesn't have an<br>
          inner monologue that constantly runs and comments everything
          happening<br>
          around it as well as its own thoughts, like we do."<br>
          <br>
          What would happen if we built that in?  Maybe LaMDA would
          rapidly<br>
          descend into gibberish, like some other text generation models
          do when<br>
          allowed to ramble on for too long.  But as Steve Hanson points
          out,<br>
          these are still the early days.<br>
          <br>
          -- Dave Touretzky<br>
        </blockquote>
      </div>
    </blockquote>
  </body>
</html>