<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <div class="moz-cite-prefix">see also "The Roles of Symbols in
      Neural-Based AI: They Are Not What You Think!" by Daniel L.
      Silver, Tom M. Mitchell<br>
      <a class="moz-txt-link-freetext" href="https://ebooks.iospress.nl/volumearticle/63710">https://ebooks.iospress.nl/volumearticle/63710</a><br>
      <br>
      <div class="abstract"> <b>Abstract</b><br>
        <section>
          <p>We propose that symbols are first and foremost external
            communication tools used between intelligent agents that
            allow knowledge to be transferred in a more efficient and
            effective manner than having to experience the world
            directly. But, they are also used internally within an agent
            through a form of self-communication to help formulate,
            describe and justify subsymbolic patterns of neural activity
            that truly implement thinking. Symbols, and our languages
            that make use of them, not only allow us to explain our
            thinking to others and ourselves, but also provide
            beneficial constraints (inductive bias) on learning about
            the world. In this paper we present relevant insights from
            neuroscience and cognitive science, about how the human
            brain represents symbols and the concepts they refer to, and
            how today’s artificial neural networks can do the same. We
            then present a novel neuro-symbolic hypothesis and a
            plausible architecture for intelligent agents that combines
            subsymbolic representations for symbols and concepts for
            learning and reasoning. Our hypothesis and associated
            architecture imply that symbols will remain critical to the
            future of intelligent systems NOT because they are the
            fundamental building blocks of thought, but because they are
            characterizations of subsymbolic processes that constitute
            thought.</p>
        </section>
      </div>
      <br>
      <br>
      On 07.06.24 at 10:06, Weng, Juyang wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:MN2PR12MB45495710FFA1037C67FD596AD0FB2@MN2PR12MB4549.namprd12.prod.outlook.com">
      <div class="elementToProof">
        Dear Asim,</div>
      <div class="elementToProof">
            You wrote, That single cell firing in a cat’s brain having
        “meaning” is not due to “Asim” or “a Government.” These cells
        with “meaning” develop NATURALLY.</div>
      <div class="elementToProof">
           Your statement "single cell firing having meaning" is not
        mathematically meaningful.  The brain is a vector of $10^{14}$
        dimension.  Each neuron corresponds to a dimension.  Each neuron
        does not have a one-to-one correspondence with a symbol (like
        "Asim".   Review the definition of one-to-one correspondence. 
        If you do not mean one-to-one correspondence, your statement is
        not mathematically meaningful.</div>
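      <div class="elementToProof">
            A minimal sketch of the distinction at issue (illustrative
        names and numbers only, not a claim about any particular model):
        a one-to-one (bijective) neuron-to-symbol map versus a
        distributed code over the state vector.</div>
      <pre>
# Sketch: when is a symbol-to-neuron assignment one-to-one?
def is_one_to_one(symbol_to_neuron: dict) -> bool:
    neurons = list(symbol_to_neuron.values())
    # Injective: no two symbols share a neuron (bijective onto its image).
    return len(neurons) == len(set(neurons))

# Localist reading: each symbol names exactly one dimension of the state vector.
localist = {"Asim": 7, "government": 12}
print(is_one_to_one(localist))  # True

# Distributed reading: a "symbol" is a pattern over many dimensions, so no
# single neuron stands in one-to-one correspondence with it.
import numpy as np
rng = np.random.default_rng(0)
state = rng.standard_normal(1000)          # stand-in for the brain's state vector
asim_pattern = rng.standard_normal(1000)   # a direction in state space, not one neuron
activation = float(state @ asim_pattern)   # graded evidence, not a symbol token
</pre>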
      <div class="elementToProof">
            Best regards,</div>
      <div class="elementToProof">
        -John Weng</div>
      <hr tabindex="-1">
      <div id="divRplyFwdMsg" dir="ltr"><b>From:</b> Asim Roy
        <a class="moz-txt-link-rfc2396E" href="mailto:ASIM.ROY@asu.edu"><ASIM.ROY@asu.edu></a><br>
        <b>Sent:</b> Thursday, June 6, 2024 11:21 PM<br>
        <b>To:</b> Weng, Juyang <a class="moz-txt-link-rfc2396E" href="mailto:weng@msu.edu"><weng@msu.edu></a>; Stephen José
        Hanson <a class="moz-txt-link-rfc2396E" href="mailto:jose@rubic.rutgers.edu"><jose@rubic.rutgers.edu></a>; Gary Marcus
        <a class="moz-txt-link-rfc2396E" href="mailto:gary.marcus@nyu.edu"><gary.marcus@nyu.edu></a><br>
        <b>Cc:</b> <a class="moz-txt-link-abbreviated" href="mailto:connectionists@mailman.srv.cs.cmu.edu">connectionists@mailman.srv.cs.cmu.edu</a>
        <a class="moz-txt-link-rfc2396E" href="mailto:connectionists@mailman.srv.cs.cmu.edu"><connectionists@mailman.srv.cs.cmu.edu></a><br>
        <b>Subject:</b> RE: Connectionists: short Op-ed to address AI
        problems
        <div> </div>
      </div>
      <div lang="EN-US">
        <div class="x_WordSection1">
          <p class="x_MsoNormal"><span>Dear John,</span></p>
          <p class="x_MsoNormal"><span> </span></p>
          <p class="x_MsoNormal"><span>There is no “Asim” or
              “Government” in any brain, human or otherwise. That single
              cell firing in a cat’s brain having “meaning” is not due
              to “Asim” or “a Government.” These cells with “meaning”
              develop NATURALLY. And that’s what you are missing in your
              Development Network theory. You have not been able to
              capture in your systems that side of development. Perhaps
              time to go back to the drawing board. Symbols follow
              directly from “single cells having meaning.”</span></p>
          <p class="x_MsoNormal"><span> </span></p>
          <p class="x_xmsonormal"><span>All the best,</span></p>
          <p class="x_xmsonormal"><span>Asim Roy</span></p>
          <p class="x_xmsonormal"><span>Professor, Information Systems</span></p>
          <p class="x_xmsonormal"><span>Arizona State University</span></p>
          <p class="x_xmsonormal"><span><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e="
                target="_blank" moz-do-not-send="true">Lifeboat
                Foundation Bios: Professor Asim Roy</a></span></p>
          <p class="x_xmsonormal"><span><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e="
                target="_blank" moz-do-not-send="true">Asim Roy |
                iSearch (asu.edu)</a></span></p>
          <p class="x_MsoNormal"><span> </span></p>
          <p class="x_MsoNormal"><span> </span></p>
          <p class="x_MsoNormal"><span> </span></p>
          <p class="x_MsoNormal"><span> </span></p>
          <div>
            <div>
              <p class="x_MsoNormal"><b><span>From:</span></b><span>
                  Weng, Juyang <a class="moz-txt-link-rfc2396E" href="mailto:weng@msu.edu"><weng@msu.edu></a>
                  <br>
                  <b>Sent:</b> Thursday, June 6, 2024 8:09 PM<br>
                  <b>To:</b> Asim Roy <a class="moz-txt-link-rfc2396E" href="mailto:ASIM.ROY@asu.edu"><ASIM.ROY@asu.edu></a>; Stephen
                  José Hanson <a class="moz-txt-link-rfc2396E" href="mailto:jose@rubic.rutgers.edu"><jose@rubic.rutgers.edu></a>; Gary
                  Marcus <a class="moz-txt-link-rfc2396E" href="mailto:gary.marcus@nyu.edu"><gary.marcus@nyu.edu></a><br>
                  <b>Cc:</b> <a class="moz-txt-link-abbreviated" href="mailto:connectionists@mailman.srv.cs.cmu.edu">connectionists@mailman.srv.cs.cmu.edu</a><br>
                  <b>Subject:</b> Re: Connectionists: short Op-ed to
                  address AI problems</span></p>
            </div>
          </div>
          <p class="x_MsoNormal"> </p>
          <div>
            <p class="x_MsoNormal"><span>Dear Asim,</span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>   You wrote, "Let’s do one
                issue at a time. Let’s try symbols first."  This
                approach misleads you to the wrong track.</span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>   Case 1: neuron level symbols
                (your position).</span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>   Case 2: area level symbols.</span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>   Case 3: task level symbols.</span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>   They are all dead ends
                because Asim is the government of the "brain" model.</span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>    For all those Asim knows,
                it is too expensive to create all symbols for the
                "brain" model.   </span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>    For all those Asim does not
                know, the model does not know either.</span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>    Deadends!  If you continue
                this "one issue at a time route," you waste too much
                time in your life.  This is because the first issue is
                wrong to consider. </span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>    Best regards,</span></p>
          </div>
          <div>
            <p class="x_MsoNormal"><span>-John Weng  </span></p>
          </div>
          <div class="x_MsoNormal">
            <hr width="98%">
          </div>
          <div id="x_divRplyFwdMsg">
            <p class="x_MsoNormal"><b><span>From:</span></b><span> Asim
                Roy <<a href="mailto:ASIM.ROY@asu.edu"
                  moz-do-not-send="true" class="moz-txt-link-freetext">ASIM.ROY@asu.edu</a>><br>
                <b>Sent:</b> Thursday, June 6, 2024 10:06 PM<br>
                <b>To:</b> Weng, Juyang <<a
                  href="mailto:weng@msu.edu" moz-do-not-send="true"
                  class="moz-txt-link-freetext">weng@msu.edu</a>>;
                Stephen José Hanson <<a
                  href="mailto:jose@rubic.rutgers.edu"
                  moz-do-not-send="true" class="moz-txt-link-freetext">jose@rubic.rutgers.edu</a>>;
                Gary Marcus <<a href="mailto:gary.marcus@nyu.edu"
                  moz-do-not-send="true" class="moz-txt-link-freetext">gary.marcus@nyu.edu</a>><br>
                <b>Cc:</b> <a
                  href="mailto:connectionists@mailman.srv.cs.cmu.edu"
                  moz-do-not-send="true" class="moz-txt-link-freetext">connectionists@mailman.srv.cs.cmu.edu</a>
                <<a
                  href="mailto:connectionists@mailman.srv.cs.cmu.edu"
                  moz-do-not-send="true" class="moz-txt-link-freetext">connectionists@mailman.srv.cs.cmu.edu</a>><br>
                <b>Subject:</b> RE: Connectionists: short Op-ed to
                address AI problems</span> </p>
            <div>
              <p class="x_MsoNormal"> </p>
            </div>
          </div>
          <div>
            <div>
              <p class="x_xmsonormal"><span>Dear John,</span></p>
              <p class="x_xmsonormal"><span> </span></p>
              <p class="x_xmsonormal"><span>Let’s do one issue at a
                  time. Let’s try symbols first. There is plenty of
                  evidence in neurophysiology that one can associate
                  “meaning” with the activation of certain individual
                  cells. As far as I know, all of the brain-related
                  Nobel prizes were about finding “meaning” in the
                  activations of certain single neurons. Here I quote
                  from Wikipedia (<a
href="https://urldefense.com/v3/__https:/en.wikipedia.org/wiki/Single-unit_recording__;!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNkNR1ZZ9A$"
                    moz-do-not-send="true">Single-unit recording -
                    Wikipedia</a>):</span></p>
              <p class="x_xmsonormal"><span> </span></p>
              <ul type="disc">
                <li class="x_xmsonormal"><span>1928: One of the earliest
                    accounts of being able to record from the nervous
                    system was by <a
href="https://urldefense.com/v3/__https:/en.wikipedia.org/wiki/Edgar_Adrian__;!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNnQtx1LXQ$"
                      title="Edgar Adrian" moz-do-not-send="true"><span>Edgar
                        Adrian</span></a> in his 1928 publication "The
                    Basis of Sensation". In this, he describes his
                    recordings of electrical discharges in
                    <u>single nerve fibers</u> using a <a
href="https://urldefense.com/v3/__https:/en.wikipedia.org/wiki/Lippmann_electrometer__;!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNn6lQGgzA$"
                      title="Lippmann electrometer"
                      moz-do-not-send="true"><span>Lippmann electrometer</span></a>.
                    He won the <span>Nobel Prize in 1932</span> for his
                    work revealing the function of neurons.<sup><a
href="https://urldefense.com/v3/__https:/en.wikipedia.org/wiki/Single-unit_recording*cite_note-11__;Iw!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNk6I8LNhA$"
                        moz-do-not-send="true"><span>[11]</span></a></sup></span></li>
                <li class="x_xmsonormal"><span>1957: <a
href="https://urldefense.com/v3/__https:/en.wikipedia.org/wiki/John_Carew_Eccles__;!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNkFV1ULMA$"
                      title="John Carew Eccles" moz-do-not-send="true"><span>John
                        Eccles</span></a> used intracellular <u>single-unit
                      recording</u> to study synaptic mechanisms in
                    motoneurons (for which he won the
                    <span>Nobel Prize in 1963</span>).</span></li>
                <li class="x_xmsonormal"><span>1959: Studies by <a
href="https://urldefense.com/v3/__https:/en.wikipedia.org/wiki/David_H._Hubel__;!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNkJaa_aew$"
                      title="David H. Hubel" moz-do-not-send="true"><span>David
                        H. Hubel</span></a> and <a
href="https://urldefense.com/v3/__https:/en.wikipedia.org/wiki/Torsten_Wiesel__;!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNndrzwVDg$"
                      title="Torsten Wiesel" moz-do-not-send="true"><span>Torsten
                        Wiesel</span></a>. They used <u>single neuron
                      recordings</u> to map the visual cortex in
                    unanesthetized, unrestrained cats using tungsten
                    electrodes. This work won them the
                    <span>Nobel Prize in 1981</span> for information
                    processing in the visual system.</span></li>
              </ul>
              <p class="x_xmsonormal"><span> </span></p>
              <ul type="disc">
                <li class="x_xmsonormal"><span>And the work of Mosers
                    and O’Keefe on grid and place cells that won them
                    the Nobel:
                    <span><a
href="https://urldefense.com/v3/__https:/www.nobelprize.org/prizes/medicine/2014/press-release/__;!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNm67pI7iQ$"
                        moz-do-not-send="true">The 2014 Nobel Prize in
                        Physiology or Medicine - Press release</a>. </span>Here’s
                    a quote about the work on place cells:</span></li>
              </ul>
              <p class="x_xmsonormal"><span> </span></p>
              <p class="x_xmsonormal"><span>“</span><i><span>Most
                    neuroscientists once doubted that brain activity
                    could be linked with behaviour, but in the late
                    1960s, <strong><span>O</span></strong>’Keefe began
                    to record signals from individual neurons in the
                    brains of rats moving freely in a box. He put
                    electrodes in the hippocampus and was surprised to
                    find that <u>individual cells fired</u> when the
                    rats moved to particular spots</span></i><span>.”</span><span>
                  <a
href="https://urldefense.com/v3/__https:/www.nature.com/articles/514153a*:*:text=Most*20neuroscientists*20once*20doubted*20that*20brain*20activity*20could,fired*20when*20the*20rats*20moved*20to*20particular*20spots.__;I34lJSUlJSUlJSUlJSUlJQ!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNnqimHsFw$"
                    moz-do-not-send="true">
                    Nobel prize for decoding brain’s sense of place |
                    Nature</a></span></p>
              <p class="x_xmsonormal"><span> </span></p>
              <p class="x_xmsonormal"><span>And then the findings about
                  concept cells (Jennifer Aniston cells), which are
                  single cell recordings. Here’s from
                  <a
href="https://urldefense.com/v3/__https:/www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2020.00059/full*B6__;Iw!!HXCxUKc!2pFG0g1tPh-88cfwjJImIxJxtBhaOQ1wWf15ZEUkChi5vUb8q_qEXUDZt7bsQ9QjqzSglNlbw-6x9Q$"
                    moz-do-not-send="true">
                    <span>Reddy and Thorpe (2014)</span></a><span>: “</span><span>concept
                    cells have “<strong><i><u><span>meaning</span></u></i></strong> of
                    a given stimulus in a manner that is <strong><i><span>invariant</span></i></strong> to
                    different representations of that stimulus.”</span></span></p>
              <p class="x_xmsonormal"><span> </span></p>
              <p class="x_xmsonormal"><span>We all try to generalize
                  from data, right. If you examine these findings, the
                  most important feature is that they all found
                  “meaning” in single cell activations. So the most
                  fundamental question for you is: <span>
                    Do you accept these findings and the general
                    conclusion that single cell activations can have
                    meaning</span>? Again, beware that, beyond winning
                  Nobel prizes, much work in neuroscience and other
                  fields follows from these findings.</span></p>
              <p class="x_xmsonormal"><span> </span></p>
              <p class="x_xmsonormal"><span>All the best,</span></p>
              <p class="x_xmsonormal"><span>Asim Roy</span></p>
              <p class="x_xmsonormal"><span>Professor, Information
                  Systems</span></p>
              <p class="x_xmsonormal"><span>Arizona State University</span></p>
              <p class="x_xmsonormal"><span><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e="
                    target="_blank" moz-do-not-send="true">Lifeboat
                    Foundation Bios: Professor Asim Roy</a></span></p>
              <p class="x_xmsonormal"><span><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e="
                    target="_blank" moz-do-not-send="true">Asim Roy |
                    iSearch (asu.edu)</a></span></p>
              <p class="x_xmsonormal"><span> </span></p>
              <p class="x_xmsonormal"><span> </span></p>
              <p class="x_xmsonormal"><span> </span></p>
              <div>
                <div>
                  <p class="x_xmsonormal"><b><span>From:</span></b><span>
                      Weng, Juyang <<a href="mailto:weng@msu.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">weng@msu.edu</a>>
                      <br>
                      <b>Sent:</b> Thursday, June 6, 2024 1:09 AM<br>
                      <b>To:</b> Asim Roy <<a
                        href="mailto:ASIM.ROY@asu.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">ASIM.ROY@asu.edu</a>>;
                      Stephen José Hanson <<a
                        href="mailto:jose@rubic.rutgers.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">jose@rubic.rutgers.edu</a>>;
                      Gary Marcus <<a
                        href="mailto:gary.marcus@nyu.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">gary.marcus@nyu.edu</a>>;
                      Weng, Juyang <<a href="mailto:weng@msu.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">weng@msu.edu</a>><br>
                      <b>Cc:</b> <a
href="mailto:connectionists@mailman.srv.cs.cmu.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">connectionists@mailman.srv.cs.cmu.edu</a><br>
                      <b>Subject:</b> Re: Connectionists: short Op-ed to
                      address AI problems</span></p>
                </div>
              </div>
              <p class="x_xmsonormal"> </p>
              <div>
                <p class="x_xmsonormal"><span>Dear Asim,</span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>   You wrote, "We are
                    doing neurosymbolic with image processing – the
                    symbolic stuff on top of a DL model. It also brings
                    in the explanation side." </span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>   Not only DL is
                    misconduct, but symbols are another devil.  </span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>   In my IJCNN 2022
                    paper, </span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>   <a
href="https://urldefense.com/v3/__http:/www.cse.msu.edu/*weng/research/20M-IJCNN2022rvsd-cite.pdf__;fg!!IKRxdwAv5BmarQ!YZcFaLmNraAEJLpxRQGKzKZTVt_nn3J9i52_xG7zhEgKn6ZASf_q59sOFVdSPylt7_NueMymM_EI7GNl$"
                      moz-do-not-send="true">http://www.cse.msu.edu/~weng/research/20M-IJCNN2022rvsd-cite.pdf</a></span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>   I proved "symbol-free"
                    as one of the 20 million-dollar problems for us to
                    understand human brains.</span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>   The definition of
                    symbols requires a government,  but government-free
                    is one of the 20 million-dollar problems for us to
                    understand human brains.</span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>   Let us consider three
                    cases:</span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>  Case 1:  If a human
                    designs symbols within a network (e.g., LSTM) and
                    assigns the symbols to some individual neurons
                    (e.g., task-specific gates) of the network, this
                    human is a government within the network since he is
                    task-aware.   </span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>  Case 2: If a human
                    designs symbols within a network and assigns roles
                    to blocks in a functional block diagram, e.g.,
                    [Starzyk10], this human is a government within the
                    network.   </span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>  Case 3: In the symbolic
                    AI school, a human programmer designs symbolic
                    representations for a task that is assigned to a
                    computer program or network.  This human is a
                    government within the symbolic AI system since he is
                    task-aware.   </span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>   All the 3 cases do not
                    solve the government-free problem.</span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>  I have attached an image
                    that further explains the symbol problem in the same
                    paper.</span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>  Let me know if you still
                    do not agree that the brain must be free from
                    symbols after you read the entire paper.</span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>   By the way, I am
                    surprised that as a mathematician, you still do not
                    understand the Post-Selection misconduct in DL that
                    I raised to you earlier.  Please use your own words
                    to explain Post-Selection and why you can handle
                    explanation using Post-Selection misconduct.</span></p>
              </div>
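              <div>
                <p class="x_xmsonormal"><span>  A minimal sketch of the
                    Post-Selection pattern being objected to, assuming it
                    means training many networks and reporting only the one
                    that happens to score best on the held-out set
                    (illustrative numbers only):</span></p>
              </div>
              <pre>
import numpy as np

def train_and_score(seed: int, n_test: int = 200) -> float:
    """Stand-in for training one network and scoring it on a held-out set.
    Every 'network' here has the same true accuracy of 0.70; differences
    across seeds are pure luck on the finite test set."""
    local = np.random.default_rng(seed)
    return local.binomial(n_test, 0.70) / n_test

scores = [train_and_score(seed) for seed in range(50)]
print("mean over all trained networks:", round(float(np.mean(scores)), 3))
print("post-selected best network:   ", round(max(scores), 3))
# Reporting only the best of many runs inflates the apparent accuracy well
# above the true 0.70; this luck-on-the-test-set effect is the objection.
</pre>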
              <div>
                <p class="x_xmsonormal"><span>   Best regards,</span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span>-John Weng</span></p>
              </div>
              <div>
                <p class="x_xmsonormal"><span> </span></p>
              </div>
              <div class="x_MsoNormal">
                <hr width="98%">
              </div>
              <div id="x_x_divRplyFwdMsg">
                <p class="x_xmsonormal"><b><span>From:</span></b><span> Connectionists
                    <<a
href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu"
                      moz-do-not-send="true"
                      class="moz-txt-link-freetext">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
                    on behalf of Asim Roy <<a
                      href="mailto:ASIM.ROY@asu.edu"
                      moz-do-not-send="true"
                      class="moz-txt-link-freetext">ASIM.ROY@asu.edu</a>><br>
                    <b>Sent:</b> Wednesday, June 5, 2024 6:49 PM<br>
                    <b>To:</b> Stephen José Hanson <<a
                      href="mailto:jose@rubic.rutgers.edu"
                      moz-do-not-send="true"
                      class="moz-txt-link-freetext">jose@rubic.rutgers.edu</a>>;
                    Gary Marcus <<a href="mailto:gary.marcus@nyu.edu"
                      moz-do-not-send="true"
                      class="moz-txt-link-freetext">gary.marcus@nyu.edu</a>><br>
                    <b>Cc:</b> <a
href="mailto:connectionists@mailman.srv.cs.cmu.edu"
                      moz-do-not-send="true"
                      class="moz-txt-link-freetext">connectionists@mailman.srv.cs.cmu.edu</a>
                    <<a
href="mailto:connectionists@mailman.srv.cs.cmu.edu"
                      moz-do-not-send="true"
                      class="moz-txt-link-freetext">connectionists@mailman.srv.cs.cmu.edu</a>><br>
                    <b>Subject:</b> Re: Connectionists: short Op-ed to
                    address AI problems</span> </p>
                <div>
                  <p class="x_xmsonormal"> </p>
                </div>
              </div>
              <div>
                <p><span>Dear Stephen,</span></p>
                <p><span> </span></p>
                <p><span>We are doing neurosymbolic with image
                    processing – the symbolic stuff on top of a DL
                    model. It also brings in the explanation side. The
                    results are astounding: we get better performance
                    than a pure DL model. We are also exploring applications
                    with defense agencies, and they are impressed with the
                    results we have so far. So, neurosymbolic is
                    definitely the way forward.</span></p>
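                <p><span>A minimal sketch of what "symbolic stuff on top of a
                    DL model" can look like, assuming it means applying
                    hand-written rules to the labels a detector outputs
                    (illustrative names only, not the actual system described
                    here):</span></p>
                <pre>
# Sketch: a symbolic layer over the outputs of a (stand-in) DL model.
# detect_objects is a placeholder for any deep detector; the rule below is
# explicit, so it doubles as an explanation of the final decision.
from typing import Dict, List

def detect_objects(image) -> List[Dict]:
    # Placeholder output: label, confidence, and bounding box per object.
    return [
        {"label": "person", "conf": 0.93, "box": (10, 20, 60, 180)},
        {"label": "helmet", "conf": 0.88, "box": (15, 5, 55, 40)},
    ]

def boxes_overlap(a, b) -> bool:
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    # Overlap exists when the intersection rectangle has positive area.
    ix = min(ax1, bx1) - max(ax0, bx0)
    iy = min(ay1, by1) - max(ay0, by0)
    return ix > 0 and iy > 0

def wears_helmet(objects) -> bool:
    people = [o for o in objects if o["label"] == "person"]
    helmets = [o for o in objects if o["label"] == "helmet"]
    return any(boxes_overlap(p["box"], h["box"]) for p in people for h in helmets)

objects = detect_objects(image=None)
print("helmet worn:", wears_helmet(objects))  # True, with an inspectable rule trail
</pre>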
                <p><span> </span></p>
                <p><span>Best,</span></p>
                <p><span>Asim Roy</span></p>
                <p><span>Professor, Information Systems</span></p>
                <p><span>Arizona State University</span></p>
                <p><u><span><a
href="https://urldefense.com/v3/__https:/search.asu.edu/profile/9973__;!!HXCxUKc!1ZzSj6Uim5wWu5W-JiBNqp_Cig3tUkK5DgMhDEBYnERP1f-pOAReghJiHzEk3hEHKL31roB_8qivsA$"
                        moz-do-not-send="true">Asim Roy | ASU Search</a></span></u></p>
                <p><u><span><a
href="https://urldefense.com/v3/__https:/lifeboat.com/ex/bios.asim.roy__;!!HXCxUKc!1ZzSj6Uim5wWu5W-JiBNqp_Cig3tUkK5DgMhDEBYnERP1f-pOAReghJiHzEk3hEHKL31roAnhJU86A$"
                        moz-do-not-send="true">Lifeboat Foundation Bios:
                        Professor Asim Roy</a></span></u></p>
                <p><span> </span></p>
                <div>
                  <p><b><span>From:</span></b><span> Connectionists <<a
href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
                      <b>On Behalf Of </b>Stephen José Hanson<br>
                      <b>Sent:</b> Wednesday, June 5, 2024 6:06 AM<br>
                      <b>To:</b> Gary Marcus <<a
                        href="mailto:gary.marcus@nyu.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">gary.marcus@nyu.edu</a>><br>
                      <b>Cc:</b> <a
href="mailto:connectionists@mailman.srv.cs.cmu.edu"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">connectionists@mailman.srv.cs.cmu.edu</a><br>
                      <b>Subject:</b> Re: Connectionists: short Op-ed to
                      address AI problems</span></p>
                </div>
                <p><span> </span></p>
                <p><span>Dear Flabbergasted:</span></p>
                <p><span>Thank you; I endeavor to provide short but
                    useful commentary that could be considered a "work
                    of art". Grazie!</span></p>
                <p><span>Now either my memory has been failing since 2017 (not
                    impossible), or you are smoothing over a time series
                    of claims that actually looks like a seesaw.</span></p>
                <p><span>I think if we just rewind some of the
                    Connectionists comments, it would be clear: for
                    example, you had a long series of exchanges with
                    Geoff that seemed to indicate you were being
                    misrepresented as well. Your complaints have always
                    been about the fact that DL-AI has false alarms (and,
                    to be fair, other problems), and sometimes pretty
                    odd ones. LLMs' human and non-human errors are even
                    more interesting. The fact that they seem to grow
                    circuits in the attention heads is gobsmacking! I
                    thought then, and think now, that you are complaining
                    about peas under a very thick mattress (oh-oh, metaphors
                    now; I may have opened Pandora's box).</span></p>
                <p><span>I will go look at the budding NeuroSymbolic
                    paper you mentioned, though I have my doubts that its
                    statistical bias is equivalent to that of the
                    architecturally simplistic LLMs. Nonetheless, I have
                    not read it yet.</span></p>
                <p><span>I will also make a coarse timeline of your
                    comments since 2017; anyone out there who would
                    like to help would be greatly appreciated!</span></p>
                <p><span>Best,</span></p>
                <p><span>Stephen</span></p>
                <p><span>On 6/5/24 8:41 AM, Gary Marcus wrote:</span></p>
                <blockquote>
                  <p><span>Wow, Stephen, you have outdone yourself. This
                      note is a startling mixture of rude,
                      condescending, inaccurate, and uninformed. A work
                      of art! </span></p>
                  <p><span> </span></p>
                  <p><span>To correct four misunderstandings:</span></p>
                  <p><span>1. Yes, my essay was written before LLMs were
                      popular (though around the time Transformers were
                      proposed, as it happens). It was, however,
                      <i>precisely</i> “a moonshot idea, that doesn't
                      involve leaving the blackbox in the hands of
                      corporate types who value profits over knowledge.”
                      Please read what I wrote. It’s one page, linked
                      below, and you obviously couldn’t be bothered.
                      (Parenthetically, I was one of the first people to
                      warn that OpenAI was likely to be problematic,
                      and have done so repeatedly at my Substack.)</span></p>
                  <p><span>2. My argument throughout (back to 2012 in
                      the New Yorker, 2018 in my Deep Learning: A
                      Critical Appraisal, etc.) has been that deep
                      learning has some role but cannot solve all
                      things, and that it would not be reliable on its
                      own. From 2019 onwards I emphasized many of the
                      social problems that arise from relying on such
                      unreliable architectures. I have never wavered
                      from any of that. (Again, please read my work
                      before so grossly distorting it.) Unreliable
                      systems that are blind to truth and values can
                      cause harm (bias), be exploited (to create
                      disinformation), etc. There is absolutely no
                      contradiction there, as I have explained numerous
                      times in my writings.</span></p>
                  <p><span>3. It’s truly rude to dismiss an entire field
                      as “flotsam and jetsam”, and you obviously aren’t
                      following the neurosymbolic literature; e.g., you
                      must have missed DeepMind’s neurosymbolic
                      AlphaGeometry paper, in Nature, with its
                      state-of-the-art results beating pure neural nets.</span></p>
                  <p><span>4. Again, nothing has changed about my view;
                      your last remark is gratuitous and based on a
                      misunderstanding.</span></p>
                  <p><span> </span></p>
                  <p><span>Truly flabbergasted,</span></p>
                  <p><span>Gary</span></p>
                  <p><span> </span></p>
                  <blockquote>
                    <p><span>On Jun 5, 2024, at 05:18, Stephen José
                        Hanson
                      </span><u><span><a
                            href="mailto:jose@rubic.rutgers.edu"
                            moz-do-not-send="true"><jose@rubic.rutgers.edu></a></span></u><span> wrote:</span></p>
                  </blockquote>
                  <blockquote>
                    <p><span></span></p>
                    <p><span>Gary, this was before the LLM discovery.
                        Pierre is proposing a moonshot idea, that
                        doesn't involve leaving the blackbox in the
                        hands of corporate types who value profits over
                        knowledge. OpenAI seems to be flailing and
                        having serious safety and security issues. It
                        certainly could be a recipe for disaster.</span></p>
                    <p><span>Frankly, your views have been all over the
                        place: DL doesn't work; DL could work but
                        should be merged with the useless flotsam and
                        jetsam from GOFAI of the last 50 years; and
                        now they are too dangerous because they work but
                        are unreliable, like most humans.</span></p>
                    <p><span>It's hard to know which of your views to
                        take seriously, as they seem to change so rapidly.</span></p>
                    <p><span>Cheers</span></p>
                    <p><span>Stephen</span></p>
                    <p><span>On 6/4/24 9:53 AM, Gary Marcus wrote:</span></p>
                    <blockquote>
                      <p><span>I would just point out that I first made
                          this suggestion [CERN for AI] in the New York
                          Times in 2017, and several others have since.
                          There is some effort ongoing to try to make it
                          happen, if you search you will see.</span></p>
                      <p><span> </span></p>
                      <table class="x_MsoNormalTable" width="300">
                        <tbody>
                          <tr>
                            <td>
                              <p><span><30gray-facebookJumbo.jpg></span></p>
                            </td>
                          </tr>
                          <tr>
                            <td>
                              <table class="x_MsoNormalTable"
                                width="300">
                                <tbody>
                                  <tr>
                                    <td>
                                      <div>
                                        <p><u><span><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.nytimes.com_2017_07_29_opinion_sunday_artificial-2Dintelligence-2Dis-2Dstuck-2Dheres-2Dhow-2Dto-2Dmove-2Dit-2Dforward.html-3Funlocked-5Farticle-5Fcode-3D1.xE0.mcIz.lT-5FK7BZdonGJ-26smid-3Dnytcore-2Dios-2Dshare-26referringSource-3DarticleShare-26u2g-3Di-26sgrp-3Dc-2Dcb&d=DwMGaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=fwBsbQ5xjEJFDg3c0iXuOBcr84mxEGxR0cEG4-hstVM8dJNyq3HVvpCACElUGWT2&s=TGZDkK1TsB_rNyjmal5jG1694upjB2JDhtj3UOe4Cws&e="
                                                moz-do-not-send="true"><span>Opinion
                                                  | Artificial
                                                  Intelligence Is Stuck.
                                                  Here’s How to Move It
                                                  Forward. (Gift
                                                  Article)</span></a></span></u></p>
                                        <p><u><span><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.nytimes.com_2017_07_29_opinion_sunday_artificial-2Dintelligence-2Dis-2Dstuck-2Dheres-2Dhow-2Dto-2Dmove-2Dit-2Dforward.html-3Funlocked-5Farticle-5Fcode-3D1.xE0.mcIz.lT-5FK7BZdonGJ-26smid-3Dnytcore-2Dios-2Dshare-26referringSource-3DarticleShare-26u2g-3Di-26sgrp-3Dc-2Dcb&d=DwMGaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=fwBsbQ5xjEJFDg3c0iXuOBcr84mxEGxR0cEG4-hstVM8dJNyq3HVvpCACElUGWT2&s=TGZDkK1TsB_rNyjmal5jG1694upjB2JDhtj3UOe4Cws&e="
                                                moz-do-not-send="true"><span>nytimes.com</span></a></span></u></p>
                                      </div>
                                    </td>
                                  </tr>
                                </tbody>
                              </table>
                            </td>
                          </tr>
                        </tbody>
                      </table>
                      <p><span> </span></p>
                      <p><span> </span></p>
                      <blockquote>
                        <p><span>On Jun 3, 2024, at 22:58, Baldi,Pierre
                          </span><u><span><a
                                href="mailto:pfbaldi@ics.uci.edu"
                                moz-do-not-send="true"><pfbaldi@ics.uci.edu></a></span></u><span> wrote:</span></p>
                      </blockquote>
                      <blockquote>
                        <p><span></span><span><br>
                            I would appreciate feedback from this
                            group, especially dissenting feedback, on
                            the attached Op-ed. You can send it to my
                            personal email which you can find on my
                            university web site if you prefer. The basic
                            idea is simple:<br>
                            <br>
                            IF, for scientific, security, or other
                            societal reasons, we want academics to
                            develop and study the most advanced forms of
                            AI, I can see only one solution: create a
                            national or international effort around the
                            largest data/computing center on Earth, with
                            a CERN-like structure comprising permanent
                            staff and thousands of affiliated academic
                            laboratories. There are many obstacles, but
                            none is completely insurmountable if we
                            want to overcome them.<br>
                            <br>
                            Thank you.<br>
                            <br>
                            Pierre<br>
                            <br>
                            <br>
                          </span></p>
                        <p><span><AI-CERN-Baldi2024FF.pdf></span></p>
                      </blockquote>
                    </blockquote>
                    <div>
                      <pre><span>-- </span></pre>
                    </div>
                    <div>
                      <pre><span>Stephen José Hanson</span></pre>
                    </div>
                    <div>
                      <pre><span>Professor of Psychology</span></pre>
                    </div>
                    <div>
                      <pre><span>Director of RUBIC</span></pre>
                    </div>
                    <div>
                      <pre><span>Member of Exc Comm RUCCS</span></pre>
                    </div>
                  </blockquote>
                </blockquote>
                <div>
                  <pre><span>-- </span></pre>
                </div>
                <div>
                  <pre><span>Stephen José Hanson</span></pre>
                </div>
                <div>
                  <pre><span>Professor of Psychology</span></pre>
                </div>
                <div>
                  <pre><span>Director of RUBIC</span></pre>
                </div>
                <div>
                  <pre><span>Member of Exc Comm RUCCS</span></pre>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
  </body>
</html>