<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Asim,<br>
    </p>
    <p>I was on the Anchorage panel, and asked others what could be a
      great achievement in computational intelligence. Steve Grossberg
      replied that symbolic AI is meaningless, but that the creation of
      an artificial rat that could survive in a hostile environment
      would be something. Of course this is still difficult, but perhaps
      DARPA autonomous machines are not that far off? <br>
    </p>
    <p>I also had similar discussions with Walter and support his
      position: you cannot separate tightly coupled systems. Any
      external influence will create activation in both, and linear
      causality loses its meaning. This is clear if both systems adjust
      to each other. But even if only one system learns (the brain) and
      the other is mechanical but responds to human actions, the two may
      behave as one system. Every musician knows this: the piano becomes
      a part of our body, responding in so many ways to our actions, not
      only by producing sounds but also by providing haptic feedback. <br>
    </p>
    <p>This simply means that the brains of locked-in people work in a
      somewhat different way than the brains of healthy people. Why do
      we consider them conscious? Because they can reflect on their mind
      states, imagine things, and describe their inner states. If GPT-3
      were coupled with something like DALL-E that creates images from
      text, could describe what it sees in its inner world, and could
      create some kind of episodic memory, we would have a hard time
      denying that such a thing is conscious of what it has in its mind.
      Embodiment helps to create the inner world and changes it, but it
      is not necessary for consciousness. Can we find a good argument
      that such a system is not conscious of its own states? It may not
      have all the qualities of human consciousness, but that is a
      matter of a more detailed approximation of the missing functions. <br>
    </p>
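    <p>(Below is a minimal toy sketch of such a coupling, written only to
      make the idea concrete. The functions generate_text, text_to_image
      and describe_image are hypothetical placeholders standing in for
      models such as GPT-3 and DALL-E, not real APIs: a text model proposes
      a "thought", an image model renders it as an inner image, a captioner
      describes what is "seen", and the episodes are stored in a simple
      episodic memory that the system can report on.)</p>
    <pre>
# Toy sketch of the text-model / image-model coupling described above.
# generate_text, text_to_image and describe_image are hypothetical stubs
# standing in for models such as GPT-3 and DALL-E; only the control loop
# and the episodic memory are illustrated, not any real API.
from dataclasses import dataclass, field
from typing import List

def generate_text(prompt: str) -> str:      # stand-in for a GPT-3-like model
    return f"a thought about {prompt}"

def text_to_image(text: str) -> bytes:      # stand-in for a DALL-E-like model
    return text.encode()

def describe_image(image: bytes) -> str:    # stand-in for an image captioner
    return f"an image of {image.decode()}"

@dataclass
class Episode:
    thought: str        # what the system "said" to itself
    description: str    # what it "saw" in its own generated image

@dataclass
class InnerWorldAgent:
    episodic_memory: List[Episode] = field(default_factory=list)

    def step(self, prompt: str) -> Episode:
        thought = generate_text(prompt)
        image = text_to_image(thought)
        description = describe_image(image)
        episode = Episode(thought, description)
        self.episodic_memory.append(episode)   # crude episodic memory
        return episode

    def report_inner_states(self) -> str:
        # "Reflection": describe the contents of recent inner episodes.
        return " ".join(f"I imagined {e.description}."
                        for e in self.episodic_memory[-3:])

agent = InnerWorldAgent()
agent.step("a hostile environment")
print(agent.report_inner_states())
    </pre>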
    <p>I made this argument a long time ago (e.g., in "<i><a>Brain-inspired
        conscious computing architecture</a></i>", written over 20 years ago;
      see more papers on this on my web page). </p>
    <p>Wlodek</p>
    <p>Prof. Włodzisław Duch<br>
      Fellow, International Neural Network Society<br>
      Past President, European Neural Network Society<br>
      Head, Neurocognitive Laboratory, CMIT NCU, Poland<br>
    </p>
    <p>Google: <a href="http://www.google.com/search?q=Wlodek+Duch">Wlodzislaw
        Duch</a></p>
    <div class="moz-cite-prefix"><br>
      On 18/02/2022 05:22, Asim Roy wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:BYAPR06MB40697EE6CAD451285234E00E9B379@BYAPR06MB4069.namprd06.prod.outlook.com">
      <div class="WordSection1">
        <p class="MsoNormal">In 1998, after our debate about the brain
          at the WCCI in Anchorage, Alaska, I asked Walter Freeman if he
          thought the brain controls the body. His answer was, you can
          also say that the body controls the brain. I then asked him if
          the driver controls a car, or the pilot controls an airplane.
          His answer was the same, that you can also say that the car
          controls the driver, or the plane controls the pilot. I then
          realized that Walter was also a philosopher and believed in
          the No-free Will theory and what he was arguing for is that
          the world is simply made of interacting systems. However, both
          Walter, and his close friend John Taylor, were into
          consciousness.
          <o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">I have argued with Walter on many different
          topics over nearly two decades and have utmost respect for him
          as a scholar, but this first argument I will always remember.<o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">Obviously, there’s a conflict between
          consciousness and the No-free Will theory. Wonder where we
          stand with regard to this conflict.<o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">Asim Roy<o:p></o:p></p>
        <p class="MsoNormal">Professor, Information Systems<o:p></o:p></p>
        <p class="MsoNormal">Arizona State University<o:p></o:p></p>
        <p class="MsoNormal"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e="
            target="_blank" moz-do-not-send="true">Lifeboat Foundation
            Bios: Professor Asim Roy</a><o:p></o:p></p>
        <p class="MsoNormal"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e="
            target="_blank" moz-do-not-send="true">Asim Roy | iSearch
            (asu.edu)</a><o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <div>
          <div style="border:none;border-top:solid #E1E1E1
            1.0pt;padding:3.0pt 0in 0in 0in">
            <p class="MsoNormal"><b>From:</b> Connectionists
              <a class="moz-txt-link-rfc2396E" href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu"><connectionists-bounces@mailman.srv.cs.cmu.edu></a>
              <b>On Behalf Of </b>Andras Lorincz<br>
              <b>Sent:</b> Tuesday, February 15, 2022 6:50 AM<br>
              <b>To:</b> Stephen José Hanson
              <a class="moz-txt-link-rfc2396E" href="mailto:jose@rubic.rutgers.edu"><jose@rubic.rutgers.edu></a>; Gary Marcus
              <a class="moz-txt-link-rfc2396E" href="mailto:gary.marcus@nyu.edu"><gary.marcus@nyu.edu></a><br>
              <b>Cc:</b> Connectionists
              <a class="moz-txt-link-rfc2396E" href="mailto:Connectionists@cs.cmu.edu"><Connectionists@cs.cmu.edu></a><br>
              <b>Subject:</b> Re: Connectionists: Weird beliefs about
              consciousness<o:p></o:p></p>
          </div>
        </div>
        <p class="MsoNormal"><o:p> </o:p></p>
        <div>
          <p class="MsoNormal"><span
              style="font-size:12.0pt;color:black">Dear Steve and Gary:<o:p></o:p></span></p>
        </div>
        <div>
          <div>
            <p class="MsoNormal"><span
                style="font-size:12.0pt;color:black">This is how I see
                (try to understand) consciousness and the related terms:
                <o:p></o:p></span></p>
            <div>
              <p class="MsoNormal"><span
                  style="font-size:12.0pt;color:black">(Our)
                  consciousness seems to be related to the
                  close-to-deterministic nature of the episodes on from
                  few hundred millisecond to a few second domain.
                  Control instructions may leave our brain 200 ms
                  earlier than the action starts and they become
                  conscious only by that time. In addition, observations
                  of those may also be delayed by a similar amount. (It
                  then follows that the launching of the control actions
                  is not conscious and -- therefore -- free will can be
                  debated in this very limited context.) On the other
                  hand, model-based synchronization is necessary for
                  timely observation, planning, decision making, and
                  execution in a distributed and slow computational
                  system. If this model-based synchronization is not
                  working properly, then the observation of the world
                  breaks and schizophrenic symptoms appear. As an
                  example, individuals with pronounced schizotypal
                  traits are particularly successful in self-tickling
                  (source:
                  <a
href="https://urldefense.com/v3/__https:/philpapers.org/rec/LEMIWP__;!!IKRxdwAv5BmarQ!P1ufmU5XnzpvjxtS2M0AnytlX24RNsoDeNPfsqUNWbF6OU5p9xMqtMj9S3Pn3cY$"
                    moz-do-not-send="true">
                    https://philpapers.org/rec/LEMIWP</a>, and a
                  discussion on Asperger and schizophrenia:
                  <a
href="https://urldefense.com/v3/__https:/www.frontiersin.org/articles/10.3389/fpsyt.2020.503462/full__;!!IKRxdwAv5BmarQ!P1ufmU5XnzpvjxtS2M0AnytlX24RNsoDeNPfsqUNWbF6OU5p9xMqtMj9l5NkQt4$"
                    moz-do-not-send="true">
https://www.frontiersin.org/articles/10.3389/fpsyt.2020.503462/full</a>)
                  a manifestation of improper binding. The internal
                  model enables and the synchronization requires the
                  internal model and thus a certain level of
                  consciousness can appear in a time interval around the
                  actual time instant and its length depends on the
                  short-term memory.<o:p></o:p></span></p>
            </div>
            <p class="MsoNormal"><span
                style="font-size:12.0pt;color:black">Other issues, like
                separating the self from the rest of the world are more
                closely related to the soft/hard style interventions (as
                called in the recent deep learning literature), i.e.,
                those components (features) that can be
                modified/controlled, e.g., color and speed, and the ones
                that are Lego-like and can be
                separated/amputed/occluded/added.<o:p></o:p></span></p>
          </div>
          <div>
            <p class="MsoNormal"><span
                style="font-size:12.0pt;color:black">Best,<o:p></o:p></span></p>
          </div>
          <div>
            <p class="MsoNormal"><span
                style="font-size:12.0pt;color:black">Andras<o:p></o:p></span></p>
          </div>
          <div id="Signature">
            <div>
              <div id="divtagdefaultwrapper">
                <div id="divtagdefaultwrapper">
                  <div name="divtagdefaultwrapper">
                    <div>
                      <div>
                        <div>
                          <div>
                            <p><span style="color:black"><o:p> </o:p></span></p>
                            <p><span style="color:black">------------------------------------<o:p></o:p></span></p>
                            <div>
                              <p class="MsoNormal"><span
                                  style="font-size:12.0pt;color:black">Andras
                                  Lorincz<o:p></o:p></span></p>
                            </div>
                            <div>
                              <p class="MsoNormal"><span
                                  style="font-size:12.0pt;color:black"><a
href="https://urldefense.com/v3/__http:/nipg.inf.elte.hu/__;!!IKRxdwAv5BmarQ!P1ufmU5XnzpvjxtS2M0AnytlX24RNsoDeNPfsqUNWbF6OU5p9xMqtMj9j2LbdH0$"
                                    moz-do-not-send="true">http://nipg.inf.elte.hu/</a><o:p></o:p></span></p>
                            </div>
                            <div>
                              <p class="MsoNormal"><span
                                  style="font-size:12.0pt;color:black">Fellow
                                  of the European Association for
                                  Artificial Intelligence<o:p></o:p></span></p>
                            </div>
                            <div>
                              <p class="MsoNormal"><span
                                  style="font-size:12.0pt;color:black"><a
href="https://urldefense.com/v3/__https:/scholar.google.com/citations?user=EjETXQkAAAAJ&hl=en__;!!IKRxdwAv5BmarQ!P1ufmU5XnzpvjxtS2M0AnytlX24RNsoDeNPfsqUNWbF6OU5p9xMqtMj99i1VRm0$"
                                    moz-do-not-send="true">https://scholar.google.com/citations?user=EjETXQkAAAAJ&hl=en</a><o:p></o:p></span></p>
                            </div>
                            <div>
                              <p class="MsoNormal"><span
                                  style="font-size:12.0pt;color:black">Department
                                  of Artificial Intelligence<o:p></o:p></span></p>
                            </div>
                            <div>
                              <p class="MsoNormal"><span
                                  style="font-size:12.0pt;color:black">Faculty
                                  of Informatics<o:p></o:p></span></p>
                            </div>
                            <div>
                              <p class="MsoNormal"><span
                                  style="font-size:12.0pt;color:black">Eotvos
                                  Lorand University<o:p></o:p></span></p>
                            </div>
                            <p class="MsoNormal"><span
                                style="font-size:12.0pt;color:black">Budapest,
                                Hungary
                                <o:p></o:p></span></p>
                            <p><span style="color:black"><o:p> </o:p></span></p>
                            <p><span style="color:black"><o:p> </o:p></span></p>
                          </div>
                        </div>
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
        <div>
          <p class="MsoNormal"><span
              style="font-size:12.0pt;color:black"><o:p> </o:p></span></p>
        </div>
        <div class="MsoNormal" style="text-align:center" align="center">
          <hr width="98%" size="2" align="center">
        </div>
        <div id="divRplyFwdMsg">
          <p class="MsoNormal"><b><span style="color:black">From:</span></b><span
              style="color:black"> Connectionists <<a
                href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu"
                moz-do-not-send="true" class="moz-txt-link-freetext">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
              on behalf of Stephen José Hanson <<a
                href="mailto:jose@rubic.rutgers.edu"
                moz-do-not-send="true" class="moz-txt-link-freetext">jose@rubic.rutgers.edu</a>><br>
              <b>Sent:</b> Monday, February 14, 2022 8:30 PM<br>
              <b>To:</b> Gary Marcus <<a
                href="mailto:gary.marcus@nyu.edu" moz-do-not-send="true"
                class="moz-txt-link-freetext">gary.marcus@nyu.edu</a>><br>
              <b>Cc:</b> Connectionists <<a
                href="mailto:connectionists@cs.cmu.edu"
                moz-do-not-send="true" class="moz-txt-link-freetext">connectionists@cs.cmu.edu</a>><br>
              <b>Subject:</b> Re: Connectionists: Weird beliefs about
              consciousness</span> <o:p>
            </o:p></p>
          <div>
            <p class="MsoNormal"> <o:p></o:p></p>
          </div>
        </div>
        <div>
          <p style="background:#ECCA99"><span
              style="font-size:13.5pt;color:black">Gary,  these weren't
              criterion.     Let me try again.</span><o:p></o:p></p>
          <p style="background:#ECCA99"><span
              style="font-size:13.5pt;color:black">I wasn't talking
              about wake-sleep cycles... I was talking about being awake
              or asleep and the transition that ensues..</span><o:p></o:p></p>
          <p style="background:#ECCA99"><span
              style="font-size:13.5pt;color:black">Rooba's don't sleep..
              they turn off, I have two of them.  They turn on once (1)
              their batteries are recharged (2) a timer has been set for
              being turned on.</span><o:p></o:p></p>
          <p style="background:#ECCA99"><span
              style="font-size:13.5pt;color:black">GPT3 is essentially a
              CYC that actually works.. by reading Wikipedia (which of
              course is a terribly biased sample).</span><o:p></o:p></p>
          <p style="background:#ECCA99"><span
              style="font-size:13.5pt;color:black">I was indicating the
              difference between implicit and explicit learning/problem
              solving.    Implicit learning/memory is unconscious and
              similar to a habit.. (good or bad).</span><o:p></o:p></p>
          <p style="background:#ECCA99"><span
              style="font-size:13.5pt;color:black">I believe that when
              someone says "is gpt3 conscious?"  they are asking: is
              gpt3 self-aware?      Roombas know about vacuuming and
              they are unconscious.</span><o:p></o:p></p>
          <p style="background:#ECCA99"><span
              style="font-size:13.5pt;color:black">S</span><o:p></o:p></p>
          <div>
            <p class="MsoNormal" style="background:#ECCA99"><span
                style="color:black">On 2/14/22 12:45 PM, Gary Marcus
                wrote:</span><o:p></o:p></p>
          </div>
          <blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><span
                  style="color:black">Stephen,</span><o:p></o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><span
                  style="color:black">On criteria (1)-(3), a high-end,
                  mapping-equippped Roomba is far more plausible as a
                  consciousness than GPT-3.</span><o:p></o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><span
                  style="color:black">1. The Roomba has a clearly
                  defined wake-sleep cycle; GPT does not.</span><o:p></o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><span
                  style="color:black">2. Roomba makes choices based on
                  an explicit representation of its location relative to
                  a mapped space. GPT lacks any consistent reflection of
                  self; eg if you ask it, as I have, if you are you
                  person, and then ask if it is a computer, it’s liable
                  to say yes to both, showing no stable knowledge of
                  self.</span><o:p></o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><span
                  style="color:black">3. Roomba has explicit,
                  declarative knowledge eg of walls and other
                  boundaries, as well its own location. GPT has no
                  systematically interrogable explicit representations.</span><o:p></o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><span
                  style="color:black">All this is said with tongue
                  lodged partway in cheek, but I honestly don’t see what
                  criterion would lead anyone to believe that GPT is a
                  more plausible candidate for consciousness than any
                  other AI program out there. </span><o:p></o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><span
                  style="color:black">ELIZA long ago showed that you
                  could produce fluent speech that was mildly
                  contextually relevant, and even convincing to the
                  untutored; just because GPT is a better version of
                  that trick doesn’t mean it’s any more conscious.</span><o:p></o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><span
                  style="color:black">Gary</span><o:p></o:p></p>
            </div>
            <div>
              <p class="MsoNormal" style="background:#ECCA99"><span
                  style="color:black"><br>
                  <br>
                </span><o:p></o:p></p>
              <blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
                <p class="MsoNormal"
                  style="margin-bottom:12.0pt;background:#ECCA99"><span
                    style="color:black">On Feb 14, 2022, at 08:56,
                    Stephen José Hanson
                    <a href="mailto:jose@rubic.rutgers.edu"
                      moz-do-not-send="true"><jose@rubic.rutgers.edu></a>
                    wrote:</span><o:p></o:p></p>
              </blockquote>
            </div>
            <blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
              <div>
                <p class="MsoNormal" style="background:#ECCA99"><span
                    style="color:black"> </span>
                  <o:p></o:p></p>
                <p style="background:#ECCA99"><span
                    style="font-size:13.5pt;color:black">this is a great
                    list of behavior..
                  </span><o:p></o:p></p>
                <p style="background:#ECCA99"><span
                    style="font-size:13.5pt;color:black">Some
                    biologically might be termed reflexive, taxes,
                    classically conditioned, implicit
                    (memory/learning)... all however would not be<br>
                    conscious in the several senses:  (1)  wakefulness--
                    sleep  (2) self aware (3) explicit/declarative.</span><o:p></o:p></p>
                <p style="background:#ECCA99"><span
                    style="font-size:13.5pt;color:black">I think the
                    term is used very loosely, and I believe what GPT3
                    and other AI are hoping to show signs of is
                    "self-awareness"..</span><o:p></o:p></p>
                <p style="background:#ECCA99"><span
                    style="font-size:13.5pt;color:black">In response to
                    :  "why are you doing that?",  "What are you doing
                    now", "what will you be doing in 2030?"</span><o:p></o:p></p>
                <p style="background:#ECCA99"><span
                    style="font-size:13.5pt;color:black">Steve</span><o:p></o:p></p>
                <p style="background:#ECCA99"><o:p> </o:p></p>
                <div>
                  <p class="MsoNormal" style="background:#ECCA99"><span
                      style="color:black">On 2/14/22 10:46 AM, Iam
                      Palatnik wrote:</span><o:p></o:p></p>
                </div>
                <blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
                  <div>
                    <div>
                      <p class="MsoNormal" style="background:#ECCA99"><span
                          style="color:black">A somewhat related
                          question, just out of curiosity.</span><o:p></o:p></p>
                    </div>
                    <div>
                      <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
                    </div>
                    <div>
                      <p class="MsoNormal" style="background:#ECCA99"><span
                          style="color:black">Imagine the following:</span><o:p></o:p></p>
                    </div>
                    <div>
                      <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
                    </div>
                    <div>
                      <p class="MsoNormal" style="background:#ECCA99"><span
                          style="color:black">- An automatic solar panel
                          that tracks the position of the sun.</span><o:p></o:p></p>
                    </div>
                    <div>
                      <div>
                        <p class="MsoNormal" style="background:#ECCA99"><span
                            style="color:black">- A group of single
                            celled microbes with phototaxis that follow
                            the sunlight.</span><o:p></o:p></p>
                      </div>
                      <div>
                        <p class="MsoNormal" style="background:#ECCA99"><span
                            style="color:black">- A jellyfish (animal
                            without a brain) that follows/avoids the
                            sunlight.</span><o:p></o:p></p>
                      </div>
                      <div>
                        <p class="MsoNormal" style="background:#ECCA99"><span
                            style="color:black">- A cockroach (animal
                            with a brain) that avoids the sunlight.</span><o:p></o:p></p>
                      </div>
                      <div>
                        <p class="MsoNormal" style="background:#ECCA99"><span
                            style="color:black">- A drone with onboard
                            AI that flies to regions of more intense
                            sunlight to recharge its batteries.</span><o:p></o:p></p>
                      </div>
                      <div>
                        <p class="MsoNormal" style="background:#ECCA99"><span
                            style="color:black">- A human that dislikes
                            sunlight and actively avoids it.</span><o:p></o:p></p>
                      </div>
                      <div>
                        <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
                      </div>
                      <div>
                        <p class="MsoNormal" style="background:#ECCA99"><span
                            style="color:black">Can any of these, beside
                            the human, be said to be aware or conscious
                            of the sunlight, and why?</span><o:p></o:p></p>
                      </div>
                      <div>
                        <p class="MsoNormal" style="background:#ECCA99"><span
                            style="color:black">What is most relevant?
                            Being a biological life form, having a
                            brain, being able to make decisions based on
                            the environment? Being taxonomically close
                            to humans?</span><o:p></o:p></p>
                      </div>
                      <div>
                        <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
                      </div>
                    </div>
                  </div>
                  <p class="MsoNormal" style="background:#ECCA99"><o:p> </o:p></p>
                  <div>
                    <div>
                      <p class="MsoNormal" style="background:#ECCA99"><span
                          style="color:black">On Mon, Feb 14, 2022 at
                          12:06 PM Gary Marcus <<a
                            href="mailto:gary.marcus@nyu.edu"
                            moz-do-not-send="true"
                            class="moz-txt-link-freetext">gary.marcus@nyu.edu</a>>
                          wrote:</span><o:p></o:p></p>
                    </div>
                    <blockquote style="border:none;border-left:solid
                      #CCCCCC 1.0pt;padding:0in 0in 0in
                      6.0pt;margin-left:4.8pt;margin-right:0in">
                      <p class="MsoNormal"
                        style="margin-bottom:12.0pt;background:#ECCA99"><span
                          style="color:black">Also true: Many AI
                          researchers are very unclear about what
                          consciousness is and also very sure that ELIZA
                          doesn’t have it.<br>
                          <br>
                          Neither ELIZA nor GPT-3 has<br>
                          - anything remotely related to embodiment<br>
                          - any capacity to reflect upon itself<br>
                          <br>
                          Hypothesis: neither keyword matching nor
                          tensor manipulation, even at scale, suffice in
                          themselves to qualify for consciousness.<br>
                          <br>
                          - Gary<br>
                          <br>
                          > On Feb 14, 2022, at 00:24, Geoffrey
                          Hinton <<a
                            href="mailto:geoffrey.hinton@gmail.com"
                            target="_blank" moz-do-not-send="true"
                            class="moz-txt-link-freetext">geoffrey.hinton@gmail.com</a>>
                          wrote:<br>
                          > <br>
                          > Many AI researchers are very unclear
                          about what consciousness is and also very sure
                          that GPT-3 doesn’t have it. It’s a strange
                          combination.<br>
                          > <br>
                          > </span><o:p></o:p></p>
                    </blockquote>
                  </div>
                </blockquote>
                <div>
                  <p class="MsoNormal" style="background:#ECCA99"><span
                      style="color:black">--</span></p>
                </div>
              </div>
            </blockquote>
          </blockquote>
        </div>
      </div>
    </blockquote>
    <br>
  </body>
</html>