<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p><font face="Arial">Ali,</font></p>
<p><font face="Arial">certainly for many people identification with
"being like us" is important - this covers fertilized eggs and
embryos, but not orangutans. </font><font face="Arial">John
Locke wrote 300 years ago: "Consciousness is the perception of
what passes in a Man's own mind". Physical states and processes
that represent imagery, and the ability to create symbolic
narratives describing what goes on inside cognitive system,
should be the hallmark of consciousness. Of course more people
will accept it if we put it in a baby robot -:)</font></p>
<p><font face="Arial">This is why I prefer to focus on a simple
requirement: inner world and the ability to describe it. </font><br>
<font face="Arial"><font face="Arial">The road to create robots
that can feel has been described by Kevin O'Regan in the book:
<br>
</font>
</font></p>
<p>O’Regan, J.K. (2011). Why Red Doesn’t Sound Like a Bell:
Understanding the Feel of Consciousness. Oxford University Press,
USA.</p>
<p><font face="Arial">Inner worlds may be based on different
representations, not always deeply grounded in experience.
Binder made a step toward a brain-based semantics:<br>
</font></p>
Binder, J. R., Conant, L. L., Humphries, C. J., Fernandino, L.,
Simons, S. B., Aguilar, M., & Desai, R. H. (2016). Toward a
brain-based componential semantic representation. Cognitive
Neuropsychology, 33(3–4), 130–174.<br>
    Fernandino, L., Tong, J.-Q., Conant, L. L., Humphries, C. J., &
    Binder, J. R. (2022). Decoding the information structure underlying
    the neural representation of concepts. Proceedings of the National
    Academy of Sciences, 119(6). <br>
<font face="Arial"><br>
This does not solve the symbol grounding problem (Harnad, 1990),
but goes half the way, mimicking embodiment by decomposing
symbolic concepts into attributes that are relevant to the brain.
It should be sufficient to add human-like </font><font
face="Arial"><font face="Arial">semantics to bots</font>. As you
mention yourself, embodiment could be more abstract, and I can
imagine that a copy of a robot brain that has grounded its
representations in interactions with environment will endow a new
robot with similar experience. Can we simply implant it in the
network? <br>
</font>
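    <p><font face="Arial">To make this concrete, here is a minimal
        sketch (in Python) of such componential semantics. The
        attribute names and ratings are hypothetical, merely in the
        spirit of Binder's representation, not taken from it:</font></p>
    <pre>
import math

# Hypothetical brain-relevant attribute dimensions, in order.
ATTRIBUTES = ["vision", "motion", "touch", "audition", "emotion", "abstractness"]

# Each concept is decomposed into a vector of attribute ratings (made-up numbers).
concepts = {
    "piano":   [0.9, 0.4, 0.9, 1.0, 0.6, 0.1],
    "bell":    [0.7, 0.3, 0.5, 1.0, 0.4, 0.1],
    "theorem": [0.1, 0.0, 0.0, 0.0, 0.2, 1.0],
}

def cosine(u, v):
    # Similarity of two attribute profiles.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Concepts with similar experiential profiles come out as similar,
# with no sensorimotor experience involved.
print(cosine(concepts["piano"], concepts["bell"]))     # high
print(cosine(concepts["piano"], concepts["theorem"]))  # low
    </pre>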
    <p>I wonder if absorption in abstract thinking leaves space for
      the use of experientially grounded concepts. I used to focus on
      group theory for hours and, for brief moments, was not able to
      understand what was said to me. Was I not conscious? Or should we
      consider a continuous transition from abstract semantics to fully
      embodied, human-like semantics in artificial systems? <br>
    </p>
<p>Wlodek</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 18/02/2022 16:36, Ali Minai wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CABG3s4u+dTT5VQH2i3h1G7xq=H92PV=_y9-4_pNqBuS03sqGSA@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div>Wlodek</div>
<div><br>
</div>
<div>I think that the debate about consciousness in the strong
sense of having a conscious experience like we do is sterile.
We will never have a measuring device for whether another
entity is "conscious", and at some point, we will get to an AI
that is sufficiently complex in its observable behavior that
we will either accept its inner state of consciousness on
trust - just as we do with humans and other animals - or admit
that we will never believe that a machine that is "not like
us" can ever be conscious. The "like us" part is more
important than many of us in the AI field think: A big part of
why we believe other humans and our dogs are conscious is
because we know that they are "like us", and assume that they
must share our capacity for inner conscious experience. We
already see this at a superficial level where, as ordinary
humans, we have a much easier time identifying with an
          embodied, humanoid AI like WALL-E or the Terminator than with
          a disembodied one like HAL or Skynet. This is also why so many
          people find the Boston Dynamics "dog" so disconcerting.<br>
</div>
<div><br>
</div>
<div>The question of embodiment is a complex one, as you know,
of course, but I am with those who think that it is necessary
for grounding mental representations - that it is the only way
that the internal representations of the system are linked
          directly to its experience. For example, if an AI system
          trained only on text (like GPT-3) comes to learn that touching
          something hot results in getting burned, we cannot accept that
          as sufficient, because it is based only on the juxtaposition
          of abstractions, not the actual painful experience of getting
          burned. For that, you need a body with
sensors and a brain with a state corresponding to pain -
something that can be done in an embodied robot. This is why I
think that all language systems trained purely on the
assumption of the distributional hypothesis of meaning will
remain superficial; they lack the grounding that can only be
supplied by experience. This does not mean that systems based
on the distributional hypothesis cannot learn a lot, or even
develop brain-like representations, as the following extremely
interesting paper shows:</div>
<div><br>
</div>
<div>Y. Zhang, K. Han, R. Worth, and Z. Liu. Connecting concepts
in the brain by mapping cortical representations of semantic
relations. Nature Communications, 11(1):1877, Apr 2020.</div>
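        <div><br>
        </div>
        <div>To make "distributional" concrete, here is a minimal toy
          sketch in Python, with hypothetical sentences standing in for
          the training text, of what purely co-occurrence-based
          learning amounts to:</div>
        <pre>
from collections import Counter
from itertools import combinations

# Hypothetical miniature corpus standing in for the training text.
corpus = [
    "touching a hot stove causes a burn",
    "touching a hot kettle causes a burn",
    "reading a long book causes no burn",
]

# Count how often each unordered pair of words co-occurs in a sentence.
cooc = Counter()
for sentence in corpus:
    for w1, w2 in combinations(sorted(set(sentence.split())), 2):
        cooc[(w1, w2)] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def profile(word):
    # Co-occurrence counts of `word` with every other vocabulary word.
    return [cooc[tuple(sorted((word, other)))] for other in vocab if other != word]

# "stove" and "kettle" get similar profiles, so a distributional learner
# treats them as similar -- without ever feeling heat.
print(profile("stove"))
print(profile("kettle"))
        </pre>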
<div><br>
In a formal sense, however, embodiment could be in any space,
including very abstract ones. We can think of text data as
GPT-3's world and, in that world, it is "embodied" and its
fundamentally distributional learning, though superficial and
lacking in experience to us, is grounded for it within its
world. Of course, this is not a very useful view of embodiment
and grounding since we want to create AI that is grounded in
our sense, but one of the most under-appreciated risks of AI
        is that, as we develop systems that live in worlds very
        different from ours, they will - implicitly and emergently -
        embody values completely alien to us. The proverbial
        loan-processing AI that learns to be racially biased is just a
        caricature of this hazard, but one that should alert us to
        deeper issues. Our quaintly positivistic and reductionistic
notion that we can deal with such things by removing biases
from data, algorithms, etc., is misplaced. The world is too
complicated for that.</div>
<div><br>
</div>
<div>Ali</div>
<div><br>
</div>
<div>
<div>
<div dir="ltr" class="gmail_signature"
data-smartmail="gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div><b>Ali A. Minai, Ph.D.</b><br>
Professor and Graduate Program
Director<br>
Complex Adaptive Systems Lab<br>
Department of Electrical
Engineering & Computer Science<br>
</div>
<div>828 Rhodes Hall<br>
</div>
<div>University of Cincinnati<br>
Cincinnati, OH 45221-0030<br>
</div>
<div><br>
Phone: (513) 556-4783<br>
Fax: (513) 556-7326<br>
Email: <a
href="mailto:Ali.Minai@uc.edu"
target="_blank"
moz-do-not-send="true"
class="moz-txt-link-freetext">Ali.Minai@uc.edu</a><br>
<a
href="mailto:minaiaa@gmail.com"
target="_blank"
moz-do-not-send="true"
class="moz-txt-link-freetext">minaiaa@gmail.com</a><br>
<br>
WWW: <a
href="http://www.ece.uc.edu/%7Eaminai/"
target="_blank"
moz-do-not-send="true">https://eecs.ceas.uc.edu/~aminai/</a></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Fri, Feb 18, 2022 at 7:27
AM Wlodzislaw Duch <<a href="mailto:wduch@umk.pl"
moz-do-not-send="true" class="moz-txt-link-freetext">wduch@umk.pl</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>Asim,<br>
</p>
        <p>I was on the Anchorage panel, and asked others what could
          be a great achievement in computational intelligence.
          Steve Grossberg replied that symbolic AI is meaningless,
          but the creation of an artificial rat that could survive in
          a hostile environment would be something. Of course, this is
          still difficult, but perhaps DARPA's autonomous machines are
          not that far off? <br>
</p>
        <p>I also had similar discussions with Walter and support
          his position: you cannot separate tightly coupled systems.
          Any external influence will create activation in both, and
          linear causality loses its meaning. This is clear if both
          systems adjust to each other. But even if only one system
          learns (the brain) and the other is mechanical but responds
          to human actions, they may behave as one system. Every
          musician knows that: the piano becomes a part of our body,
          responding in so many ways to our actions, not only by
          producing sounds but also by providing haptic feedback. <br>
</p>
        <p>This simply means that the brains of locked-in people work
          in a somewhat different way than the brains of healthy
          people. Why do we consider them conscious? Because they can
          reflect on their mind states, imagine things and describe
          their inner states. If GPT-3 were coupled with something
          like DALL-E, which creates images from text, and could
          describe what it sees in its inner world and create some
          kind of episodic memory, we would have a hard time denying
          that this thing is conscious of what it has in its mind (a
          sketch of such a coupling follows below). Embodiment helps
          to create an inner world and changes it, but it is not
          necessary for consciousness. Can we find a good argument
          that such a system is not conscious of its own states? It
          may not have all the qualities of human consciousness, but
          that is a matter of more detailed approximation of missing
          functions. <br>
</p>
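        <p>Here is a minimal sketch of the kind of coupling I have in
          mind; the two model functions are hypothetical stubs standing
          in for systems like GPT-3 and DALL-E, not real APIs:</p>
        <pre>
# Hedged sketch: couple a text model with a text-to-image model and
# keep an episodic memory of the resulting "inner world".

def text_model(prompt: str) -> str:
    # Hypothetical stub standing in for a GPT-3-like generator.
    return f"a report about {prompt!r}"

def image_model(description: str) -> bytes:
    # Hypothetical stub standing in for a DALL-E-like text-to-image model.
    return description.encode()

episodic_memory: list[dict] = []

def inner_loop(thought: str) -> str:
    """Imagine the thought, describe the inner image, and remember the episode."""
    image = image_model(thought)
    report = text_model(f"what do you see in the image of {thought}?")
    episodic_memory.append({"thought": thought, "image": image, "report": report})
    return report

print(inner_loop("a red bell"))
print(len(episodic_memory), "episode(s) stored")
        </pre>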
        <p> I made this argument a long time ago (e.g., in "<i><a
              moz-do-not-send="true">Brain-inspired conscious
              computing architecture</a></i>", written over 20 years
          ago; see more papers on this on my web page). </p>
<p>Wlodek</p>
<p>Prof. Włodzisław Duch<br>
Fellow, International Neural Network Society<br>
Past President, European Neural Network Society<br>
Head, Neurocognitive Laboratory, CMIT NCU, Poland<br>
</p>
<p>Google: <a
href="http://www.google.com/search?q=Wlodek+Duch"
target="_blank" moz-do-not-send="true">Wlodzislaw Duch</a></p>
<div><br>
On 18/02/2022 05:22, Asim Roy wrote:<br>
</div>
<blockquote type="cite">
<div>
<p class="MsoNormal">In 1998, after our debate about the
brain at the WCCI in Anchorage, Alaska, I asked Walter
Freeman if he thought the brain controls the body. His
answer was, you can also say that the body controls
the brain. I then asked him if the driver controls a
car, or the pilot controls an airplane. His answer was
the same, that you can also say that the car controls
the driver, or the plane controls the pilot. I then
              realized that Walter was also a philosopher who
              believed in the no-free-will theory, and what he was
              arguing for is that the world is simply made of
              interacting systems. However, both Walter and his
              close friend John Taylor were into consciousness. </p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">I have argued with Walter on many
different topics over nearly two decades and have
utmost respect for him as a scholar, but this first
argument I will always remember.</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">Obviously, there’s a conflict
between consciousness and the No-free Will theory.
Wonder where we stand with regard to this conflict.</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">Asim Roy</p>
<p class="MsoNormal">Professor, Information Systems</p>
<p class="MsoNormal">Arizona State University</p>
<p class="MsoNormal"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e="
target="_blank" moz-do-not-send="true">Lifeboat
Foundation Bios: Professor Asim Roy</a></p>
<p class="MsoNormal"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e="
target="_blank" moz-do-not-send="true">Asim Roy |
iSearch (asu.edu)</a></p>
<p class="MsoNormal"> </p>
<p class="MsoNormal"> </p>
<div>
<div style="border-color:rgb(225,225,225) currentcolor
currentcolor;border-style:solid none
none;border-width:1pt medium medium;padding:3pt 0in
0in">
<p class="MsoNormal"><b>From:</b> Connectionists <a
href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu"
target="_blank" moz-do-not-send="true"><connectionists-bounces@mailman.srv.cs.cmu.edu></a>
<b>On Behalf Of </b>Andras Lorincz<br>
<b>Sent:</b> Tuesday, February 15, 2022 6:50 AM<br>
<b>To:</b> Stephen José Hanson <a
href="mailto:jose@rubic.rutgers.edu"
target="_blank" moz-do-not-send="true"><jose@rubic.rutgers.edu></a>;
Gary Marcus <a href="mailto:gary.marcus@nyu.edu"
target="_blank" moz-do-not-send="true"><gary.marcus@nyu.edu></a><br>
<b>Cc:</b> Connectionists <a
href="mailto:Connectionists@cs.cmu.edu"
target="_blank" moz-do-not-send="true"><Connectionists@cs.cmu.edu></a><br>
<b>Subject:</b> Re: Connectionists: Weird beliefs
about consciousness</p>
</div>
</div>
<p class="MsoNormal"> </p>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Dear Steve and
Gary:</span></p>
</div>
<div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">This is how I
see (try to understand) consciousness and the
related terms: </span></p>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">(Our)
consciousness seems to be related to the
close-to-deterministic nature of the episodes
on from few hundred millisecond to a few
second domain. Control instructions may leave
our brain 200 ms earlier than the action
starts and they become conscious only by that
time. In addition, observations of those may
also be delayed by a similar amount. (It then
follows that the launching of the control
actions is not conscious and -- therefore --
free will can be debated in this very limited
context.) On the other hand, model-based
synchronization is necessary for timely
observation, planning, decision making, and
execution in a distributed and slow
computational system. If this model-based
synchronization is not working properly, then
the observation of the world breaks and
schizophrenic symptoms appear. As an example,
individuals with pronounced schizotypal traits
are particularly successful in self-tickling
(source: <a
href="https://urldefense.com/v3/__https:/philpapers.org/rec/LEMIWP__;!!IKRxdwAv5BmarQ!P1ufmU5XnzpvjxtS2M0AnytlX24RNsoDeNPfsqUNWbF6OU5p9xMqtMj9S3Pn3cY$"
target="_blank" moz-do-not-send="true">
https://philpapers.org/rec/LEMIWP</a>, and a
discussion on Asperger and schizophrenia: <a
href="https://urldefense.com/v3/__https:/www.frontiersin.org/articles/10.3389/fpsyt.2020.503462/full__;!!IKRxdwAv5BmarQ!P1ufmU5XnzpvjxtS2M0AnytlX24RNsoDeNPfsqUNWbF6OU5p9xMqtMj9l5NkQt4$"
target="_blank" moz-do-not-send="true">
https://www.frontiersin.org/articles/10.3389/fpsyt.2020.503462/full</a>)
a manifestation of improper binding. The
internal model enables and the synchronization
requires the internal model and thus a certain
level of consciousness can appear in a time
interval around the actual time instant and
its length depends on the short-term memory.</span></p>
</div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Other issues,
like separating the self from the rest of the
world are more closely related to the soft/hard
style interventions (as called in the recent
deep learning literature), i.e., those
components (features) that can be
modified/controlled, e.g., color and speed, and
the ones that are Lego-like and can be
separated/amputed/occluded/added.</span></p>
</div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Best,</span></p>
</div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Andras</span></p>
</div>
<div id="gmail-m_1420086470503611017Signature">
<div>
<div
id="gmail-m_1420086470503611017divtagdefaultwrapper">
<div
id="gmail-m_1420086470503611017divtagdefaultwrapper">
<div name="divtagdefaultwrapper">
<div>
<div>
<div>
<div>
<p><span style="color:black"> </span></p>
<p><span style="color:black">------------------------------------</span></p>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Andras
Lorincz</span></p>
</div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black"><a
href="https://urldefense.com/v3/__http:/nipg.inf.elte.hu/__;!!IKRxdwAv5BmarQ!P1ufmU5XnzpvjxtS2M0AnytlX24RNsoDeNPfsqUNWbF6OU5p9xMqtMj9j2LbdH0$"
target="_blank"
moz-do-not-send="true">http://nipg.inf.elte.hu/</a></span></p>
</div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Fellow
of the European Association
for Artificial Intelligence</span></p>
</div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black"><a
href="https://urldefense.com/v3/__https:/scholar.google.com/citations?user=EjETXQkAAAAJ&hl=en__;!!IKRxdwAv5BmarQ!P1ufmU5XnzpvjxtS2M0AnytlX24RNsoDeNPfsqUNWbF6OU5p9xMqtMj99i1VRm0$"
target="_blank"
moz-do-not-send="true">https://scholar.google.com/citations?user=EjETXQkAAAAJ&hl=en</a></span></p>
</div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Department
of Artificial Intelligence</span></p>
</div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Faculty
of Informatics</span></p>
</div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Eotvos
Lorand University</span></p>
</div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black">Budapest,
Hungary </span></p>
<p><span style="color:black"> </span></p>
<p><span style="color:black"> </span></p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div>
<p class="MsoNormal"><span
style="font-size:12pt;color:black"> </span></p>
</div>
<div class="MsoNormal" style="text-align:center"
align="center">
<hr width="98%" size="2" align="center"> </div>
<div id="gmail-m_1420086470503611017divRplyFwdMsg">
<p class="MsoNormal"><b><span style="color:black">From:</span></b><span
style="color:black"> Connectionists <<a
href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
on behalf of Stephen José Hanson <<a
href="mailto:jose@rubic.rutgers.edu"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">jose@rubic.rutgers.edu</a>><br>
<b>Sent:</b> Monday, February 14, 2022 8:30 PM<br>
<b>To:</b> Gary Marcus <<a
href="mailto:gary.marcus@nyu.edu"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">gary.marcus@nyu.edu</a>><br>
<b>Cc:</b> Connectionists <<a
href="mailto:connectionists@cs.cmu.edu"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">connectionists@cs.cmu.edu</a>><br>
<b>Subject:</b> Re: Connectionists: Weird beliefs
about consciousness</span> </p>
<div>
<p class="MsoNormal"> </p>
</div>
</div>
<div>
<p style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span
style="font-size:13.5pt;color:black">Gary, these
weren't criterion. Let me try again.</span></p>
<p style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span
style="font-size:13.5pt;color:black">I wasn't
talking about wake-sleep cycles... I was talking
about being awake or asleep and the transition
that ensues..</span></p>
<p style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span
style="font-size:13.5pt;color:black">Rooba's don't
sleep.. they turn off, I have two of them. They
turn on once (1) their batteries are recharged (2)
a timer has been set for being turned on.</span></p>
<p style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span
style="font-size:13.5pt;color:black">GPT3 is
essentially a CYC that actually works.. by reading
Wikipedia (which of course is a terribly biased
sample).</span></p>
<p style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span
style="font-size:13.5pt;color:black">I was
indicating the difference between implicit and
explicit learning/problem solving. Implicit
learning/memory is unconscious and similar to a
habit.. (good or bad).</span></p>
<p style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span
style="font-size:13.5pt;color:black">I believe
that when someone says "is gpt3 conscious?" they
are asking: is gpt3 self-aware? Roombas know
about vacuuming and they are unconscious.</span></p>
<p style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span
style="font-size:13.5pt;color:black">S</span></p>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black">On 2/14/22
12:45 PM, Gary Marcus wrote:</span></p>
</div>
<blockquote style="margin-top:5pt;margin-bottom:5pt">
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black">Stephen,</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"> </p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black">On
criteria (1)-(3), a high-end,
                mapping-equipped Roomba is far more plausible
                as a candidate for consciousness than GPT-3.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"> </p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black">1. The
Roomba has a clearly defined wake-sleep cycle;
GPT does not.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black">2.
Roomba makes choices based on an explicit
representation of its location relative to a
                mapped space. GPT lacks any consistent
                reflection of self; e.g., if you ask it, as I
                have, whether it is a person, and then ask if
                it is a computer, it’s liable to say yes to
                both, showing no stable knowledge of self.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black">3.
                Roomba has explicit, declarative knowledge, e.g.,
                of walls and other boundaries, as well as its own
                location. GPT has no systematically
interrogable explicit representations.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"> </p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black">All this
is said with tongue lodged partway in cheek,
but I honestly don’t see what criterion would
lead anyone to believe that GPT is a more
plausible candidate for consciousness than any
other AI program out there. </span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"> </p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black">ELIZA
long ago showed that you could produce fluent
speech that was mildly contextually relevant,
and even convincing to the untutored; just
because GPT is a better version of that trick
doesn’t mean it’s any more conscious.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"> </p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black">Gary</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black"><br>
<br>
</span></p>
<blockquote
style="margin-top:5pt;margin-bottom:5pt">
<p class="MsoNormal"
style="margin-bottom:12pt;background:rgb(236,202,153)
none repeat scroll 0% 0%"><span
style="color:black">On Feb 14, 2022, at
08:56, Stephen José Hanson <a
href="mailto:jose@rubic.rutgers.edu"
target="_blank" moz-do-not-send="true"><jose@rubic.rutgers.edu></a>
wrote:</span></p>
</blockquote>
</div>
<blockquote style="margin-top:5pt;margin-bottom:5pt">
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none repeat
scroll 0% 0%"><span style="color:black"> </span>
</p>
<p style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="font-size:13.5pt;color:black">this is
a great list of behavior.. </span></p>
<p style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="font-size:13.5pt;color:black">Some
biologically might be termed reflexive,
taxes, classically conditioned, implicit
(memory/learning)... all however would not
be<br>
conscious in the several senses: (1)
wakefulness-- sleep (2) self aware (3)
explicit/declarative.</span></p>
<p style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="font-size:13.5pt;color:black">I think
the term is used very loosely, and I believe
                what GPT-3 and other AI systems are hoped to show
                signs of is "self-awareness"...</span></p>
<p style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="font-size:13.5pt;color:black">In
response to : "why are you doing that?",
"What are you doing now", "what will you be
doing in 2030?"</span></p>
<p style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="font-size:13.5pt;color:black">Steve</span></p>
<p style="background:rgb(236,202,153) none
repeat scroll 0% 0%"> </p>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="color:black">On 2/14/22 10:46 AM,
Iam Palatnik wrote:</span></p>
</div>
<blockquote
style="margin-top:5pt;margin-bottom:5pt">
<div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="color:black">A somewhat related
question, just out of curiosity.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none
repeat scroll 0% 0%"> </p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="color:black">Imagine the
following:</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none
repeat scroll 0% 0%"> </p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="color:black">- An automatic
solar panel that tracks the position
of the sun.</span></p>
</div>
<div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153)
none repeat scroll 0% 0%"><span
style="color:black">- A group of
single celled microbes with
phototaxis that follow the sunlight.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153)
none repeat scroll 0% 0%"><span
style="color:black">- A jellyfish
(animal without a brain) that
follows/avoids the sunlight.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153)
none repeat scroll 0% 0%"><span
style="color:black">- A cockroach
(animal with a brain) that avoids
the sunlight.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153)
none repeat scroll 0% 0%"><span
style="color:black">- A drone with
onboard AI that flies to regions of
more intense sunlight to recharge
its batteries.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153)
none repeat scroll 0% 0%"><span
style="color:black">- A human that
dislikes sunlight and actively
avoids it.</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153)
none repeat scroll 0% 0%"> </p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153)
none repeat scroll 0% 0%"><span
style="color:black">Can any of
these, beside the human, be said to
be aware or conscious of the
sunlight, and why?</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153)
none repeat scroll 0% 0%"><span
style="color:black">What is most
relevant? Being a biological life
form, having a brain, being able to
make decisions based on the
environment? Being taxonomically
close to humans?</span></p>
</div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153)
none repeat scroll 0% 0%"> </p>
</div>
</div>
</div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none
repeat scroll 0% 0%"> </p>
<div>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="color:black">On Mon, Feb 14,
2022 at 12:06 PM Gary Marcus <<a
href="mailto:gary.marcus@nyu.edu"
target="_blank"
moz-do-not-send="true"
class="moz-txt-link-freetext">gary.marcus@nyu.edu</a>>
wrote:</span></p>
</div>
<blockquote style="border-color:currentcolor
currentcolor currentcolor
rgb(204,204,204);border-style:none none
none solid;border-width:medium medium
medium 1pt;padding:0in 0in 0in
6pt;margin-left:4.8pt;margin-right:0in">
<p class="MsoNormal"
style="margin-bottom:12pt;background:rgb(236,202,153)
none repeat scroll 0% 0%"><span
style="color:black">Also true: Many AI
researchers are very unclear about
what consciousness is and also very
sure that ELIZA doesn’t have it.<br>
<br>
Neither ELIZA nor GPT-3 have<br>
- anything remotely related to
embodiment<br>
- any capacity to reflect upon
themselves<br>
<br>
Hypothesis: neither keyword matching
nor tensor manipulation, even at
scale, suffice in themselves to
qualify for consciousness.<br>
<br>
- Gary<br>
<br>
> On Feb 14, 2022, at 00:24,
Geoffrey Hinton <<a
href="mailto:geoffrey.hinton@gmail.com"
target="_blank"
moz-do-not-send="true"
class="moz-txt-link-freetext">geoffrey.hinton@gmail.com</a>>
wrote:<br>
> <br>
> Many AI researchers are very
unclear about what consciousness is
and also very sure that GPT-3 doesn’t
have it. It’s a strange combination.<br>
> <br>
> </span></p>
</blockquote>
</div>
</blockquote>
<div>
<p class="MsoNormal"
style="background:rgb(236,202,153) none
repeat scroll 0% 0%"><span
style="color:black">--</span></p>
</div>
</div>
</blockquote>
</blockquote>
</div>
</div>
</blockquote>
<br>
</div>
</blockquote>
</div>
</blockquote>
<div class="moz-signature">-- <br>
Prof. Włodzisław Duch
<br>
Fellow, International Neural Network Society
<br>
Past President, European Neural Network Society
<br>
Head, Neurocognitive Laboratory, CMIT NCU, Poland
<br>
Google: <a href="http://www.google.com/search?q=Wlodek+Duch">Wlodzislaw
Duch</a></div>
</body>
</html>