<div dir="ltr"><div>Stephen,</div><div><br></div><div>It's curious to me that wake-sleep cycles should be included in the notion of consciousness, in the sense that I see no problems with a conscious creature that does not sleep. Could you tell me a little more about your thinking here?</div><div><br></div><div>Thanks,<br></div><div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div style="font-family:arial;font-size:small"><br></div><div style="font-family:arial;font-size:small">Adam R. Kosiorek<br></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 15 Feb 2022 at 07:12, Stephen José Hanson <<a href="mailto:jose@rubic.rutgers.edu">jose@rubic.rutgers.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#ecca99">
<p><font size="+1">Gary, these weren't criteria. Let me try
again.<br>
</font></p>
<p><font size="+1">I wasn't talking about wake-sleep cycles... I was
talking about being awake or asleep, and the transition that
ensues.</font></p>
<p><font size="+1">Roombas don't sleep; they turn off. I have two
of them. They turn on once (1) their batteries are recharged and
(2) a timer has been set for turning them on.</font></p>
<p><font size="+1">GPT-3 is essentially a CYC that actually works,
by reading Wikipedia (which of course is a terribly biased
sample).</font></p>
<p><font size="+1">I was indicating the difference between implicit
and explicit learning/problem solving. Implicit
learning/memory is unconscious and similar to a habit (good or
bad).</font></p>
<p><font size="+1">I believe that when someone asks "is GPT-3
conscious?" they are asking: is GPT-3 self-aware? Roombas
know about vacuuming, and they are unconscious.</font></p>
<p><font size="+1">S<br>
</font></p>
<div>On 2/14/22 12:45 PM, Gary Marcus wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Stephen,</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">On criteria (1)-(3), a high-end, mapping-equipped
Roomba is a far more plausible candidate for consciousness than GPT-3.</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">1. The Roomba has a clearly defined wake-sleep
cycle; GPT does not.</div>
<div dir="ltr">2. Roomba makes choices based on an explicit
representation of its location relative to a mapped space. GPT
lacks any consistent reflection of self; e.g., if you ask it, as I
have, whether it is a person, and then ask whether it is a computer,
it’s liable to say yes to both, showing no stable knowledge of
self.</div>
<div dir="ltr">3. Roomba has explicit, declarative knowledge, e.g., of
walls and other boundaries, as well as its own location. GPT has no
systematically interrogable explicit representations.</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">All this is said with tongue lodged partway in
cheek, but I honestly don’t see what criterion would lead anyone
to believe that GPT is a more plausible candidate for
consciousness than any other AI program out there. </div>
<div dir="ltr"><br>
</div>
<div dir="ltr">ELIZA long ago showed that you could produce fluent
speech that was mildly contextually relevant, and even
convincing to the untutored; just because GPT is a better
version of that trick doesn’t mean it’s any more conscious.</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Gary</div>
<div dir="ltr"><br>
<blockquote type="cite">On Feb 14, 2022, at 08:56, Stephen José
Hanson <a href="mailto:jose@rubic.rutgers.edu" target="_blank"><jose@rubic.rutgers.edu></a> wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<p><font size="+1">this is a great list of behaviors. <br>
</font></p>
<p><font size="+1">Some, biologically, might be termed
reflexive, taxes (e.g., phototaxis), classically conditioned, or
implicit (memory/learning)... all, however, would not be<br>
conscious in several senses: (1) wakefulness--
sleep, (2) self-aware, (3) explicit/declarative.</font></p>
<p><font size="+1">I think the term is used very loosely, and
I believe what GPT-3 and other AI are hoping to show signs
of is "self-awareness".</font></p>
<p><font size="+1">In response to: "why are you doing
that?", "what are you doing now?", "what will you be doing
in 2030?"</font></p>
<p><font size="+1">Steve<br>
</font></p>
<p><font size="+1"><br>
</font></p>
<div>On 2/14/22 10:46 AM, Iam Palatnik
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>A somewhat related question, just out of curiosity.</div>
<div><br>
</div>
<div>Imagine the following:</div>
<div><br>
</div>
<div>- An automatic solar panel that tracks the position
of the sun.<br>
</div>
<div>
<div>- A group of single celled microbes with phototaxis
that follow the sunlight.</div>
<div>- A jellyfish (animal without a brain) that
follows/avoids the sunlight.</div>
<div>- A cockroach (animal with a brain) that avoids
the sunlight.</div>
<div>- A drone with onboard AI that flies to regions of
more intense sunlight to recharge its batteries.<br>
</div>
<div>- A human that dislikes sunlight and actively
avoids it.<br>
</div>
<div><br>
</div>
<div>Can any of these, besides the human, be said to be
aware or conscious of the sunlight, and why?</div>
<div>What is most relevant? Being a biological life
form, having a brain, being able to make decisions
based on the environment? Being taxonomically close to
humans?<br>
</div>
<div><br>
</div>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Mon, Feb 14, 2022 at
12:06 PM Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" target="_blank">gary.marcus@nyu.edu</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Also true: Many AI
researchers are very unclear about what consciousness is
and also very sure that ELIZA doesn’t have it.<br>
<br>
Neither ELIZA nor GPT-3 has<br>
- anything remotely related to embodiment<br>
- any capacity to reflect upon themselves<br>
<br>
Hypothesis: neither keyword matching nor tensor
manipulation, even at scale, suffices in itself to
qualify for consciousness.<br>
<br>
- Gary<br>
<br>
> On Feb 14, 2022, at 00:24, Geoffrey Hinton <<a href="mailto:geoffrey.hinton@gmail.com" target="_blank">geoffrey.hinton@gmail.com</a>>
wrote:<br>
> <br>
> Many AI researchers are very unclear about what
consciousness is and also very sure that GPT-3 doesn’t
have it. It’s a strange combination.<br>
> <br>
> <br>
<br>
</blockquote>
</div>
</blockquote>
<div>-- <br>
<img src="cid:17efcf1c59d61a917f31" border="0"></div>
</div>
</blockquote>
</blockquote>
<div>-- <br>
</div>
</div>
</blockquote></div>