Connectionists: Weird beliefs about consciousness

Ali Minai minaiaa at gmail.com
Tue Feb 15 09:58:07 EST 2022


I agree with what Gary is saying. However, I do have a further question.
How will we ever know whether even a fully embodied, high-performing
intelligent machine that can “explain” its decisions has the capacity for
actual self-reflection? How can we be sure that other humans have the
capacity for self-reflection, beyond the fact that they are like us, give
us understandable reports of their inner thinking, and tickle our mirror
system in the right way?

This is not to deny that humans can do self-reflection; it is about whether
we have an objective method to know that another species not like us is
doing it. At some point in what I think will be a fairly distant future, an
nth-generation descendant of GPT-3 may be so sophisticated that the
question will become moot. That is the inevitable consequence of a
non-dualistic view of intelligence that we all share, and will come to pass
unless the current narrow application-driven view of AI derails the entire
project.

I do think that that system will have a very different neural
architecture much closer to the brain’s, will be embodied, will not need to
learn using billions of data points and millions of gradient-descent
iterations, and will be capable of actual mental growth (not just learning
more stuff but learning new modes of understanding) over its lifetime. It
will also not be able to explain its inner thoughts perfectly, nor feel the
need to do so because it will not be our obedient servant. In other words,
it will not be anything like GPT-3.

Ali

On Tue, Feb 15, 2022 at 2:11 AM Gary Marcus <gary.marcus at nyu.edu> wrote:

> Stephen,
>
> On criteria (1)-(3), a high-end, mapping-equipped Roomba is a far more
> plausible candidate for consciousness than GPT-3.
>
> 1. The Roomba has a clearly defined wake-sleep cycle; GPT does not.
> 2. Roomba makes choices based on an explicit representation of its
> location relative to a mapped space. GPT lacks any consistent reflection of
> self; e.g., if you ask it, as I have, whether it is a person, and then ask if
> it is a computer, it’s liable to say yes to both, showing no stable
> knowledge of self.
> 3. Roomba has explicit, declarative knowledge, e.g. of walls and other
> boundaries, as well as its own location. GPT has no systematically
> interrogable explicit representations.
>
> All this is said with tongue lodged partway in cheek, but I honestly don’t
> see what criterion would lead anyone to believe that GPT is a more
> plausible candidate for consciousness than any other AI program out there.
>
> ELIZA long ago showed that you could produce fluent speech that was mildly
> contextually relevant, and even convincing to the untutored; just because
> GPT is a better version of that trick doesn’t mean it’s any more conscious.
>
> Gary
>
> On Feb 14, 2022, at 08:56, Stephen José Hanson <jose at rubic.rutgers.edu>
> wrote:
>
> 
>
> This is a great list of behaviors.
>
> Some, biologically, might be termed reflexive, taxes, classically
> conditioned, or implicit (memory/learning)... all, however, would not be
> conscious in the several senses: (1) wakefulness--sleep, (2) self-aware,
> (3) explicit/declarative.
>
> I think the term is used very loosely, and I believe what GPT-3 and other
> AI systems are hoped to show signs of is "self-awareness".
>
> In response to: "Why are you doing that?", "What are you doing now?",
> "What will you be doing in 2030?"
>
> Steve
>
>
> On 2/14/22 10:46 AM, Iam Palatnik wrote:
>
> A somewhat related question, just out of curiosity.
>
> Imagine the following:
>
> - An automatic solar panel that tracks the position of the sun.
> - A group of single celled microbes with phototaxis that follow the
> sunlight.
> - A jellyfish (animal without a brain) that follows/avoids the sunlight.
> - A cockroach (animal with a brain) that avoids the sunlight.
> - A drone with onboard AI that flies to regions of more intense sunlight
> to recharge its batteries.
> - A human that dislikes sunlight and actively avoids it.
>
> Can any of these, beside the human, be said to be aware or conscious of
> the sunlight, and why?
> What is most relevant? Being a biological life form, having a brain, being
> able to make decisions based on the environment? Being taxonomically close
> to humans?
>
>
> On Mon, Feb 14, 2022 at 12:06 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
>> Also true: Many AI researchers are very unclear about what consciousness
>> is and also very sure that ELIZA doesn’t have it.
>>
>> Neither ELIZA nor GPT-3 has
>> - anything remotely related to embodiment
>> - any capacity to reflect upon themselves
>>
>> Hypothesis: neither keyword matching nor tensor manipulation, even at
>> scale, suffice in themselves to qualify for consciousness.
>>
>> - Gary
>>
>> > On Feb 14, 2022, at 00:24, Geoffrey Hinton <geoffrey.hinton at gmail.com>
>> wrote:
>> >
>> > Many AI researchers are very unclear about what consciousness is and
>> also very sure that GPT-3 doesn’t have it. It’s a strange combination.
>> >
>> >
>>
>
>> --
>
> --
*Ali A. Minai, Ph.D.*
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Past-President (2015-2016)
International Neural Network Society

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minaiaa at gmail.com

WWW: https://eecs.ceas.uc.edu/~aminai/
