Connectionists: Weird beliefs about consciousness

Tsvi Achler achler at gmail.com
Mon Feb 14 18:26:56 EST 2022


A huge part of the problem in any discussion about consciousness is that there
isn't even a clear definition of consciousness for humans (and animals), since
consciousness is perceptual and not possible to measure externally.
Moreover, it is possible to do complex tasks without being conscious, e.g.
hold conversations, or go (e.g. drive) from one place to another without
conscious awareness, if you are tired, sick, on medications that keep you
awake and responsive but make you forget the surgery, have bumped your head,
etc.
So consciousness is neither necessary nor sufficient for complex thoughts or
behavior.
Consequently, consciousness may be a behavioral mechanism for self
preservation: it labels *my* memories and experience-gathering as important,
so that I must preserve myself and not do things I know may put an end to
them.
In other words, a figment that behaviorally motivates us to focus on our
survival.

The situation becomes even worse when we apply something that is not well
defined in the first place to algorithms.  That is why I stay away from the
question.
But that is not to say there are no huge differences between organisms and
current AI.  Organisms can learn and update without catastrophic
interference and forgetting.  They don't require the methods currently
needed, which store all previously learned information and re-present
(rehearse) it in fixed proportions and random order (i.i.d.) whenever
something new is to be learned.  Most natural environments are not i.i.d.,
so online learning does not apply either.
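
To make the contrast concrete, here is a minimal sketch of the kind of
rehearsal scheme I mean (my own illustrative toy in plain Python/NumPy, not
any particular published system): every time new data arrives, all previously
seen examples are stored and re-presented in random (i.i.d.) order alongside
it.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_with_full_rehearsal(w, mem_x, mem_y, new_x, new_y,
                                  epochs=10, lr=0.1):
        # Append the new examples to everything stored so far.
        mem_x = np.vstack([mem_x, new_x])
        mem_y = np.concatenate([mem_y, new_y])
        # Re-present the ENTIRE memory in random (i.i.d.) order, every epoch.
        for _ in range(epochs):
            for i in rng.permutation(len(mem_y)):
                x, y = mem_x[i], mem_y[i]
                pred = 1.0 / (1.0 + np.exp(-x @ w))  # logistic unit
                w = w + lr * (y - pred) * x          # per-example gradient step
        return w, mem_x, mem_y

    # Two "tasks" arriving one after the other; the second call must
    # rehearse the first to avoid overwriting it.
    w = np.zeros(2)
    mem_x, mem_y = np.empty((0, 2)), np.empty(0)
    task1_x, task1_y = rng.normal(2.0, 1.0, (50, 2)), np.ones(50)
    task2_x, task2_y = rng.normal(-2.0, 1.0, (50, 2)), np.zeros(50)
    w, mem_x, mem_y = train_with_full_rehearsal(w, mem_x, mem_y, task1_x, task1_y)
    w, mem_x, mem_y = train_with_full_rehearsal(w, mem_x, mem_y, task2_x, task2_y)

The point is that the stored memory grows without bound and must be
re-shuffled for every new task; drop the rehearsal and the earlier task is
overwritten.  Organisms do not appear to need anything like this.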

There is also a huge amount of feedback back to the inputs that has been
hypothesized to be associated with consciousness.  I think the secret to
organism-like flexibility lies in those feedback connections, but associating
these connections with consciousness (as opposed to recognition) is hugely
misleading, not least for the reasons above.

-Tsvi

On Mon, Feb 14, 2022 at 8:20 AM Iam Palatnik <iam.palat at gmail.com> wrote:

> A somewhat related question, just out of curiosity.
>
> Imagine the following:
>
> - An automatic solar panel that tracks the position of the sun.
> - A group of single celled microbes with phototaxis that follow the
> sunlight.
> - A jellyfish (animal without a brain) that follows/avoids the sunlight.
> - A cockroach (animal with a brain) that avoids the sunlight.
> - A drone with onboard AI that flies to regions of more intense sunlight
> to recharge its batteries.
> - A human that dislikes sunlight and actively avoids it.
>
> Can any of these, besides the human, be said to be aware or conscious of
> the sunlight, and why?
> What is most relevant? Being a biological life form, having a brain, being
> able to make decisions based on the environment? Being taxonomically close
> to humans?
>
> On Mon, Feb 14, 2022 at 12:06 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
>> Also true: Many AI researchers are very unclear about what consciousness
>> is and also very sure that ELIZA doesn’t have it.
>>
>> Neither ELIZA nor GPT-3 have
>> - anything remotely related to embodiment
>> - any capacity to reflect upon themselves
>>
>> Hypothesis: neither keyword matching nor tensor manipulation, even at
>> scale, suffice in themselves to qualify for consciousness.
>>
>> - Gary
>>
>> > On Feb 14, 2022, at 00:24, Geoffrey Hinton <geoffrey.hinton at gmail.com>
>> wrote:
>> >
>> > Many AI researchers are very unclear about what consciousness is and
>> also very sure that GPT-3 doesn’t have it. It’s a strange combination.

