Connectionists: Weird beliefs about consciousness

Ali Minai minaiaa at gmail.com
Fri Feb 18 02:47:16 EST 2022


Jerry Coyne recently did a very nice post on an essay by Massimo Pigliucci
on the topic of free will. It also links to the remarkable recent paper by
Maoz et al. on the difference between random and deliberate choice and its
relationship to readiness-potential experiments.

https://whyevolutionistrue.com/2022/02/09/massimo-pigliucci-free-will-is-incoherent/?fbclid=IwAR1-U_Z34z3-3Uu0L7siMTzyj6Q0v7ey77Gho40V9OW618cPRzEZhR13qSk

I have to agree with Pigliucci and Coyne: free will in the conventional
sense is logically incoherent. However, a lot probably remains to be
discovered about volition.

Ali


*Ali A. Minai, Ph.D.*
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minaiaa at gmail.com

WWW: https://eecs.ceas.uc.edu/~aminai/


On Fri, Feb 18, 2022 at 2:20 AM Asim Roy <ASIM.ROY at asu.edu> wrote:

> In 1998, after our debate about the brain at the WCCI in Anchorage,
> Alaska, I asked Walter Freeman if he thought the brain controls the
> body. His answer was that you can also say that the body controls the
> brain. I then asked him if the driver controls a car, or the pilot
> controls an airplane. His answer was the same: you can also say that
> the car controls the driver, or the plane controls the pilot. I then
> realized that Walter was also a philosopher who believed in the
> no-free-will theory, and that what he was arguing for is that the world
> is simply made of interacting systems. However, both Walter and his
> close friend John Taylor were into consciousness.
>
>
>
> I have argued with Walter on many different topics over nearly two
> decades and have the utmost respect for him as a scholar, but this
> first argument I will always remember.
>
>
>
> Obviously, there’s a conflict between consciousness and the
> no-free-will theory. I wonder where we stand with regard to this
> conflict.
>
>
>
> Asim Roy
>
> Professor, Information Systems
>
> Arizona State University
>
> Lifeboat Foundation Bios: Professor Asim Roy
> <https://lifeboat.com/ex/bios.asim.roy>
>
> Asim Roy | iSearch (asu.edu)
> <https://isearch.asu.edu/profile/9973>
>
>
>
>
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> *On
> Behalf Of *Andras Lorincz
> *Sent:* Tuesday, February 15, 2022 6:50 AM
> *To:* Stephen José Hanson <jose at rubic.rutgers.edu>; Gary Marcus <
> gary.marcus at nyu.edu>
> *Cc:* Connectionists <Connectionists at cs.cmu.edu>
> *Subject:* Re: Connectionists: Weird beliefs about consciousness
>
>
>
> Dear Steve and Gary:
>
> This is how I see (try to understand) consciousness and the related terms:
>
> (Our) consciousness seems to be related to the close-to-deterministic
> nature of episodes in the few-hundred-millisecond to few-second range.
> Control instructions may leave our brain about 200 ms before the action
> starts, and they become conscious only by that time. In addition, our
> observations of those actions may be delayed by a similar amount. (It
> then follows that the launching of the control actions is not conscious
> and -- therefore -- free will can be debated in this very limited
> context.) On the other hand, model-based synchronization is necessary
> for timely observation, planning, decision making, and execution in a
> distributed and slow computational system. If this model-based
> synchronization is not working properly, then the observation of the
> world breaks down and schizophrenic symptoms appear. As an example,
> individuals with pronounced schizotypal traits are particularly
> successful in self-tickling (source: https://philpapers.org/rec/LEMIWP,
> and a discussion of Asperger syndrome and schizophrenia:
> https://www.frontiersin.org/articles/10.3389/fpsyt.2020.503462/full),
> a manifestation of improper binding. The synchronization requires the
> internal model, and the internal model enables it; thus a certain level
> of consciousness can appear in a time interval around the actual time
> instant, the length of which depends on short-term memory.
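>
> As a toy numerical sketch of this timing argument (illustrative Python
> only, using just the rough figures cited above):
>
>     LAUNCH_MS = 0               # control instruction leaves the brain
>     EFFERENT_DELAY_MS = 200     # the action starts ~200 ms after launch
>     AWARENESS_DELAY_MS = 200    # conscious access is delayed similarly
>
>     action_start = LAUNCH_MS + EFFERENT_DELAY_MS        # t = 200 ms
>     conscious_access = LAUNCH_MS + AWARENESS_DELAY_MS   # t = 200 ms
>
>     # The launch at t = 0 precedes conscious access at t = 200 ms,
>     # so the launching itself cannot have been conscious.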
>
> Other issues, like separating the self from the rest of the world, are
> more closely related to soft/hard-style interventions (as they are
> called in the recent deep learning literature), i.e., those components
> (features) that can be modified/controlled, e.g., color and speed,
> versus the ones that are Lego-like and can be
> separated/amputated/occluded/added.
>
> Best,
>
> Andras
>
>
>
> ------------------------------------
>
> Andras Lorincz
>
> http://nipg.inf.elte.hu/
>
> Fellow of the European Association for Artificial Intelligence
>
> https://scholar.google.com/citations?user=EjETXQkAAAAJ&hl=en
>
> Department of Artificial Intelligence
>
> Faculty of Informatics
>
> Eotvos Lorand University
>
> Budapest, Hungary
>
>
>
>
>
>
> ------------------------------
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on
> behalf of Stephen José Hanson <jose at rubic.rutgers.edu>
> *Sent:* Monday, February 14, 2022 8:30 PM
> *To:* Gary Marcus <gary.marcus at nyu.edu>
> *Cc:* Connectionists <connectionists at cs.cmu.edu>
> *Subject:* Re: Connectionists: Weird beliefs about consciousness
>
>
>
> Gary, these weren't criteria. Let me try again.
>
> I wasn't talking about wake-sleep cycles... I was talking about being
> awake or asleep and the transition that ensues...
>
> Roombas don't sleep... they turn off; I have two of them. They turn on
> once (1) their batteries are recharged and (2) a timer has been set for
> turning on.
>
> GPT3 is essentially a CYC that actually works... by reading Wikipedia
> (which of course is a terribly biased sample).
>
> I was indicating the difference between implicit and explicit
> learning/problem solving. Implicit learning/memory is unconscious and
> similar to a habit... (good or bad).
>
> I believe that when someone asks "Is GPT3 conscious?" they are really
> asking: is GPT3 self-aware? Roombas know about vacuuming, and they are
> unconscious.
>
> S
>
> On 2/14/22 12:45 PM, Gary Marcus wrote:
>
> Stephen,
>
>
>
> On criteria (1)-(3), a high-end, mapping-equipped Roomba is a far more
> plausible candidate for consciousness than GPT-3.
>
>
>
> 1. The Roomba has a clearly defined wake-sleep cycle; GPT does not.
>
> 2. The Roomba makes choices based on an explicit representation of its
> location relative to a mapped space. GPT lacks any consistent
> representation of self; e.g., if you ask it, as I have, whether it is a
> person, and then ask whether it is a computer, it’s liable to say yes
> to both, showing no stable knowledge of self.
>
> 3. The Roomba has explicit, declarative knowledge, e.g., of walls and
> other boundaries, as well as its own location. GPT has no
> systematically interrogable explicit representations (see the toy
> sketch below).
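>
> As a toy illustration of what "systematically interrogable" means here
> (hypothetical Python; not anything from an actual Roomba):
>
>     class OccupancyGrid:
>         """An explicit map: every cell is free, a wall, or unknown."""
>         FREE, WALL, UNKNOWN = 0, 1, 2
>
>         def __init__(self, width, height):
>             # All cells start out unknown until the robot explores them.
>             self.cells = [[self.UNKNOWN] * width for _ in range(height)]
>             self.robot_xy = (0, 0)  # the robot's own location, stored explicitly
>
>         def is_wall(self, x, y):
>             return self.cells[y][x] == self.WALL
>
> Ask it twice whether cell (3, 4) is a wall and you get the same answer
> both times; ask GPT twice whether it is a person and you may not.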
>
>
>
> All this is said with tongue lodged partway in cheek, but I honestly don’t
> see what criterion would lead anyone to believe that GPT is a more
> plausible candidate for consciousness than any other AI program out there.
>
>
>
> ELIZA long ago showed that you could produce fluent speech that was mildly
> contextually relevant, and even convincing to the untutored; just because
> GPT is a better version of that trick doesn’t mean it’s any more conscious.
>
>
>
> Gary
>
>
>
> On Feb 14, 2022, at 08:56, Stephen José Hanson <jose at rubic.rutgers.edu>
> wrote:
>
>
> this is a great list of behaviors...
>
> Some, biologically, might be termed reflexive, taxes (as in
> phototaxis), classically conditioned, or implicit (memory/learning)...
> all, however, would not be conscious in several senses: (1)
> wakefulness--sleep, (2) self-aware, (3) explicit/declarative.
>
> I think the term is used very loosely, and I believe what people are
> hoping GPT3 and other AI systems will show signs of is
> "self-awareness"...
>
> In response to: "Why are you doing that?", "What are you doing now?",
> "What will you be doing in 2030?"
>
> Steve
>
>
>
> On 2/14/22 10:46 AM, Iam Palatnik wrote:
>
> A somewhat related question, just out of curiosity.
>
>
>
> Imagine the following:
>
>
>
> - An automatic solar panel that tracks the position of the sun.
>
> - A group of single celled microbes with phototaxis that follow the
> sunlight.
>
> - A jellyfish (animal without a brain) that follows/avoids the sunlight.
>
> - A cockroach (animal with a brain) that avoids the sunlight.
>
> - A drone with onboard AI that flies to regions of more intense sunlight
> to recharge its batteries.
>
> - A human that dislikes sunlight and actively avoids it.
>
>
>
> Can any of these, besides the human, be said to be aware or conscious
> of the sunlight, and why?
>
> What is most relevant? Being a biological life form? Having a brain?
> Being able to make decisions based on the environment? Being
> taxonomically close to humans?
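>
> For calibration, the solar panel's and even the drone's light-seeking
> could be implemented in a few lines (a hypothetical sketch, assuming a
> simple pair of light sensors):
>
>     def seek_light(left_sensor, right_sensor, turn):
>         """Rotate toward the brighter side; no model, no self anywhere."""
>         if left_sensor() > right_sensor():
>             turn(-1)  # rotate left, toward the light
>         elif right_sensor() > left_sensor():
>             turn(+1)  # rotate right, toward the light
>
> Is a loop this simple "aware" of the sunlight in any meaningful sense?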
>
>
>
>
> On Mon, Feb 14, 2022 at 12:06 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
> Also true: Many AI researchers are very unclear about what consciousness
> is and also very sure that ELIZA doesn’t have it.
>
> Neither ELIZA nor GPT-3 has
> - anything remotely related to embodiment
> - any capacity to reflect upon itself
>
> Hypothesis: neither keyword matching nor tensor manipulation, even at
> scale, suffices in itself to qualify for consciousness.
>
> - Gary
>
> > On Feb 14, 2022, at 00:24, Geoffrey Hinton <geoffrey.hinton at gmail.com>
> wrote:
> >
> > Many AI researchers are very unclear about what consciousness is and
> also very sure that GPT-3 doesn’t have it. It’s a strange combination.
> >
> >
>
> --
>
> --
>