Connectionists: Sentient AI Survey Results

Sean Manion stmanion at gmail.com
Fri Jun 2 11:22:14 EDT 2023


...'"consciousness as continuous self-awareness" could apply to
corporations and other organizations'...

And we have made it back around to Stafford Beer's *Brain of the Firm* and
the Viable System Model.

Viable system model - Wikipedia
<https://en.wikipedia.org/wiki/Viable_system_model>

Cheers!

Sean

On Fri, Jun 2, 2023 at 2:19 AM Dietterich, Thomas <tgd at oregonstate.edu>
wrote:

> Janet,
>
> Yes, I think "consciousness as continuous self-awareness" could apply to
> corporations and other organizations. I'm not sure how to interpret your
> question about extended phenotype. From the perspective of assigning legal
> responsibility, while companies like to blame their software, the courts
> assign blame to companies and to their management. Is that what you were
> getting at?
>
> --Tom
>
> Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559
> School of Electrical Engineering              FAX: 541-737-1300
>   and Computer Science                        URL:
> eecs.oregonstate.edu/~tgd
> US Mail: 1148 Kelley Engineering Center
> Office: 2067 Kelley Engineering Center
> Oregon State Univ., Corvallis, OR 97331-5501
>
> -----Original Message-----
> From: Janet Wiles <j.wiles at uq.edu.au>
> Sent: Thursday, June 1, 2023 15:37
> To: Dietterich, Thomas <tgd at oregonstate.edu>; Jeffrey L Krichmar <
> jkrichma at uci.edu>
> Cc: Connectionists List <connectionists at cs.cmu.edu>
> Subject: RE: Connectionists: Sentient AI Survey Results
>
> [This email originated from outside of OSU. Use caution with links and
> attachments.]
>
> Tom and Jeff,
> Governments already grant rights to non-biological entities in the form of
> private and public companies. Continuous self-monitoring is certainly part
> of some companies. Could your definition of consciousness stretch to these?
> The discussion around AI technology presupposes an AI entity that operates
> independently of any human. But technology is (currently) developed and
> deployed by people for their own reasons. Is a company's technology more
> than its extended phenotype?
> Regards
> Janet
>
> Professor Janet Wiles
> she/hers
> The University of Queensland
>
> HCAI Research at UQ:
> itee.uq.edu.au/research/human-centred-computing/human-centred-ai
> * Where is the Human in HCAI? insights from developer priorities and user
> experiences  doi.org/10.1016/j.chb.2022.107617
> * Enlarging the model of the human at the heart of HCAI
> doi.org/10.1016/j.newideapsych.2023.101025
>
>
> -----Original Message-----
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On
> Behalf Of Dietterich, Thomas
> Sent: Thursday, June 1, 2023 3:37 AM
> To: Jeffrey L Krichmar <jkrichma at uci.edu>
> Cc: Connectionists List <connectionists at cs.cmu.edu>
> Subject: Re: Connectionists: Sentient AI Survey Results
>
> My views:
> 1. I think we can build a conscious AI system if we define consciousness
> in terms of continuous self-awareness. Indeed, continuous self-monitoring
> is an important function of existing computer operating systems and data
> centers. We should build these functions into our systems to detect
> failures and prevent errors.
>
> 2. Regarding rights, there is no clear definition of an AI system the way
> there is an obvious definition of a human being or a dog. An AI system may
> not even have a definite location or body, for example, because it is code
> that is running simultaneously on data centers around the world or in a
> constellation of satellites in earth orbit. An AI system may be placed into
> a suspended state and then restarted (or restarted from a previous
> checkpoint). What would it mean, for example, for such systems to have a
> right to bodily autonomy? Wouldn't it be ok to suspend them as long as they
> could be "revived" later? Even people go to sleep and thereby go through
> time periods when they lack continuous awareness.
>
> I think an interesting set of ideas comes from Strawson's famous essay
> "Freedom and Resentment". Perhaps, as AI systems continue to develop, we will
> come to treat some of them as moral agents responsible for their actions.
> We will resent them when they act with bad intentions and feel warmly
> toward them when they act with our best interests in mind. Such
> socially-competent agents that act with deep understanding of human society
> might deserve rights because of the harms to society that would arise if
> they were not given those protections. In short, the decision to grant
> rights (and which rights) will depend on society's evolving attitude toward
> these systems and their behavior.
>
> --Tom Dietterich
>
> Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559
> School of Electrical Engineering              FAX: 541-737-1300
>   and Computer Science                        URL:
> eecs.oregonstate.edu/~tgd
> US Mail: 1148 Kelley Engineering Center
> Office: 2067 Kelley Engineering Center
> Oregon State Univ., Corvallis, OR 97331-5501
>
> -----Original Message-----
> From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On
> Behalf Of Jeffrey L Krichmar
> Sent: Tuesday, May 30, 2023 14:30
> To: connectionists at cs.cmu.edu
> Subject: Connectionists: Sentient AI Survey Results
>
>
> Dear Connectionists,
>
> I am teaching an undergraduate course on "AI in Culture and Media". Most
> students are in our Cognitive Sciences and Psychology programs. Last week
> we had a discussion and debate on AI, Consciousness, and Machine Ethics.
> After the debate, around 70 students filled out a survey responding to
> these questions.
>
> Q1: Do you think it is possible to build conscious or sentient AI?  65% answered yes.
> Q2: Do you think we should build conscious or sentient AI?  22% answered yes.
> Q3: Do you think AI should have rights?  54% answered yes.
>
> I thought many of you would find this interesting.  And my students would
> like to hear your views on the topic.
>
> Best regards,
>
> Jeff Krichmar
> Department of Cognitive Sciences
> 2328 Social & Behavioral Sciences Gateway University of California, Irvine
> Irvine, CA 92697-5100 jkrichma at uci.edu http://www.socsci.uci.edu/~jkrichma
>
> https://www.penguinrandomhouse.com/books/716394/neurorobotics-by-tiffany-j-hwu-and-jeffrey-l-krichmar/
>

