<div dir="ltr"><div>...'"consciousness as continuous self-awareness" could apply to corporations and other organizations'...</div><div><br></div><div>And we have made it back around to Stafford Beer's <u>Brain of the Firm</u> and Viable System Model</div><div><br></div><a href="https://en.wikipedia.org/wiki/Viable_system_model">Viable system model - Wikipedia</a><br><div><br></div><div>Cheers!</div><div><br></div><div>Sean</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jun 2, 2023 at 2:19 AM Dietterich, Thomas <<a href="mailto:tgd@oregonstate.edu">tgd@oregonstate.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Janet,<br>
<br>
Yes, I think "consciousness as continuous self-awareness" could apply to corporations and other organizations. I'm not sure how to interpret your question about extended phenotype. From the perspective of assigning legal responsibility, while companies like to blame their software, the courts assign blame to companies and to their management. Is that what you were getting at?<br>
<br>
--Tom<br>
<br>
Thomas G. Dietterich, Distinguished Professor<br>
School of Electrical Engineering and Computer Science<br>
Voice: 541-737-5559 | Fax: 541-737-1300<br>
URL: <a href="http://eecs.oregonstate.edu/~tgd" rel="noreferrer" target="_blank">eecs.oregonstate.edu/~tgd</a><br>
US Mail: 1148 Kelley Engineering Center<br>
Office: 2067 Kelley Engineering Center<br>
Oregon State Univ., Corvallis, OR 97331-5501<br>
<br>
-----Original Message-----<br>
From: Janet Wiles <<a href="mailto:j.wiles@uq.edu.au" target="_blank">j.wiles@uq.edu.au</a>><br>
Sent: Thursday, June 1, 2023 15:37<br>
To: Dietterich, Thomas <<a href="mailto:tgd@oregonstate.edu" target="_blank">tgd@oregonstate.edu</a>>; Jeffrey L Krichmar <<a href="mailto:jkrichma@uci.edu" target="_blank">jkrichma@uci.edu</a>><br>
Cc: Connectionists List <<a href="mailto:connectionists@cs.cmu.edu" target="_blank">connectionists@cs.cmu.edu</a>><br>
Subject: RE: Connectionists: Sentient AI Survey Results<br>
<br>
[This email originated from outside of OSU. Use caution with links and attachments.]<br>
<br>
Tom and Jeff,<br>
Governments already grant rights to non-biological entities in the form of private and public companies. Continuous self-monitoring is certainly part of how some companies operate. Could your definition of consciousness stretch to these?<br>
The discussion around AI technology presupposes an AI entity that operates independently of any human. But technology is (currently) developed and deployed by people for their own reasons. Is a company's technology more than its extended phenotype?<br>
Regards<br>
Janet<br>
<br>
Professor Janet Wiles<br>
she/hers<br>
The University of Queensland<br>
<br>
HCAI Research at UQ: <a href="http://itee.uq.edu.au/research/human-centred-computing/human-centred-ai" rel="noreferrer" target="_blank">itee.uq.edu.au/research/human-centred-computing/human-centred-ai</a><br>
* Where is the Human in HCAI? insights from developer priorities and user experiences <a href="http://doi.org/10.1016/j.chb.2022.107617" rel="noreferrer" target="_blank">doi.org/10.1016/j.chb.2022.107617</a><br>
* Enlarging the model of the human at the heart of HCAI <a href="http://doi.org/10.1016/j.newideapsych.2023.101025" rel="noreferrer" target="_blank">doi.org/10.1016/j.newideapsych.2023.101025</a><br>
<br>
<br>
-----Original Message-----<br>
From: Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>> On Behalf Of Dietterich, Thomas<br>
Sent: Thursday, June 1, 2023 3:37 AM<br>
To: Jeffrey L Krichmar <<a href="mailto:jkrichma@uci.edu" target="_blank">jkrichma@uci.edu</a>><br>
Cc: Connectionists List <<a href="mailto:connectionists@cs.cmu.edu" target="_blank">connectionists@cs.cmu.edu</a>><br>
Subject: Re: Connectionists: Sentient AI Survey Results<br>
<br>
My views:<br>
1. I think we can build a conscious AI system if we define consciousness in terms of continuous self-awareness. Indeed, continuous self-monitoring is an important function of existing computer operating systems and data centers. We should build these functions into our systems to detect failures and prevent errors.<br>
<br>
2. Regarding rights, there is no clear definition of an AI system the way there is an obvious definition of a human being or a dog. An AI system may not even have a definite location or body, for example, because it is code that is running simultaneously on data centers around the world or in a constellation of satellites in Earth orbit. An AI system may be placed into a suspended state and then restarted (or restarted from a previous checkpoint). What would it mean, for example, for such systems to have a right to bodily autonomy? Wouldn't it be okay to suspend them as long as they could be "revived" later? Even people go to sleep and thereby go through time periods when they lack continuous awareness.<br>
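[For concreteness, the suspend-and-revive scenario above can be sketched in a few lines: a toy stateful "agent" is checkpointed, arbitrary downtime passes, and it is restored and continues where it left off. This is purely illustrative; `ToyAgent` and its fields are invented for the example.]

```python
import pickle

class ToyAgent:
    """A toy stateful 'agent' used only to illustrate suspend/revive."""
    def __init__(self):
        self.steps = 0
    def step(self):
        self.steps += 1

agent = ToyAgent()
agent.step()
agent.step()

# Suspend: serialize the agent's full state into a checkpoint.
checkpoint = pickle.dumps(agent)

# ... arbitrary downtime; the original process could even exit here ...

# Revive: restore from the checkpoint and continue as if nothing happened.
revived = pickle.loads(checkpoint)
revived.step()
```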
<br>
I think an interesting set of ideas comes from Strawson's famous essay "Freedom and Resentment." Perhaps, as AI systems continue to develop, we will come to treat some of them as moral agents responsible for their actions. We will resent them when they act with bad intentions and feel warmly toward them when they act with our best interests in mind. Such socially competent agents that act with a deep understanding of human society might deserve rights because of the harms to society that would arise if they were not given those protections. In short, the decision to grant rights (and which rights) will depend on society's evolving attitude toward these systems and their behavior.<br>
<br>
--Tom Dietterich<br>
<br>
Thomas G. Dietterich, Distinguished Professor<br>
School of Electrical Engineering and Computer Science<br>
Voice: 541-737-5559 | Fax: 541-737-1300<br>
URL: <a href="http://eecs.oregonstate.edu/~tgd" rel="noreferrer" target="_blank">eecs.oregonstate.edu/~tgd</a><br>
US Mail: 1148 Kelley Engineering Center<br>
Office: 2067 Kelley Engineering Center<br>
Oregon State Univ., Corvallis, OR 97331-5501<br>
<br>
-----Original Message-----<br>
From: Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" target="_blank">connectionists-bounces@mailman.srv.cs.cmu.edu</a>> On Behalf Of Jeffrey L Krichmar<br>
Sent: Tuesday, May 30, 2023 14:30<br>
To: <a href="mailto:connectionists@cs.cmu.edu" target="_blank">connectionists@cs.cmu.edu</a><br>
Subject: Connectionists: Sentient AI Survey Results<br>
<br>
[This email originated from outside of OSU. Use caution with links and attachments.]<br>
<br>
Dear Connectionists,<br>
<br>
I am teaching an undergraduate course on "AI in Culture and Media". Most students are in our Cognitive Sciences and Psychology programs. Last week we had a discussion and debate on AI, Consciousness, and Machine Ethics. After the debate, around 70 students filled out a survey responding to these questions.<br>
<br>
Q1: Do you think it is possible to build conscious or sentient AI? 65% answered yes.<br>
Q2: Do you think we should build conscious or sentient AI? 22% answered yes.<br>
Q3: Do you think AI should have rights? 54% answered yes.<br>
<br>
I thought many of you would find this interesting. And my students would like to hear your views on the topic.<br>
<br>
Best regards,<br>
<br>
Jeff Krichmar<br>
Department of Cognitive Sciences<br>
2328 Social &amp; Behavioral Sciences Gateway<br>
University of California, Irvine<br>
Irvine, CA 92697-5100<br>
<a href="mailto:jkrichma@uci.edu" target="_blank">jkrichma@uci.edu</a><br>
<a href="http://www.socsci.uci.edu/~jkrichma" rel="noreferrer" target="_blank">http://www.socsci.uci.edu/~jkrichma</a><br>
<a href="https://www.penguinrandomhouse.com/books/716394/neurorobotics-by-tiffany-j-hwu-and-jeffrey-l-krichmar/" rel="noreferrer" target="_blank">https://www.penguinrandomhouse.com/books/716394/neurorobotics-by-tiffany-j-hwu-and-jeffrey-l-krichmar/</a><br>
</blockquote></div>