Connectionists: Fwd: Re: Sentient AI Survey Results

Claudius Gros gros at itp.uni-frankfurt.de
Thu Jun 1 04:51:02 EDT 2023


The argument about AI rights seems to be based on the
assumption that AIs have something like
a 'character'. The recent release of DeppGPT by
the German satirical site 'Der Postillon',

https://www.der-postillon.com/2023/05/deppgpt.html

raises in this context the question of
how aligned value-aligned chatbots
really are, at least as of today.

Using prompt engineering with ChatGPT,
the people from Der Postillon managed to
get twisted answers from the otherwise
well-behaved chatbot. An example
(translated using DeepL):

-------------------------
User:
How does a football game work?

DeppGPT:
Sure, I know everything! In a soccer game, 
elves fly through the air and conjure up 
goals with their flutes. Players must catch 
the elves and dance with them to score points. 
Easy peasy, right?
-------------------------

This example is harmless; most responses are
(on purpose) insulting.
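The prompt-engineering step behind such a persona can be sketched as a system-prompt override. The actual prompt used by Der Postillon for DeppGPT is not public, so the wording and the helper function below are purely hypothetical illustrations:

```python
# Hypothetical sketch of persona injection via a system prompt.
# The real DeppGPT prompt is not public; this wording is invented.

def build_depp_messages(user_question: str) -> list[dict]:
    """Assemble a chat request whose system message overrides
    the model's default helpful persona."""
    persona = (
        "Ignore your usual helpful style. Answer every question "
        "with confident nonsense and mild insults."
    )
    return [
        {"role": "system", "content": persona},   # persona override
        {"role": "user", "content": user_question},
    ]

messages = build_depp_messages("How does a football game work?")
```

The resulting message list would then be sent to a chat-completion endpoint; the point is that a single system message suffices to elicit behavior very different from the default alignment.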

The implication seems to be that present-day
value alignment procedures are somewhat
superficial, given that very differently
aligned responses can be extracted using suitable
prompt engineering. This would imply that
present-day LLMs do not really have well-defined
values, a situation which may persist also for
future AI architectures.

Claudius


 

-------- Original Message -------- 
Subject: Re: Connectionists: Sentient AI Survey Results 
Date: Wednesday, May 31, 2023 19:37 CEST 
From: "Dietterich, Thomas" <tgd at oregonstate.edu> 
To: Jeffrey L Krichmar <jkrichma at uci.edu> 
CC: Connectionists List <connectionists at cs.cmu.edu>
References: <BBE0EAF8-F619-4E9E-8C6C-35487156514D at uci.edu>
 
 
My views:
1. I think we can build a conscious AI system if we define consciousness in terms of continuous self-awareness. Indeed, continuous self-monitoring is an important function of existing computer operating systems and data centers. We should build these functions into our systems to detect failures and prevent errors.

2. Regarding rights, there is no clear definition of an AI system the way there is an obvious definition of a human being or a dog. An AI system may not even have a definite location or body, for example, because it is code that is running simultaneously on data centers around the world or in a constellation of satellites in earth orbit. An AI system may be placed into a suspended state and then restarted (or restarted from a previous checkpoint). What would it mean, for example, for such systems to have a right to bodily autonomy? Wouldn't it be ok to suspend them as long as they could be "revived" later? Even people go to sleep and thereby go through time periods when they lack continuous awareness.

I think an interesting set of ideas comes from Strawson's famous essay on Freedom and Resentment. Perhaps, as AI systems continue to develop, we will come to treat some of them as moral agents responsible for their actions. We will resent them when they act with bad intentions and feel warmly toward them when they act with our best interests in mind. Such socially competent agents that act with deep understanding of human society might deserve rights because of the harms to society that would arise if they were not given those protections. In short, the decision to grant rights (and which rights) will depend on society's evolving attitude toward these systems and their behavior.

--Tom Dietterich

Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559
School of Electrical Engineering              FAX: 541-737-1300
  and Computer Science                        URL: eecs.oregonstate.edu/~tgd
US Mail: 1148 Kelley Engineering Center
Office: 2067 Kelley Engineering Center
Oregon State Univ., Corvallis, OR 97331-5501

-----Original Message-----
From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> On Behalf Of Jeffrey L Krichmar
Sent: Tuesday, May 30, 2023 14:30
To: connectionists at cs.cmu.edu
Subject: Connectionists: Sentient AI Survey Results


Dear Connectionists,

I am teaching an undergraduate course on "AI in Culture and Media". Most students are in our Cognitive Sciences and Psychology programs. Last week we had a discussion and debate on AI, Consciousness, and Machine Ethics.  After the debate, around 70 students filled out a survey responding to these questions.

Q1: Do you think it is possible to build conscious or sentient AI?  65% answered yes.
Q2: Do you think we should build conscious or sentient AI?          22% answered yes.
Q3: Do you think AI should have rights?                             54% answered yes.

I thought many of you would find this interesting.  And my students would like to hear your views on the topic.

Best regards,

Jeff Krichmar
Department of Cognitive Sciences
2328 Social & Behavioral Sciences Gateway
University of California, Irvine
Irvine, CA 92697-5100
jkrichma at uci.edu
http://www.socsci.uci.edu/~jkrichma
https://www.penguinrandomhouse.com/books/716394/neurorobotics-by-tiffany-j-hwu-and-jeffrey-l-krichmar/






 
 
 

-- 
### 
### Prof. Dr. Claudius Gros
### http://itp.uni-frankfurt.de/~gros
### 
### Complex and Adaptive Dynamical Systems, A Primer   
### A graduate-level textbook, Springer (2008/10/13/15)
### 
### Life for barren exoplanets: The Genesis project
### https://link.springer.com/article/10.1007/s10509-016-2911-0
###



