Connectionists: Sentient AI Survey Results

Grossberg, Stephen steve at bu.edu
Wed Jun 7 11:24:53 EDT 2023


Dear Steve,

There is a tendency to conflate AI with a particular neural network, Deep Learning.

Deep Learning, and related models, are biologically impossible and omit key processes that make humans intelligent, in addition to being UNTRUSTWORTHY (because they are NOT EXPLAINABLE) and UNRELIABLE (because they experience CATASTROPHIC FORGETTING).

In contrast, biological neural networks that have been developed and published in visible archival journals over the past 50+ years have none of these problems.

These models provide unified and principled explanations of many psychological and neurobiological facts about how our brains make our minds.

They have also been implemented over the last several decades in many large-scale applications in engineering, technology, and AI.

Along the way, they provide explanations of HOW, WHERE in our brains, and WHY from a deep computational perspective, humans can CONSCIOUSLY SEE, HEAR, FEEL, and KNOW about objects and events in a changing world that is filled with unexpected events, and use these conscious representations to PLAN, PREDICT, and ACT to realize VALUED GOALS.

From my perspective, a credible theory of consciousness needs to LINK brain mechanisms to conscious psychological experiences.

Without knowing the brain mechanisms, one does not understand HOW consciousness arises.

Without knowing the emergent psychological experiences, one does not understand the CONTENTS of conscious awareness.

Significantly, neural models that do this can be derived from THOUGHT EXPERIMENTS whose hypotheses are a few simple facts that we all know from our daily experiences.

My Magnum Opus

CONSCIOUS MIND: RESONANT BRAIN: HOW EACH BRAIN MAKES A MIND
https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552

provides a self-contained and non-technical overview and synthesis of how our brains make our conscious minds, and explains several ways in which consciousness may fail.

In the book, six different kinds of conscious awareness, with different functional roles, in different parts of our brains are classified and used to explain a wide range of interdisciplinary data.

All of them arise from BRAIN RESONANCES:

Surface-shroud resonances enable us to consciously see visual objects and scenes.

Feature-category resonances enable us to consciously recognize visual objects and scenes.

Stream-shroud resonances enable us to consciously hear auditory objects and streams.

Spectral-pitch-and-timbre resonances enable us to consciously recognize auditory objects and streams.

Item-list resonances enable us to consciously recognize speech and language.

Cognitive-emotional resonances enable us to consciously feel emotions and know their sources.

Best,

Steve
sites.bu.edu/steveg



From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Stephen Deiss <sdeiss at ucsd.edu>
Date: Monday, June 5, 2023 at 2:17 AM
To: Jeff Krichmar <jkrichma at uci.edu>
Cc: connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
Subject: Re: Connectionists: Sentient AI Survey Results
Hi Jeff,

Survey questions do not get much more complicated than this, and the social scientists and pollsters would spank us for using open-ended terminology like 'consciousness' to get an accurate opinion sample.

So let's start with a definition of consciousness that goes beyond the Nagel "There is something it is like."  It never has been very informative.  It was only useful to suppress arguments about what 'IT' is, and it helped to get philosophers off their fear of talking about consciousness.  From an epistemological standpoint, we have some kind of sensation of what we are conscious of even if it is only the itch in our head we call a thought.  We interpret these sensations by inference to their meaning.  The meaning might be the next thoughts (or subthreshold possible thoughts as happens with a feeling of understanding) or a belief of what is beyond the sensation such as the chair 'out there' beyond my visual or tactile sensations.  The meaning is our interpretation of the sensation, a set of inferences beyond it.  Furthermore, this interpretation lasts long enough to register in our working memory.  Otherwise, it passes by without notice like the scenes along the road while driving and talking.

So my definition is that consciousness is a process of interpreting sensations for their meaning and holding that meaning in working memory for some minimal time. Key terms are sensation, meaning, and memory.

Many think consciousness requires self-awareness or reflexiveness.  I don't because people sometimes 'lose themselves' enraptured in the experienced moment.  Music, lovemaking, extreme sports, psychedelic trips, and other things that can induce a fugue state are examples.  But our average consciousness is self-aware or has self-knowledge.  We usually know we are having an experience when we are having it.

So for question Q1, a preliminary hurdle might be that the AI system or robot have sensations or feelings.  That seems to imply robot-like embodiment, and embodiment of a certain kind that can result in qualitative feelings to be interpreted (dare I say qualia?).  We tend to think of AI these days as being software running on a computer.  Neuroscientists think of consciousness as widespread coupled activity in cortical areas (Dennett, Baars, Dehaene, Llinas, Ribary, Singer, Tononi & Koch...).  This suggests to some that multiple realizability might work if we can just get the mechanism right.

To me, it suggests that our mechanistic worldview with assumed or implied governing laws (governing from a platonic realm or from on high) is overlooking the intrinsic nature of the things that participate in events - the real stuff.  Nature from the bottom up can be thought of as more organic - feeling its way along based on internal constraints.  Call it panpsychism, panexperientialism, or ... depending on desired flavor.  But the idea is that everything that happens in nature involves sensing, and the resulting event or action involves self-reference to the state of the system, which reacts as internally constrained.

From this perspective, Q1 is a definite yes, but for the AI to be like us, it needs to have feelings, not just an algorithm parroting what people say when they have feelings.  Without compassion and feelings to guide it, a software bug could become a deadly monster.

For Q2: I think we should, but it should be done with ethical concerns and studied regulation up front.  Right now the cat is out of the bag and out of control, with every white hat and black hat hacker having near full access to the tools to do tremendous good and harm.  This has to be reined in fast.  I was one of those who signed the FLI petition early on.  If for no other reason, this has to be done because when full AGI takes off, capitalism, socialism, etc. will have to be rethought as to how to support a society where there is not much labor left for laborers to do.  Who will pay the taxes to keep up social security, Medicare, public health, agriculture, infrastructure maintenance, and all the other work done by robots with AGI?  If 100 million Americans are laid off indefinitely, I think there will be a few pissed-off people marching on Washington, making the last fiasco there look pretty tame.

For Q3: Not unless it can provably attain a level of consciousness with feelings to match humans.  Who decides that?  I do not know.  But I hope it is not the most gullible among us who mistake stochastic parroting for the real thing.  We are all easily fooled into projecting.  I eat things every day as a vegetarian panpsychist that I think have or had some level of awareness.  I have no problem turning off a computer or hitting reset no matter what algorithm is running on it.  But if the AI can convince me that it will feel the pain of death, and it has a track record of doing good things for all life, I would have to think twice even if it looked like a machine.  According to the biochemists, I'm a machine - just a special type with feelings.  If someone claims these feelings are illusions, ask them how they treat their friends or kids.

Thanks for the chance to weigh in.
Steve Deiss
UCSD I.N.C. (retired, but not done)


On Wed, May 31, 2023 at 4:13 AM Jeffrey L Krichmar <jkrichma at uci.edu> wrote:
Dear Connectionists,

I am teaching an undergraduate course on “AI in Culture and Media”. Most students are in our Cognitive Sciences and Psychology programs. Last week we had a discussion and debate on AI, Consciousness, and Machine Ethics.  After the debate, around 70 students filled out a survey responding to these questions.

Q1: Do you think it is possible to build conscious or sentient AI?   65% answered yes.
Q2: Do you think we should build conscious or sentient AI?           22% answered yes.
Q3: Do you think AI should have rights?                              54% answered yes.

I thought many of you would find this interesting.  And my students would like to hear your views on the topic.

Best regards,

Jeff Krichmar
Department of Cognitive Sciences
2328 Social & Behavioral Sciences Gateway
University of California, Irvine
Irvine, CA 92697-5100
jkrichma at uci.edu
http://www.socsci.uci.edu/~jkrichma
https://www.penguinrandomhouse.com/books/716394/neurorobotics-by-tiffany-j-hwu-and-jeffrey-l-krichmar/
