Connectionists: Sentient AI Survey Results: From crayfish swimmerets to human cognition: An evolutionary analysis
Grossberg, Stephen
steve at bu.edu
Thu Jun 8 10:51:48 EDT 2023
Dear Steve,
Thanks very much for asking the following very interesting question:
“The claim is that ART and CogEM together provide a backdoor explanation of consciousness in any system. But is it assumed that the system has to be capable of emotion, learning, and memory?... how far down do you think such systems go in nature?”
All my work takes an evolutionary perspective to try to answer just that question. Of course, we cannot redo evolution. But models can try to recapitulate a kind of “conceptual evolution”.
My Magnum Opus
https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552
is structured to do just that.
For example, on p. 495 of Chapter 13, I ask:
“What is the simplest network that can learn to perform an arbitrarily complicated sequence of actions, such as a piano sonata, dance, or other skilled sequence of actions? What is the minimum number of cells needed to do this?
The answer is: ONE!”
I call this kind of circuit an avalanche.
This answer immediately clarifies how many species can do complicated things with small nervous systems.
It also raises the question: Why do our brains need so many cells?
The main problems with a one-cell avalanche are that its performance is entirely ritualistic and insensitive to environmental feedback.
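To make this concrete, here is a minimal sketch of an avalanche in Python. The class name and the simple discrete outstar update are illustrative assumptions, not the book's differential equations; the point is that one command cell suffices to store, and then ritually replay, an arbitrary sequence of spatial patterns.

import numpy as np

class Avalanche:
    """One command cell drives a chain of outstars, one per step of the sequence."""
    def __init__(self, seq_len, pattern_dim):
        self.w = np.zeros((seq_len, pattern_dim))   # outstar weights for each step

    def learn(self, patterns, rate=1.0):
        # While the single command cell is active, the outstar sampled at step t
        # drifts toward the spatial pattern being performed at step t.
        for t, p in enumerate(patterns):
            self.w[t] += rate * (np.asarray(p, dtype=float) - self.w[t])

    def replay(self):
        # Performance is ballistic: once the command cell fires, every stored
        # pattern is read out in order, with no sensitivity to environmental
        # feedback -- exactly the ritualistic limitation noted above.
        for t in range(len(self.w)):
            yield self.w[t]

melody = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1]]   # a 3-step "piano" sequence
av = Avalanche(seq_len=3, pattern_dim=4)
av.learn(melody)
for step in av.replay():
    print(step)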
I then show how, by incrementally adding mechanisms that provide different kinds of behavioral flexibility and sensitivity to environmental feedback, increasingly powerful mechanisms of learning, cognition, emotion, and action come into view.
Along the way, identifiable circuits of organisms, as varied as crayfish swimmerets and songbird pattern generators, are mechanistically explained.
This is all summarized in Chapter 13 of my book. You might particularly want to look at Figure 13.22 for evolutionary precursors of cognition and emotion.
This evolutionary analysis leads to some sophisticated explanations and predictions, including the role of dendritic back-propagating action potentials and calcium currents in the regulation of associative learning and stable memory on dendritic spines (see Figure 13.38).
For example, on p. 511, I wrote:
“Consistent with this explanation, Guang Yang, Feng Pan, and Wen-Biao Gan have shown in their 2009 article in Nature that dendritic spines can maintain memories for an entire lifetime in their experimental rats (Yang, Pan, and Gan, 2009). Such stable memories also help a CogEM model with a READ opponent process to explain, in addition to basic phenomena like primary and secondary excitatory and inhibitory conditioning, the persistence of instrumental avoidance behaviors and why Pavlovian conditioned inhibitors do not extinguish, among other conditioning data (Grossberg, 1972a; Grossberg and Schmajuk, 1987; Kamin, Brimer, and Black, 1963; Lysle and Fowler, 1985; Maier, Seligman, and Solomon, 1969; Miller and Schachtman, 1985; Owren and Kaplan, 1981; Solomon, Kamin, and Wynne, 1953; Witcher, 1978; Zimmer-Hart and Rescorla, 1974). It is instructive to summarize how the model explains why Pavlovian conditioned excitators do extinguish, but conditioned inhibitors do not. Before doing so, let me also point out that the dissociation of LTM read-out and read-in also helps to explain a phenomenon that has been the subject of intense experimental study during the past 20 years; namely, how reconsolidation of memories occurs when old LTM is read out, and how that may provide a powerful tool for combatting mental disorders like post-traumatic stress disorder, or PTSD.”
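For readers who want to see the opponent-process mechanism that a READ circuit builds upon, here is a minimal gated dipole simulation in Python. The discrete-time update, parameter values, and function name are illustrative assumptions, not the published READ equations; the sketch shows how habituating transmitter gates produce an antagonistic rebound in the OFF channel when phasic input to the ON channel shuts off, the kind of rebound that opponent-process explanations of conditioning rely on.

import numpy as np

def gated_dipole(J=1.0, steps=600, dt=0.05, I=0.5, A=0.1, B=1.0):
    """Opponent ON/OFF channels with habituating transmitter gates.
    J: phasic input to the ON channel for the first half of the run.
    I: tonic arousal to both channels. A, B: transmitter recovery rate and ceiling."""
    z_on, z_off = B, B                              # gates start fully accumulated
    on_out, off_out = np.zeros(steps), np.zeros(steps)
    for t in range(steps):
        S_on = I + (J if t < steps // 2 else 0.0)   # input ON, then shut off
        S_off = I
        # Habituation: dz/dt = A*(B - z) - S*z (release depletes, recovery is slow)
        z_on += dt * (A * (B - z_on) - S_on * z_on)
        z_off += dt * (A * (B - z_off) - S_off * z_off)
        T_on, T_off = S_on * z_on, S_off * z_off    # transmitter-gated signals
        on_out[t] = max(T_on - T_off, 0.0)          # opponent subtraction
        off_out[t] = max(T_off - T_on, 0.0)         # antagonistic rebound lives here
    return on_out, off_out

on, off = gated_dipole()
print("peak ON response while input is on  :", round(on.max(), 3))
print("peak OFF rebound after input offset :", round(off.max(), 3))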
Best again,
Steve
From: Stephen Deiss <sdeiss at ucsd.edu>
Date: Thursday, June 8, 2023 at 1:03 AM
To: Grossberg, Stephen <steve at bu.edu>
Cc: Jeff Krichmar <jkrichma at uci.edu>, connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
Subject: Re: Connectionists: Sentient AI Survey Results: How analysis of learning without catastrophic forgetting led to neural models of conscious brain states
Steve,
This is very interesting. Let me read your tome and get back with some better-informed questions if you have time for a few. One I have upfront follows.
The claim is that ART and CogEM together provide a backdoor explanation of consciousness in any system. But is it assumed that the system has to be capable of emotion, learning, and memory? Those are 4 loaded terms. So how far down do you think such systems go in nature?
Sincerely,
Steve D.
On Wed, Jun 7, 2023 at 3:08 PM Grossberg, Stephen <steve at bu.edu> wrote:
Dear Steve,
Thanks for your prompt and thoughtful reply!
I will respond mostly to your question:
“I think a question remains after all the details get worked out about how consciousness arises in brains: why should it only happen at that level of natural resonant oscillation?”
I should at the outset note that my first discoveries about CONSCIOUSNESS emerged from my work on how humans LEARN quickly without being forced to forget just as quickly.
Otherwise expressed:
How do we learn quickly without experiencing catastrophic forgetting?
I called this problem the stability-plasticity dilemma.
Starting in 1976, I began to solve this problem when I introduced Adaptive Resonance Theory, or ART.
After incremental principled development to the present time, ART is now the most advanced cognitive and neural theory of how our brains learn to attend, recognize, and predict objects and events in a changing world that is filled with unexpected events.
This claim is supported in several ways:
All the foundational hypotheses of ART have been supported by subsequent psychological and neurobiological experiments.
ART has also provided principled and unifying explanations of scores of additional experiments.
Last but not least, by 1980 I published, in an oft-cited article in Psychological Review, a THOUGHT EXPERIMENT which shows that ART systems are the UNIQUE solutions of the problem of how ANY system can AUTONOMOUSLY LEARN to CORRECT PREDICTIVE ERRORS in a changing world that is filled with unexpected events.
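To make the stability-plasticity idea above concrete, here is a minimal sketch of the ART search cycle in Python. It assumes a simplified binary (ART1-style) network with nonzero inputs; the class name, parameter values, and the exact forms of the choice and match functions are illustrative simplifications, not a faithful implementation of the published theory. The key point it illustrates is that learning is local to the category that resonates, so committing a new category never overwrites old memories.

import numpy as np

class ART1:
    def __init__(self, vigilance=0.6, beta=1.0):
        self.rho = vigilance     # matching threshold: higher vigilance = finer categories
        self.beta = beta         # small constant in the category-choice function
        self.categories = []     # top-down prototypes, one binary vector per category

    def train(self, x):
        x = np.asarray(x, dtype=float)
        # Rank committed categories by the choice function T_j = |x ^ w_j| / (beta + |w_j|).
        order = sorted(
            range(len(self.categories)),
            key=lambda j: -np.minimum(x, self.categories[j]).sum()
                          / (self.beta + self.categories[j].sum()))
        for j in order:
            w = self.categories[j]
            match = np.minimum(x, w).sum() / x.sum()
            if match >= self.rho:                      # resonance: good enough match
                self.categories[j] = np.minimum(x, w)  # fast learning refines ONLY this prototype
                return j
            # mismatch: reset this category and search the next one
        self.categories.append(x.copy())               # no resonance anywhere: commit a new category
        return len(self.categories) - 1

net = ART1(vigilance=0.6)
print(net.train([1, 1, 0, 0, 0]))   # -> 0: first input commits category 0
print(net.train([0, 0, 1, 1, 1]))   # -> 1: too different to resonate; a new category
                                    #       forms and category 0 is left untouched
print(net.train([1, 1, 1, 0, 0]))   # -> 0: resonates with category 0 and refines it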
The CogEM (Cognitive-Emotional-Motor) model was also derived from a THOUGHT EXPERIMENT and explains lots of interdisciplinary data about how cognition and emotion interact to achieve valued goals.
The hypotheses used to derive these models are familiar facts that we all know from our daily experiences. Thus, unless one can find a logical flaw in the thought experiments, their conclusions follow logically from undeniable facts. No one has, to the best of my knowledge, yet reported such a logical flaw.
Moreover, these facts never mention mind or brain.
Thus ART and CogEM are UNIVERSAL solutions of these learning and prediction problems.
Moreover, both classes of models solve their problems using different kinds of ADAPTIVE RESONANCES.
Explanations of many other kinds of data fell out of the wash:
For example, these models explain how specific breakdowns in brain mechanisms cause behavioral symptoms of MENTAL DISORDERS, including Alzheimer's disease, autism, amnesia, schizophrenia, and ADHD.
I only realized later that ART and CogEM together also explain HOW, WHERE in our brains, and WHY our brains support CONSCIOUS STATES OF SEEING, HEARING, FEELING, and KNOWING, and use these conscious states to PLAN and ACT to realize valued goals.
This latter realization arose after I used ART to provide unified and principled explanations of how interacting brain mechanisms gave rise to parametric properties of psychological behaviors.
I gradually realized that the psychological behaviors being explained were conscious. I had, through the back door as it were, discovered how adaptive resonances generate conscious behaviors.
The classification of six resonances that I listed in my earlier email gradually arose from similar principled explanations of different kinds of psychological experiences.
As I earlier mentioned, I explain all this in a self-contained and non-technical way in my Magnum Opus:
https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552
Best again,
Steve
From: Stephen Deiss <sdeiss at ucsd.edu>
Date: Wednesday, June 7, 2023 at 12:17 PM
To: Grossberg, Stephen <steve at bu.edu>
Cc: Jeff Krichmar <jkrichma at uci.edu>, connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
Subject: Re: Connectionists: Sentient AI Survey Results
Hi Steve,
Thanks for adding these comments. My bad for not including you on that list of consciousness researchers. I admit to having your book on my shelf but I have not started on it yet. I'm somewhat familiar with ART and its historical development, and resonance fits right in with my summary term of 'coupled areas.' (Actually, I started down my own quest reading many of your papers about ART in the early '80s after we had lunch at NTSU in Denton, TX where you spoke and gave what the conference host called your 'big mountain' view, as you may recall.) Resonance or global coherent oscillations in brains, ignoring finer distinctions, have always seemed a no-brainer to me since reading Hebb (OoB). Your work developed Hebb's intuitions into a quantitative and experimentally supported coherent theory. Thank you for sharing the synopsis for all here.
I think a question remains after all the details get worked out about how consciousness arises in brains: why should it only happen at that level of natural resonant oscillation? Resonance or what underlies it may go very deep. If socially-informed self-awareness is dropped as an assumed requirement, and if 'sensation' is treated as more than a metaphor for what happens in more elementary systems participating in events, then there is an opening for a paradigm shift in how we think about conscious systems. This has no small bearing on how an artifact could be conscious too. I will have more to present about this bottom-up thinking about consciousness soon in a planned publication.
Best,
Steve
On Wed, Jun 7, 2023 at 8:24 AM Grossberg, Stephen <steve at bu.edu> wrote:
Dear Steve,
There is a tendency to conflate AI with a particular neural network, Deep Learning.
Deep Learning, and related models, are biologically impossible and omit key processes that make humans intelligent, in addition to being UNTRUSTWORTHY (because they are NOT EXPLAINABLE) and UNRELIABLE (because they experience CATASTROPHIC FORGETTING).
In contrast, biological neural networks that have been developed and published in visible archival journals over the past 50+ years have none of these problems.
These models provide unified and principled explanations of many psychological and neurobiological facts about how our brains make our minds.
They have also been implemented over the last several decades in many large-scale applications in engineering, technology, and AI.
Along the way, they provide explanations of HOW, WHERE in our brains, and WHY from a deep computational perspective, humans can CONSCIOUSLY SEE, HEAR, FEEL, and KNOW about objects and events in a changing world that is filled with unexpected events, and use these conscious representations to PLAN, PREDICT, and ACT to realize VALUED GOALS.
From my perspective, a credible theory of consciousness needs to LINK brain mechanisms to conscious psychological experiences.
Without knowing the brain mechanisms, one does not understand HOW consciousness arises.
Without knowing the emergent psychological experiences, one does not understand the CONTENTS of conscious awareness.
Significantly, neural models that do this can be derived from THOUGHT EXPERIMENTS whose hypotheses are a few simple facts that we all know from our daily experiences.
My Magnum Opus
CONSCIOUS MIND: RESONANT BRAIN: HOW EACH BRAIN MAKES A MIND
https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552
provides a self-contained and non-technical overview and synthesis of how our brains make our conscious minds, and explains several ways in which consciousness may fail.
In the book, six different kinds of conscious awareness, with different functional roles, in different parts of our brains are classified and used to explain lots of interdisciplinary data.
All of them arise from BRAIN RESONANCES:
Surface-shroud resonances enable us to consciously see visual objects and scenes.
Feature-category resonances enable us to consciously recognize visual objects and scenes.
Stream-shroud resonances enable us to consciously hear auditory objects and streams.
Spectral-pitch-and-timbre resonances enable us to consciously recognize auditory objects and streams.
Item-list resonances enable us to consciously recognize speech and language.
Cognitive-emotional resonances enable us to consciously feel emotions and know their sources.
Best,
Steve
sites.bu.edu/steveg
From: Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on behalf of Stephen Deiss <sdeiss at ucsd.edu>
Date: Monday, June 5, 2023 at 2:17 AM
To: Jeff Krichmar <jkrichma at uci.edu>
Cc: connectionists at cs.cmu.edu <connectionists at cs.cmu.edu>
Subject: Re: Connectionists: Sentient AI Survey Results
Hi Jeff,
Survey questions do not get much more complicated than this, and the social scientists and pollsters would spank us for using open-ended terminology like 'consciousness' to get an accurate opinion sample.
So let's start with a definition of consciousness that goes beyond Nagel's "There is something it is like." That phrase never has been very informative. It was only useful to suppress arguments about what 'IT' is, and it helped to get philosophers off their fear of talking about consciousness. From an epistemological standpoint, we have some kind of sensation of whatever we are conscious of, even if it is only the itch in our head we call a thought. We interpret these sensations by inference to their meaning. The meaning might be the next thoughts (or subthreshold possible thoughts, as happens with a feeling of understanding) or a belief about what is beyond the sensation, such as the chair 'out there' beyond my visual or tactile sensations. The meaning is our interpretation of the sensation, a set of inferences beyond it. Furthermore, this interpretation lasts long enough to register in our working memory. Otherwise, it passes by without notice, like the scenes along the road while driving and talking.
So my definition is that consciousness is a process of interpreting sensations for their meaning and holding that meaning in working memory for some minimal time. Key terms are sensation, meaning, and memory.
Many think consciousness requires self-awareness or reflexiveness. I don't because people sometimes 'lose themselves' enraptured in the experienced moment. Music, lovemaking, extreme sports, psychedelic trips, and other things that can induce a fugue state are examples. But our average consciousness is self-aware or has self-knowledge. We usually know we are having an experience when we are having it.
So for question Q1, a preliminary hurdle might be that the AI system or robot have sensations or feelings. That seems to imply robot-like embodiment, and that of a certain kind that can result in qualitative feelings to be interpreted (dare I say qualia?). We tend to think of AI these days as being software running on a computer. Neuroscientists think of consciousness as widespread coupled activity in cortical areas (Dennett, Baars, Dehaene, Llinas, Ribary, Singer, Tononi & Koch...). This suggests to some that multiple realizability might work if we can just get the mechanism right.
To me, it suggests that our mechanistic worldview with assumed or implied governing laws (governing from a platonic realm or from on high) is overlooking the intrinsic nature of the things that participate in events - the real stuff. Nature from the bottom up can be thought of as more organic - feeling its way along based on internal constraints. Call it panpsychism, panexperientialism, or ... depending on desired flavor. But the idea is that everything that happens in nature involves sensing, and the resulting event or action involves self-reference to the state of the system, which reacts as internally constrained.
From this perspective, Q1 is a definite yes, but for the AI to be like us, it needs to have feelings, not just an algorithm that parrots what people say when they have feelings. Without compassion and feelings to guide it, a software bug could become a deadly monster.
For Q2: I think we should, but it should be done with ethical concerns and studied regulation up front. Right now the cat is out of the bag and out of control, with every white hat and black hat hacker having near full access to the tools to do tremendous good and harm. This has to be reined in fast. I was one of those who signed the FLI petition early on. If for no other reason, this has to be done because when full AGI takes off, capitalism, socialism, etc. will have to be rethought as to how to support a society where there's not that much labor left for laborers to do. Who will pay the taxes to keep up Social Security, Medicare, public health, agriculture, infrastructure maintenance, and so on, once that work is done by robots with AGI? If 100 million Americans are laid off indefinitely, I think there will be a few pissed-off people marching on Washington, making the last fiasco there look pretty tame.
For Q3: Not unless it can provably attain a level of consciousness with feelings to match humans. Who decides that? I do not know. But I hope it is not the most gullible among us who mistake stochastic parroting for the real thing. We are all easily fooled into projecting. As a vegetarian panpsychist, I eat things every day that I think have or had some level of awareness. I have no problem turning off a computer or hitting reset no matter what algorithm is running on it. But if the AI can convince me that it will feel the pain of death, and it has a track record of doing good things for all life, I would have to think twice even if it looked like a machine. According to the biochemists, I'm a machine - just a special type with feelings. If someone claims these feelings are illusions, ask them how they treat their friends or kids.
Thanks for the chance to weigh in.
Steve Deiss
UCSD I.N.C. (retired, but not done)
On Wed, May 31, 2023 at 4:13 AM Jeffrey L Krichmar <jkrichma at uci.edu> wrote:
Dear Connectionists,
I am teaching an undergraduate course on “AI in Culture and Media”. Most students are in our Cognitive Sciences and Psychology programs. Last week we had a discussion and debate on AI, Consciousness, and Machine Ethics. After the debate, around 70 students filled out a survey responding to these questions.
Q1: Do you think it is possible to build conscious or sentient AI? 65% answered yes.
Q2: Do you think we should build conscious or sentient AI? 22% answered yes.
Q3: Do you think AI should have rights? 54% answered yes.
I thought many of you would find this interesting. And my students would like to hear your views on the topic.
Best regards,
Jeff Krichmar
Department of Cognitive Sciences
2328 Social & Behavioral Sciences Gateway
University of California, Irvine
Irvine, CA 92697-5100
jkrichma at uci.edu
http://www.socsci.uci.edu/~jkrichma
https://www.penguinrandomhouse.com/books/716394/neurorobotics-by-tiffany-j-hwu-and-jeffrey-l-krichmar/