Connectionists: Weird beliefs about consciousness

Ali Minai minaiaa at gmail.com
Mon Feb 21 02:57:16 EST 2022


Hi Wlodek

Thanks for your very thought-provoking reply and the great reading
suggestions. We have known for a long time that the brain has both modal
and amodal representations of concepts. There is also evidence that
abstract concepts are built on the scaffolding of concrete ones (such as
directions and shapes in physical space), even in non-human animals. This
is just a conjecture, but I think that the ability to build abstractions is
just meta-representation made possible by hierarchical depth that came with
the evolution of the cortex. So representations such as hippocampal place
codes, built as a direct result of embodied experience, become "the world"
for higher levels of processing that do essentially the same thing but with
a different level of grounding - with abstract concepts as "place codes" in
a more abstract space. I call it "multi-level grounding". When you became
grounded at the level of group theory, your grounding in embodiment was
temporarily obscured because it was several levels down. Of course, this is
hardly a new idea, but it is worth keeping in mind.
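
To make this concrete, here is a toy sketch (entirely hypothetical, not a
model of the cortex): each level forms sparse, place-code-like
representations, and the codes produced at one level become the "world"
that the next level represents.

    import numpy as np

    def place_code(world, n_units=32, seed=0):
        """Toy 'place code': random projection of the input 'world' followed
        by winner-take-all, giving a sparse code over that world."""
        rng = np.random.default_rng(seed)
        proj = rng.standard_normal((world.shape[1], n_units))
        h = world @ proj
        code = np.zeros_like(h)
        code[np.arange(h.shape[0]), h.argmax(axis=1)] = 1.0
        return code

    # Level 0: hypothetical "embodied" sensory samples (100 samples x 64 features).
    sensory = np.random.default_rng(1).standard_normal((100, 64))
    level1 = place_code(sensory, seed=2)   # grounded directly in "experience"
    level2 = place_code(level1, seed=3)    # level-1 codes are its "world"
    level3 = place_code(level2, seed=4)    # increasingly abstract "place codes"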

To go off on a tangent, though not an unrelated one - did you see this new
paper:

https://arxiv.org/abs/2202.07206

Apparently, GPT-3 relies on regurgitation more than people admit. I don't
think any language model built on the distributional hypothesis can ever be
sufficiently grounded to have "understanding", but it should be possible in
highly formalized domains such as computer programming, where the truth is
so constrained and is present wholly in the patterns of symbols. Natural
language, less so. Actual experience, hardly at all.
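
That claim is easy to probe crudely, even without the paper's methodology:
measure how many of a model's output n-grams occur verbatim in a reference
corpus. The sketch below is my own illustrative proxy (the function name,
n-gram length, and whitespace tokenization are arbitrary choices), not the
procedure used in the paper.

    def ngram_overlap(generated: str, corpus: str, n: int = 8) -> float:
        """Fraction of word-level n-grams in `generated` that also appear
        verbatim in `corpus` -- a crude proxy for regurgitation."""
        def ngrams(text):
            words = text.lower().split()
            return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        gen = ngrams(generated)
        return len(gen & ngrams(corpus)) / len(gen) if gen else 0.0

    # e.g. ngram_overlap(model_output, training_text) close to 1.0 would
    # indicate near-verbatim copying rather than generalization.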

Ali




*Ali A. Minai, Ph.D.*
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Past-President (2015-2016)
International Neural Network Society

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minaiaa at gmail.com

WWW: https://eecs.ceas.uc.edu/~aminai/


On Fri, Feb 18, 2022 at 12:45 PM Wlodzislaw Duch <wduch at umk.pl> wrote:

> Ali,
>
> certainly for many people identification with "being like us" is important
> - this covers fertilized eggs and embryos, but not orangutans. John Locke
> wrote 300 years ago: "Consciousness is the perception of what passes in a
> Man's own mind". Physical states and processes that represent imagery, and
> the ability to create symbolic narratives describing what goes on inside
> cognitive system, should be the hallmark of consciousness. Of course more
> people will accept it if we put it in a baby robot -:)
>
> This is why I prefer to focus on a simple requirement: an inner world and
> the ability to describe it.
> The road to creating robots that can feel has been described by Kevin
> O'Regan in his book:
>
> O’Regan, J.K. (2011). Why Red Doesn’t Sound Like a Bell: Understanding the
> Feel of Consciousness. Oxford University Press, USA.
>
> Inner worlds may be based on different representations, not always deeply
> grounded in experience. Binder took a step toward a brain-based semantics:
> Binder, J. R., Conant, L. L., Humphries, C. J., Fernandino, L., Simons, S.
> B., Aguilar, M., & Desai, R. H. (2016). Toward a brain-based componential
> semantic representation. Cognitive Neuropsychology, 33(3–4), 130–174.
> Fernandino, L., Tong, J.-Q., Conant, L. L., Humphries, C. J., & Binder, J.
> R. (2022). Decoding the information structure underlying the neural
> representation of concepts. PNAS 119(6).
>
> This does not solve the symbol grounding problem (Harnad, 1990), but it
> goes halfway, mimicking embodiment by decomposing symbolic concepts into
> attributes that are relevant to the brain. It should be sufficient to add
> human-like semantics to bots. As you mention yourself, embodiment could
> be more abstract, and I can imagine that a copy of a robot brain that has
> grounded its representations in interactions with the environment would
> endow a new robot with similar experience. Can we simply implant it in the
> network?
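>
> To illustrate what such a componential representation might look like in
> code (the attribute names and weights below are invented for illustration,
> not Binder's actual feature set), each concept becomes a vector of
> experiential attribute weights that can be compared directly:
>
>     import numpy as np
>
>     # Hypothetical experiential attributes, in the spirit of Binder et al.
>     # (2016); the real model uses a much larger, empirically derived set.
>     ATTRIBUTES = ["vision", "audition", "touch", "motion", "emotion", "space"]
>
>     concepts = {
>         "bell":    np.array([0.6, 0.9, 0.4, 0.3, 0.2, 0.3]),
>         "red":     np.array([0.9, 0.0, 0.0, 0.1, 0.3, 0.1]),
>         "justice": np.array([0.1, 0.1, 0.0, 0.1, 0.6, 0.2]),
>     }
>
>     def similarity(a, b):
>         """Cosine similarity between two concept attribute vectors."""
>         return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
>
>     # Concepts sharing experiential attributes come out closer:
>     print(similarity(concepts["bell"], concepts["red"]))
>     print(similarity(concepts["bell"], concepts["justice"]))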
>
> I wonder if absorption in abstract thinking can leave space for the use of
> experientially grounded concepts. I used to focus on group theory for hours
> and, for brief moments, was not able to understand what was said to me. Was
> I not conscious? Or should we consider a continuous transition from abstract
> semantics to fully embodied, human-like semantics in artificial systems?
>
> Wlodek
>
>
> On 18/02/2022 16:36, Ali Minai wrote:
>
> Wlodek
>
> I think that the debate about consciousness in the strong sense of having
> a conscious experience like we do is sterile. We will never have a
> measuring device for whether another entity is "conscious", and at some
> point, we will get to an AI that is sufficiently complex in its observable
> behavior that we will either accept its inner state of consciousness on
> trust - just as we do with humans and other animals - or admit that we will
> never believe that a machine that is "not like us" can ever be conscious.
> The "like us" part is more important than many of us in the AI field think:
> A big part of why we believe other humans and our dogs are conscious is
> that we know that they are "like us", and assume that they must share
> our capacity for inner conscious experience. We already see this at a
> superficial level where, as ordinary humans, we have a much easier time
> identifying with an embodied, humanoid AI like WALL-E or the Terminator
> than with a disembodied one like HAL or Skynet. This is also why so many
> people find the Boston Dynamics "dog" so disconcerting.
>
> The question of embodiment is a complex one, as you know, of course, but I
> am with those who think that it is necessary for grounding mental
> representations - that it is the only way that the internal representations
> of the system are linked directly to its experience. For example, if an AI
> system trained only on text (like GPT-3) comes to learn that touching
> something hot results in getting burned, we cannot accept that
> as sufficient because it is based only on the juxtaposition of
> abstractions, not the actual painful experience of getting burned. For
> that, you need a body with sensors and a brain with a state corresponding
> to pain - something that can be done in an embodied robot. This is why I
> think that all language systems trained purely on the assumption of the
> distributional hypothesis of meaning will remain superficial; they lack the
> grounding that can only be supplied by experience. This does not mean that
> systems based on the distributional hypothesis cannot learn a lot, or even
> develop brain-like representations, as the following extremely interesting
> paper shows:
>
> Y. Zhang, K. Han, R. Worth, and Z. Liu. Connecting concepts in the brain
> by mapping cortical representations of semantic relations. Nature
> Communications, 11(1):1877, Apr 2020.
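>
> (Purely to illustrate what "distributional" means here - the toy
> co-occurrence model below is my own sketch, not the method of the paper
> above: each word is represented only by the company it keeps, with no
> grounding in experience at all.)
>
>     import numpy as np
>
>     def cooccurrence_vectors(sentences, window=2):
>         """Toy distributional semantics: represent each word by its counts of
>         co-occurrence with every other word within a +/- `window` context."""
>         vocab = sorted({w for s in sentences for w in s.split()})
>         idx = {w: i for i, w in enumerate(vocab)}
>         M = np.zeros((len(vocab), len(vocab)))
>         for s in sentences:
>             words = s.split()
>             for i, w in enumerate(words):
>                 lo, hi = max(0, i - window), min(len(words), i + window + 1)
>                 for j in range(lo, hi):
>                     if j != i:
>                         M[idx[w], idx[words[j]]] += 1
>         return vocab, M
>
>     vocab, M = cooccurrence_vectors(["touching fire causes pain",
>                                      "touching ice causes numbness"])
>     # "fire" and "ice" get similar rows from similar contexts alone, without
>     # any sensor ever having registered heat or cold.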
>
> In a formal sense, however, embodiment could be in any space, including
> very abstract ones. We can think of text data as GPT-3's world and, in that
> world, it is "embodied" and its fundamentally distributional learning,
> though superficial and lacking in experience to us, is grounded for it
> within its world. Of course, this is not a very useful view of embodiment
> and grounding since we want to create AI that is grounded in our sense, but
> one of the most under-appreciated risks of AI is that, as we develop
> systems that live in worlds very different from ours, they will -
> implicitly and emergently - embody values completely alien to us. The
> proverbial loan-processing AI that learns to be racially biased is just a
> caricature of this hazard, but one that should alert us to deeper issues.
> Our quaintly positivistic and reductionistic notion that we can deal with
> such things by removing biases from data, algorithms, etc., is misplaced.
> The world is too complicated for that.
>
> Ali
>
> *Ali A. Minai, Ph.D.*
> Professor and Graduate Program Director
> Complex Adaptive Systems Lab
> Department of Electrical Engineering & Computer Science
> 828 Rhodes Hall
> University of Cincinnati
> Cincinnati, OH 45221-0030
>
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: Ali.Minai at uc.edu
>           minaiaa at gmail.com
>
> WWW: https://eecs.ceas.uc.edu/~aminai/
>
>
> On Fri, Feb 18, 2022 at 7:27 AM Wlodzislaw Duch <wduch at umk.pl> wrote:
>
>> Asim,
>>
>> I was on the Anchorage panel, and asked others what could be a great
>> achievement in computational intelligence. Steve Grossberg replied that
>> symbolic AI is meaningless, but that the creation of an artificial rat
>> that could survive in a hostile environment would be something. Of course,
>> this is still difficult, but perhaps DARPA's autonomous machines are not
>> that far off?
>>
>> I also had similar discussions with Walter and support his position: you
>> cannot separate tightly coupled systems. Any external influence will create
>> activation in both, and linear causality loses its meaning. This is clear
>> if both systems adjust to each other. But even if only one system learns
>> (the brain) and the other is mechanical but responds to human actions,
>> they may behave as one system. Every musician knows this: the piano becomes
>> a part of our body, responding in so many ways to our actions, not only by
>> producing sounds but also by providing haptic feedback.
>>
>> This simply means that the brains of locked-in people work in a somewhat
>> different way than the brains of healthy people. Why do we consider them
>> conscious? Because they can reflect on their mind states, imagine things,
>> and describe their inner states. If GPT-3 were coupled with something like
>> DALL-E, which creates images from text, and could describe what it sees in
>> its inner world and create some kind of episodic memory, we would have a
>> hard time denying that this thing is conscious of what it has in its mind.
>> Embodiment helps to create an inner world and changes it, but it is not
>> necessary for consciousness. Can we find a good argument that such a system
>> is not conscious of its own states? It may not have all the qualities of
>> human consciousness, but that is a matter of approximating the missing
>> functions in more detail.
>>
>> I made this argument a long time ago (e.g. in "*Brain-inspired
>> conscious computing architecture*", written over 20 years ago; see more
>> papers on this on my web page).
>>
>> Wlodek
>>
>> Prof. Włodzisław Duch
>> Fellow, International Neural Network Society
>> Past President, European Neural Network Society
>> Head, Neurocognitive Laboratory, CMIT NCU, Poland
>>
>> Google: Wlodzislaw Duch <http://www.google.com/search?q=Wlodek+Duch>
>>
>> On 18/02/2022 05:22, Asim Roy wrote:
>>
>> In 1998, after our debate about the brain at the WCCI in Anchorage,
>> Alaska, I asked Walter Freeman if he thought the brain controls the body.
>> His answer was, you can also say that the body controls the brain. I then
>> asked him if the driver controls a car, or the pilot controls an airplane.
>> His answer was the same, that you can also say that the car controls the
>> driver, or the plane controls the pilot. I then realized that Walter was
>> also a philosopher who believed in the no-free-will theory, and that what
>> he was arguing for is that the world is simply made of interacting systems.
>> However, both Walter, and his close friend John Taylor, were into
>> consciousness.
>>
>>
>>
>> I have argued with Walter on many different topics over nearly two
>> decades and have the utmost respect for him as a scholar, but this first
>> argument is one I will always remember.
>>
>>
>>
>> Obviously, there’s a conflict between consciousness and the no-free-will
>> theory. I wonder where we stand with regard to this conflict.
>>
>>
>>
>> Asim Roy
>>
>> Professor, Information Systems
>>
>> Arizona State University
>>
>> Lifeboat Foundation Bios: Professor Asim Roy
>> <https://lifeboat.com/ex/bios.asim.roy>
>>
>> Asim Roy | iSearch (asu.edu)
>> <https://isearch.asu.edu/profile/9973>
>>
>>
>>
>>
>>
>> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu>
>> *On Behalf Of *Andras Lorincz
>> *Sent:* Tuesday, February 15, 2022 6:50 AM
>> *To:* Stephen José Hanson <jose at rubic.rutgers.edu>; Gary Marcus
>> <gary.marcus at nyu.edu>
>> *Cc:* Connectionists <Connectionists at cs.cmu.edu>
>> *Subject:* Re: Connectionists: Weird beliefs about consciousness
>>
>>
>>
>> Dear Steve and Gary:
>>
>> This is how I see (try to understand) consciousness and the related
>> terms:
>>
>> (Our) consciousness seems to be related to the close-to-deterministic
>> nature of episodes on the few-hundred-millisecond to few-second timescale.
>> Control instructions may leave our brain 200 ms before the action starts,
>> and they become conscious only by that time. In addition, observations of
>> those may also be delayed by a similar amount. (It then follows that the
>> launching of the control actions is not conscious and -- therefore -- free
>> will can be debated in this very limited context.) On the other hand,
>> model-based synchronization is necessary for timely observation, planning,
>> decision making, and execution in a distributed and slow computational
>> system. If this model-based synchronization is not working properly, then
>> the observation of the world breaks down and schizophrenic symptoms appear.
>> As an example, individuals with pronounced schizotypal traits are
>> particularly successful in self-tickling (source:
>> https://philpapers.org/rec/LEMIWP,
>> and a discussion on Asperger and schizophrenia:
>> https://www.frontiersin.org/articles/10.3389/fpsyt.2020.503462/full)
>> -- a manifestation of improper binding. The internal model enables the
>> synchronization, and thus a certain level of consciousness can appear in a
>> time interval around the actual time instant, whose length depends on
>> short-term memory.
>>
>> Other issues, like separating the self from the rest of the world, are
>> more closely related to soft/hard-style interventions (as they are called
>> in the recent deep learning literature), i.e., those components (features)
>> that can be modified/controlled, e.g., color and speed, and the ones that
>> are Lego-like and can be separated/amputated/occluded/added.
>>
>> Best,
>>
>> Andras
>>
>>
>>
>> ------------------------------------
>>
>> Andras Lorincz
>>
>> http://nipg.inf.elte.hu/
>>
>> Fellow of the European Association for Artificial Intelligence
>>
>> https://scholar.google.com/citations?user=EjETXQkAAAAJ&hl=en
>>
>> Department of Artificial Intelligence
>>
>> Faculty of Informatics
>>
>> Eotvos Lorand University
>>
>> Budapest, Hungary
>>
>>
>>
>>
>>
>>
>> ------------------------------
>>
>> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu>
>> on behalf of Stephen José Hanson <jose at rubic.rutgers.edu>
>> *Sent:* Monday, February 14, 2022 8:30 PM
>> *To:* Gary Marcus <gary.marcus at nyu.edu>
>> *Cc:* Connectionists <connectionists at cs.cmu.edu>
>> *Subject:* Re: Connectionists: Weird beliefs about consciousness
>>
>>
>>
>> Gary, these weren't criteria. Let me try again.
>>
>> I wasn't talking about wake-sleep cycles... I was talking about being
>> awake or asleep and the transition that ensues.
>>
>> Roombas don't sleep; they turn off. I have two of them. They turn on once
>> (1) their batteries are recharged and (2) a timer has been set for turning
>> them on.
>>
>> GPT-3 is essentially a CYC that actually works, by reading Wikipedia
>> (which of course is a terribly biased sample).
>>
>> I was indicating the difference between implicit and explicit
>> learning/problem solving. Implicit learning/memory is unconscious and
>> similar to a habit (good or bad).
>>
>> I believe that when someone asks "is GPT-3 conscious?" they are asking:
>> is GPT-3 self-aware? Roombas know about vacuuming, and they are
>> unconscious.
>>
>> S
>>
>> On 2/14/22 12:45 PM, Gary Marcus wrote:
>>
>> Stephen,
>>
>>
>>
>> On criteria (1)-(3), a high-end, mapping-equipped Roomba is a far more
>> plausible candidate for consciousness than GPT-3.
>>
>>
>>
>> 1. The Roomba has a clearly defined wake-sleep cycle; GPT does not.
>>
>> 2. Roomba makes choices based on an explicit representation of its
>> location relative to a mapped space. GPT lacks any consistent reflection of
>> self; e.g., if you ask it, as I have, whether it is a person, and then ask
>> whether it is a computer, it’s liable to say yes to both, showing no stable
>> knowledge of self.
>>
>> 3. Roomba has explicit, declarative knowledge, e.g., of walls and other
>> boundaries, as well as its own location. GPT has no systematically
>> interrogable explicit representations (see the small sketch below).
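>>
>> To make "systematically interrogable" concrete, here is a toy sketch (my
>> own illustration, hypothetical and certainly not Roomba's actual firmware)
>> of an explicit, queryable map representation, contrasted with a model you
>> can only prompt and sample:
>>
>>     # Explicit, declarative map: every cell can be queried directly.
>>     from dataclasses import dataclass, field
>>
>>     @dataclass
>>     class GridMap:
>>         width: int
>>         height: int
>>         walls: set = field(default_factory=set)   # {(x, y), ...}
>>         robot: tuple = (0, 0)
>>
>>         def is_wall(self, x, y):
>>             """Systematically interrogable: a definite answer for any cell."""
>>             return (x, y) in self.walls
>>
>>     m = GridMap(width=5, height=5, walls={(2, 2), (2, 3)}, robot=(0, 0))
>>     print(m.is_wall(2, 2), m.robot)   # True (0, 0)
>>     # A pure language model offers no such query interface; one can only
>>     # prompt it and hope the sampled answers stay consistent.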
>>
>>
>>
>> All this is said with tongue lodged partway in cheek, but I honestly
>> don’t see what criterion would lead anyone to believe that GPT is a more
>> plausible candidate for consciousness than any other AI program out there.
>>
>>
>>
>> ELIZA long ago showed that you could produce fluent speech that was
>> mildly contextually relevant, and even convincing to the untutored; just
>> because GPT is a better version of that trick doesn’t mean it’s any more
>> conscious.
>>
>>
>>
>> Gary
>>
>>
>>
>> On Feb 14, 2022, at 08:56, Stephen José Hanson <jose at rubic.rutgers.edu>
>> wrote:
>>
>>
>>
>> This is a great list of behaviors.
>>
>> Some of these might biologically be termed reflexive, taxes, classically
>> conditioned, or implicit (memory/learning)... all, however, would not be
>> conscious in several senses: (1) wakefulness vs. sleep, (2) self-aware,
>> (3) explicit/declarative.
>>
>> I think the term is used very loosely, and I believe what GPT-3 and other
>> AI systems are hoping to show signs of is "self-awareness".
>>
>> In response to: "Why are you doing that?", "What are you doing now?",
>> "What will you be doing in 2030?"
>>
>> Steve
>>
>>
>>
>> On 2/14/22 10:46 AM, Iam Palatnik wrote:
>>
>> A somewhat related question, just out of curiosity.
>>
>>
>>
>> Imagine the following:
>>
>>
>>
>> - An automatic solar panel that tracks the position of the sun.
>>
>> - A group of single celled microbes with phototaxis that follow the
>> sunlight.
>>
>> - A jellyfish (animal without a brain) that follows/avoids the sunlight.
>>
>> - A cockroach (animal with a brain) that avoids the sunlight.
>>
>> - A drone with onboard AI that flies to regions of more intense sunlight
>> to recharge its batteries.
>>
>> - A human that dislikes sunlight and actively avoids it.
>>
>>
>>
>> Can any of these, besides the human, be said to be aware or conscious of
>> the sunlight, and why?
>>
>> What is most relevant? Being a biological life form, having a brain,
>> being able to make decisions based on the environment? Being taxonomically
>> close to humans?
>>
>>
>>
>> On Mon, Feb 14, 2022 at 12:06 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>>
>> Also true: Many AI researchers are very unclear about what consciousness
>> is and also very sure that ELIZA doesn’t have it.
>>
>> Neither ELIZA nor GPT-3 have
>> - anything remotely related to embodiment
>> - any capacity to reflect upon themselves
>>
>> Hypothesis: neither keyword matching nor tensor manipulation, even at
>> scale, suffice in themselves to qualify for consciousness.
>>
>> - Gary
>>
>> > On Feb 14, 2022, at 00:24, Geoffrey Hinton <geoffrey.hinton at gmail.com>
>> wrote:
>> >
>> > Many AI researchers are very unclear about what consciousness is and
>> also very sure that GPT-3 doesn’t have it. It’s a strange combination.
>> >
>> >
>>
>> --
>>
>>
>> --
> Prof. Włodzisław Duch
> Fellow, International Neural Network Society
> Past President, European Neural Network Society
> Head, Neurocognitive Laboratory, CMIT NCU, Poland
> Google: Wlodzislaw Duch <http://www.google.com/search?q=Wlodek+Duch>
>