Connectionists: Statistics versus “Understanding” in Generative AI.

Ali Minai minaiaa at gmail.com
Tue Feb 20 10:54:13 EST 2024


It is absurd to claim humanlike “understanding” in an ungrounded system
with no introspective ability, no internal recurrence, no hierarchical
working memory, no internal drives or affective states, no foresight, no
real-time adaptation, and purely autoregressive response. The fact that
careful prompting can improve performance dramatically shows how much the
user must do to compensate for the absence of basic cognitive mechanisms.

All this said, LLMs - especially GPT-4 - are truly remarkable and should
force us to reconsider many of our received ideas about the nature of
cognition and language. Hallucinations and failures to obey very simple
instructions (as in Dave's example) notwithstanding, ChatGPT can often
respond with such apparent depth and nuance that you have to sit back and
wonder.

The “just statistics” argument is, in my opinion, not a good one. The
problem with ChatGPT, Sora, et al. is not that they build their internal
world model from statistics, but that they do so from statistics of text
and video, which represent the real world indirectly, superficially,
ambiguously, and - in the case of text - often inaccurately. Thus, they are
modeling the worlds of text and video, not the real world. They do it using
architectures that are extremely generic and simplistic - especially when
compared to brains. They also do not exploit the incredibly useful prior
biases (e.g., specific neural circuits) that evolution provides in animals,
and do not learn developmentally. If one had a more sophisticated -
animal-like or otherwise - autonomous system with productive innate biases,
self-motivation, representational depth, and the ability to learn “just
statistics” by experiencing the real world, it could have natural
understanding.

If we ever do get around to building such systems, we’ll need to think of
them as we do of humans and other animals. They will be able to do very
useful things for us, but by choice, not compulsion. They will also be able
to disobey, deceive, and act sociopathically - even psychopathically. We
accept these things in our fellow humans and pets, and should do the same
in any autonomous general intelligence we create. The ideal AGI system - if
we ever build one - will have its own mind; else it is not a general
intelligence but a glorified tool.

Ali




*Ali A. Minai, Ph.D.*
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical & Computer Engineering
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minaiaa at gmail.com

WWW: https://researchdirectory.uc.edu/p/minaiaa
<http://www.ece.uc.edu/%7Eaminai/>


On Tue, Feb 20, 2024 at 9:37 AM Luis Lamb <lamb at inf.ufrgs.br> wrote:

> What many (or some) have thought relevant in computer science is knowing
> rigorously what a computing engine is computing (learning or reasoning).
> Dana Scott, ACM Turing Award winner (1976), dedicated most of his career -
> at Stanford, Oxford, and CMU - to that. It seems to me that current LLMs
> lack semantic understanding.
>
> Extending and understanding a notion made explicit by Harnad (in the
> symbol grounding paper) to neural computing and its hybridization through
> neurosymbolic AI, I have always thought of a Harnad-style analogy
> between symbol grounding and "neural grounding" (sic).
>
> What is/are the meaning(s) of the "computations" of large neural networks
> (Transformer-like LLMs), which are manipulated based only on their
> (initial) syntactic structure?
> This is one of my (and many others') motivations toward neurosymbolic
> AI: the need for better (or "a") formal semantics. The benefits could be
> many, in explainability and other properties now receiving attention.
> Luis
>
> > On Feb 20, 2024, at 01:38, Gary Marcus <gary.marcus at nyu.edu> wrote:
> >
> > This dependency on exact formulation is literally why I think they
> > don't understand much.
> >
> > An expert who understands something deeply can appreciate it in a
> > variety of presentations; a novice fails to recognize slight
> > transformations as the same problem, because they don't really
> > understand what's going on.
> >
> >> On Feb 19, 2024, at 11:08, Iam Palatnik <iam.palat at gmail.com> wrote:
> >>
> >> Because the performance of the LLMs on some of these tests seems to
> >> depend so much on how the questions are formulated and what tools they
> >> are given to respond with, I still tend to think that they understand
> >> something.
> >
>
>
>