<div dir="auto">It is absurd to claim humanlike “understanding” in an ungrounded system with no introspective ability, no internal recurrence, no hierarchical working memory, no internal drives or affective states, no foresight, no real-time adaptation, and purely autoregressive response. The fact that careful prompting can improve performance dramatically shows how much a user needs to make up for the absence of basic cognitive mechanisms. </div><div dir="auto"><br></div><div dir="auto">All this said, LLMs - especially GPT-4 - are truly remarkable and should force us to reconsider many of our received ideas about the nature of cognition and language. Its hallucinations and failure to obey very simple instructions (as in Dave’s example) notwithstanding, ChatGPT can often respond with such apparent depth and nuance that you have to sit back and wonder.</div><div dir="auto"><br></div><div dir="auto">The “just statistics” argument is, in my opinion, not a good one. The problem with ChatGPT, Sora, et al is not that they build their internal world model from statistics, but that they do so from statistics of text and video, which represent the real world indirectly, superficially, ambiguously, and - in the case of text - often inaccurately. Thus, they are modeling the worlds of text and video, not the real world. They do it using architectures that are extremely generic and simplistic - especially when compared to brains. They also do not exploit the incredibly useful prior biases (e.g., specific neural circuits) that evolution provides in animals, and do not learn developmentally. If one had a more sophisticated - animal-like or other - autonomous system with productive innate biases, self-motivation, representational depth, and the ability to learn “just statistics” by experiencing the real world, it could have natural understanding. </div><div dir="auto"><br></div><div dir="auto">If we ever do get around to building such systems, we’ll need to think of them as we do of humans and other animals. They will be able to do very useful things for us, but by choice, not compulsion. They will also be able to disobey, deceive, and act sociopathically - even psychopathically. We accept these things in our fellow humans and pets, and should do the same in any autonomous general intelligence we create. The ideal AGI system - if we ever build one - will have its own mind; else it is not a general intelligence but a glorified tool. </div><div dir="auto"><br></div><div dir="auto">Ali</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br clear="all"><br clear="all"><div dir="auto"><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><b>Ali A. 
Ali

Ali A. Minai, Ph.D.
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical & Computer Engineering
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai@uc.edu
       minaiaa@gmail.com

WWW: https://researchdirectory.uc.edu/p/minaiaa

On Tue, Feb 20, 2024 at 9:37 AM Luis Lamb <lamb@inf.ufrgs.br> wrote:

What many (or some) have thought relevant in computer science is knowing rigorously what a computing engine is computing (learning or reasoning). Dana Scott, ACM Turing Award winner (1976), dedicated most of his career
at Stanford-Oxford-CMU to that. It seems to me that we are lacking semantic understanding of current LLMs.

Extending and understanding a notion made explicit by Harnad (in the symbol grounding paper) in the context of neural computing and its hybridization through neurosymbolic AI, I have always thought of a Harnad-style analogy between symbol grounding and “neural grounding (sic)”.

What is the meaning of the “computations” of large neural networks (Transformer-like LLMs), which are manipulated based only on their (initial) syntactic structure? This is one of my motivations (and that of many others) towards neurosymbolic AI: the need for better (or “a”) formal semantics. The benefits would possibly be many, in explainability and other properties now receiving attention.

Luis
> On Feb 20, 2024, at 01:38, Gary Marcus <gary.marcus@nyu.edu> wrote:
>
> This dependency on exact formulation is literally why I think they don’t understand much.
>
> An expert, who understands something deeply, can appreciate it in a variety of presentations; a novice fails to recognize slight transformations as the same problem, because they don’t really understand what’s going on.
>
>> On Feb 19, 2024, at 11:08, Iam Palatnik <iam.palat@gmail.com> wrote:
>>
>> Because the performance of the LLMs on some of these tests seems to depend so much on how the questions are formulated and what tools they are given to respond with, I still tend to think that they understand something.