Connectionists: Statistics versus “Understanding” in Generative AI.

Luis Lamb lamb at inf.ufrgs.br
Tue Feb 20 09:06:05 EST 2024


What many (or at least some) have thought relevant in computer science is knowing rigorously what a computing engine is computing (learning or reasoning). Dana Scott, ACM Turing Award winner (1976), dedicated most of his career,
at Stanford, Oxford, and CMU, to that question. It seems to me that we lack such a semantic understanding of current LLMs.

Extending a notion made explicit by Harnad (in the symbol grounding paper) to neural computing and its hybridization through neurosymbolic AI, I have always thought of a Harnad-style analogy
between symbol grounding and "neural grounding" (sic).

What is (or are) the meaning(s) of the "computations" of large neural networks
(Transformer-like LLMs), which are manipulated based only on their
(initial) syntactic structure?
This is one of my motivations (and that of many others) for neurosymbolic AI: the need for better (or, indeed, any) formal semantics. The benefits could be many, in explainability and other properties now receiving attention.
Luis

> On Feb 20, 2024, at 01:38, Gary Marcus <gary.marcus at nyu.edu> wrote:
> 
> This dependency on exact formulation is literally why I think they don’t understand much.
> 
> An expert, who understands something deeply, can appreciate it in a variety of presentations; a novice fails to recognize slight transformations as the same problem, because they don’t really understand what’s going on.
> 
>> On Feb 19, 2024, at 11:08, Iam Palatnik <iam.palat at gmail.com> wrote:
>> 
>> Because the performance of the LLMs on some of these tests seems to depend so much on how the questions are formulated and what tools they are given to respond with, I still tend to think that they understand something.
> 



