Connectionists: Statistics versus “Understanding” in Generative AI.

Roshini Johri roshini.johri at gmail.com
Mon Feb 19 11:02:14 EST 2024


I agree that Hypothesis 1 is false, with the caveat that the answer to the
question of what it means to understand language is itself contentious,
varies from person to person, and is also cultural. I believe LLMs
understand the structure of language and the logic underneath it,
including mapping structures onto each other (in purely mathematical,
functional terms). I don't believe LLMs 'understand' its significance in
the way humans do. They are just learning a different kind of mathematical
interpretation of something we associate a deeper meaning with.

Hypothesis 2: Yes, this is going to be exciting as it grows to become
part of something much bigger. I 100% agree that an LLM is a component of
a system that will evolve to be more complex, with different
reasoning/planning capabilities that haven't really come together yet.
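
To make the "LLM as one component of a larger, modular system" picture
concrete, here is a minimal sketch in Python of a function-call style
composition. Everything in it (the module names, the toy dispatcher, the
stand-in llm_propose function) is hypothetical and only illustrates the
shape of such a system, not any particular implementation.

    # Minimal sketch (all names hypothetical): an LLM proposes an action,
    # and separate modules (a code interpreter, an external search) act on it.

    from dataclasses import dataclass
    from typing import Callable, Dict


    @dataclass
    class Action:
        tool: str      # which module the LLM wants to invoke
        payload: str   # the argument passed to that module


    def llm_propose(prompt: str) -> Action:
        """Stand-in for a language model mapping a prompt to an action."""
        if "compute" in prompt:
            return Action(tool="interpreter", payload="2 + 2")
        return Action(tool="search", payload=prompt)


    def run_interpreter(code: str) -> str:
        # Placeholder: a real system would sandbox and execute the code.
        return str(eval(code))


    def run_search(query: str) -> str:
        # Placeholder: a real system would query an external source.
        return f"search results for: {query}"


    TOOLS: Dict[str, Callable[[str], str]] = {
        "interpreter": run_interpreter,
        "search": run_search,
    }


    def modular_system(prompt: str) -> str:
        """The LLM is one component; other modules do the acting."""
        action = llm_propose(prompt)
        return TOOLS[action.tool](action.payload)


    print(modular_system("please compute something"))          # -> "4"
    print(modular_system("what is a neurosymbolic system?"))

The point of the sketch is only that the language model is one module
among several, with planning and execution handled elsewhere in the loop.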

On Mon, Feb 19, 2024 at 3:49 PM Gary Marcus <gary.marcus at nyu.edu> wrote:

> Hypothesis 1: LLMs (deeply) understand language: false
>
> Hypothesis 2: LLMs could play an important part in a larger, modular
> system, perhaps neurosymbolic, and perhaps partly prestructured prior to
> learning, that could conceivably eventually deeply understand language:
> open for investigation
>
> > On Feb 19, 2024, at 6:50 AM, Iam Palatnik <iam.palat at gmail.com> wrote:
> >
> > I definitely feel like the [LLM + function-call + code-interpreter +
> external-source search
>
>

