Connectionists: Can LLMs think?

Sai Chaitanya Gaddam chaitanyagsai at gmail.com
Fri Mar 24 13:11:02 EDT 2023


Mr. Cisek: Your comment made me think of Stephen Grossberg's work (no
surprise there I guess?).

This back and forth about "intelligence" and "thinking" makes me think of a
line I love, from an article by Steven Wise (
https://pubmed.ncbi.nlm.nih.gov/18835649/)

“The long list of functions often attributed to the prefrontal cortex could
contribute to knowing what to do and what will happen when rare risks arise
or outstanding opportunities knock.”

That’s a pretty good definition of what we recognize as intelligence, too.
In particular, it is the focus on the rare and the outstanding that deserves
attention. For an event to be rare or outstanding is to go against the
grain of regularity and structure. LLMs seem to be at or beyond human level
at capturing that structure, but this very focus on structure also seems to
make them bad at handling novelty – the whole stability-plasticity dilemma.

I wonder if this is an insurmountable problem for LLMs, given that at heart
they are based on error minimization. Here again, I really like – it
resonates :) – Grossberg’s characterization of match-based excitatory
learning and mismatch-based inhibitory learning. Are prediction-error
minimization models doomed to never remember rare and outstanding novel
situations in zero-shot fashion (as all animals must)? Is this where
hallucination creeps in? I really do wish Grossberg’s ideas were better
known in the AI community. His book is a great place to start.

https://global.oup.com/academic/product/conscious-mind-resonant-brain-9780190070557
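
(As a rough, hypothetical illustration of the contrast I have in mind, and
not Grossberg's actual equations: an ART-style learner checks how well an
input matches a stored category against a vigilance threshold and, on a
mismatch, commits a new category immediately, so a rare or outstanding
input can be remembered after a single exposure. A pure error-minimization
learner, by contrast, only nudges its weights a little toward the surprise.
All names and parameters in the sketch below are made up for illustration.)

import numpy as np

def cosine_match(x, w):
    # Degree of match between input x and stored prototype w.
    return float(np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w) + 1e-9))

def art_like_step(x, prototypes, vigilance=0.9):
    # Match-based learning: if no stored category resonates with x
    # (match below vigilance), commit a new category on the spot,
    # so the novel input is remembered after one exposure.
    for i, w in enumerate(prototypes):
        if cosine_match(x, w) >= vigilance:
            prototypes[i] = 0.5 * (w + x)   # refine the matching category
            return i
    prototypes.append(x.copy())             # mismatch: new category, no retraining
    return len(prototypes) - 1

def error_min_step(x, w, lr=0.01):
    # Mismatch-driven error minimization: a single rare event only nudges
    # the weights slightly, leaving a faint trace of the surprise.
    return w + lr * (x - w)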



Sai Gaddam
+91 98457 69705
On LinkedIn <https://www.linkedin.com/in/saigaddamc>