Connectionists: Can LLMs think?

www.BillHowell.ca Bill at BillHowell.ca
Sat Mar 25 22:12:02 EDT 2023


I agree, but I still have a lot of work to do to finish the book (after years of listening to his presentations)
and to build Grossberg's ART and other models.  It is a very different perspective, with a serious attempt to build in
the biology and neuroscience.  Perhaps there are deep lessons here?


Mr. Bill Howell 
1-587-707-2027 Bill at BillHowell.ca
member - International Neural Network Society (INNS), IEEE Computational Intelligence Society (IEEE-CIS)



-------- Forwarded Message --------
From: Sai Chaitanya Gaddam <chaitanyagsai at gmail.com>
To: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Can LLMs think?
Date: Fri, 24 Mar 2023 22:41:02 +0530

Mr. Cisek: Your comment made me think of Stephen Grossberg's work (no surprise there I guess?).

This back and forth about "intelligence" and "thinking" makes me think of a line I love from an article by Steven
Wise (https://pubmed.ncbi.nlm.nih.gov/18835649/):

“The long list of functions often attributed to the prefrontal cortex could contribute to knowing what to do and
what will happen when rare risks arise or outstanding opportunities knock.” 

That’s a pretty good definition of what we recognize as intelligence too. In particular, it is the focus on the
rare and the outstanding that deserves attention. For an event to be rare or outstanding is to go against the grain of
regularity and structure. LLMs seem to be at or beyond human level at capturing that structure, but this very focus on
structure also seems to make them bad with novelty – the whole stability-plasticity dilemma thing.

I wonder if this is an insurmountable obstacle for LLMs, given that at heart they are based on error
minimization. Here again, I really like – it resonates :) – Grossberg’s characterization of match-based excitatory
learning and mismatch-based inhibitory learning. Are prediction-error-minimization models doomed never to remember,
zero-shot, the rare and outstanding novel situations (as all animals must)? Is this where hallucination creeps in?
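
To make the contrast concrete, here is a minimal toy sketch in Python, in the spirit of a stripped-down ART-1-style
categorizer. The function name, parameters, and tiny example patterns are my own illustrative assumptions, not
Grossberg's actual equations; the point is only that a rare input which fails the match (vigilance) test recruits its
own category in one shot, instead of being averaged into existing weights by incremental error minimization.

import numpy as np

def art1_present(patterns, vigilance=0.75, beta=1.0):
    """Toy ART-1-style categorizer: match-based learning with a vigilance test.

    Each binary input either resonates with an existing category (whose
    prototype is then refined toward the shared features) or, on mismatch,
    recruits a brand-new category, so a single rare input is remembered
    immediately without overwriting earlier learning.
    """
    categories = []     # binary prototype vectors
    assignments = []    # which category each input ended up in
    for p in patterns:
        x = np.asarray(p, dtype=float)
        chosen = None
        # Try existing categories best-first, ranked by a simple choice function.
        order = sorted(range(len(categories)),
                       key=lambda j: np.minimum(x, categories[j]).sum()
                                     / (beta + categories[j].sum()),
                       reverse=True)
        for j in order:
            match = np.minimum(x, categories[j]).sum() / x.sum()
            if match >= vigilance:                            # resonance: close enough
                categories[j] = np.minimum(x, categories[j])  # match-based update
                chosen = j
                break
            # otherwise: mismatch reset; this category is suppressed for this input
        if chosen is None:                                    # nothing matched: learn one-shot
            categories.append(x.copy())
            chosen = len(categories) - 1
        assignments.append(chosen)
    return categories, assignments

# A rare, novel pattern founds its own category instead of being blended away.
cats, labels = art1_present([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]], vigilance=0.8)
print(labels)   # -> [0, 0, 1]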
I really do wish Grossberg’s ideas were better known in the AI community. His book is a great place to start.

https://global.oup.com/academic/product/conscious-mind-resonant-brain-9780190070557



Sai Gaddam
+91 98457 69705
On LinkedIn
