I agree, but I still have a lot of work to do to finish the book (after years of listening to his presentations) and to build Grossberg's ART and other models. It is a very different perspective, with a serious attempt to build in the biology and neuroscience. Perhaps there are deep lessons here?

Mr. Bill Howell
1-587-707-2027   Bill@BillHowell.ca
member - International Neural Network Society (INNS), IEEE Computational Intelligence Society (IEEE-CIS),


-------- Forwarded Message --------
From: Sai Chaitanya Gaddam <chaitanyagsai@gmail.com>
To: connectionists@mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Can LLMs think?
Date: Fri, 24 Mar 2023 22:41:02 +0530

Mr. Cisek: Your comment made me think of Stephen Grossberg's work (no surprise there, I guess?).

This back-and-forth about "intelligence" and "thinking" reminds me of a line I love, from an article by Steven Wise (https://pubmed.ncbi.nlm.nih.gov/18835649/):

"The long list of functions often attributed to the prefrontal cortex could contribute to knowing what to do and what will happen when rare risks arise or outstanding opportunities knock."

That's a pretty good definition of what we recognize as intelligence, too. In particular, it is the focus on the rare and the outstanding that deserves attention. For an event to be rare or outstanding is to go against the grain of regularity and structure. LLMs seem to be at or beyond human level at capturing that structure, but this very focus on structure also seems to make them bad with novelty – the whole stability-plasticity dilemma.

I wonder whether this is an insurmountable problem for LLMs, given that at heart they are based on error minimization. Here again, I really like – it resonates :) – Grossberg's characterization of match-based excitatory learning and mismatch-based inhibitory learning. Are prediction-error-minimization models doomed never to remember rare and outstanding novel situations zero-shot (as all animals must)? Is this where hallucination creeps in? I really do wish Grossberg's ideas were better known in the AI community. His book is a great place to start:

https://global.oup.com/academic/product/conscious-mind-resonant-brain-9780190070557


Sai Gaddam
+91 98457 69705
On LinkedIn: https://www.linkedin.com/in/saigaddamc
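[Editor's note, not part of the original thread: for readers unfamiliar with the match-based learning Gaddam refers to, below is a minimal Python sketch of an ART-1-style recognition cycle (binary inputs, fast learning). The class name, parameter values, and toy patterns are illustrative assumptions, not anyone's production code; the point is only how a vigilance (match) test lets a novel, rare input get its own category in one shot instead of being averaged into existing memories.]

import numpy as np

class ART1:
    """Minimal ART-1 sketch: binary inputs, one binary template per category."""

    def __init__(self, n_features, vigilance=0.7, alpha=0.001):
        self.n = n_features
        self.rho = vigilance      # match criterion: how close is "close enough"
        self.alpha = alpha        # small tie-breaking constant in the choice function
        self.templates = []       # learned binary category templates

    def train(self, pattern):
        I = np.asarray(pattern, dtype=bool)
        # Rank existing categories by the ART-1 choice function
        # T_j = |I AND w_j| / (alpha + |w_j|), best candidate first.
        order = sorted(
            range(len(self.templates)),
            key=lambda j: -(np.sum(I & self.templates[j]) /
                            (self.alpha + np.sum(self.templates[j]))),
        )
        for j in order:
            match = np.sum(I & self.templates[j]) / np.sum(I)
            if match >= self.rho:
                # Resonance: match-based learning refines only the winning template.
                self.templates[j] = I & self.templates[j]
                return j
            # Mismatch: reset this winner and search the next candidate.
        # No template matches well enough: the novel (rare) input gets its own
        # category immediately, so existing memories are not eroded.
        self.templates.append(I.copy())
        return len(self.templates) - 1

# Toy usage (hypothetical patterns):
net = ART1(n_features=6, vigilance=0.7)
print(net.train([1, 1, 0, 0, 1, 0]))   # -> 0: first input founds category 0
print(net.train([1, 1, 0, 0, 1, 1]))   # -> 0: close match, template refined
print(net.train([0, 0, 1, 1, 0, 0]))   # -> 1: mismatch, new category in one shot

The contrast with pure error minimization is the point of the sketch: a gradient-trained predictor would fold the outlier into its weights gradually, whereas the vigilance test turns a sufficiently large mismatch into a new memory without disturbing the old ones.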