Connectionists: From ChatGPT to Artificial General Intelligence: A Neural Network Model of How our Brains Learn Large Language Models and their Meanings
Grossberg, Stephen
steve at bu.edu
Wed Jul 30 09:42:30 EDT 2025
Dear Connectionists Colleagues,
I am happy to announce the publication of my article that describes a neural network model of how humans learn large language models and their meanings, while providing a blueprint for achieving Artificial General Intelligence.
These results show how to solve foundational problems of, and move beyond, Deep Learning models and the LLMs promoted by OpenAI and other companies.
The article is:
Neural Network Models of Autonomous Adaptive Intelligence and Artificial General Intelligence: How our Brains Learn Large Language Models and their Meanings.
Frontiers in Systems Neuroscience, 29 July 2025, Volume 19
https://www.frontiersin.org/journals/systems-neuroscience/articles/10.3389/fnsys.2025.1630151/full
The article Abstract illustrates its scope:
“This article describes a biological neural network model that explains how humans learn to understand large language models and their meanings. This kind of learning typically occurs when a student learns from a teacher about events that they experience together. Multiple types of self-organizing brain processes are involved, including content-addressable memory; conscious visual perception; joint attention; object learning, categorization, and cognition; conscious recognition; cognitive working memory; cognitive planning; neural-symbolic computing; emotion; cognitive-emotional interactions and reinforcement learning; volition; and goal-oriented actions. The article advances earlier results showing how small language models are learned that have perceptual and affective meanings. The current article explains how humans, and neural network models thereof, learn to consciously see and recognize an unlimited number of visual scenes. Then, bi-directional associative links can be learned and stably remembered between these scenes, the emotions that they evoke, and the descriptive language utterances associated with them. Adaptive resonance theory circuits control model learning and self-stabilizing memory. These human capabilities are not found in AI models such as ChatGPT. The current model is called ChatSOME, where SOME abbreviates Self-Organizing MEaning. The article summarizes neural network highlights since the 1950s and leading models, including adaptive resonance, deep learning, LLMs, and transformers.”
Best to all,
Stephen Grossberg
sites.bu.edu/steveg