Connectionists: Consciousness, chatGPT, and Grossberg's "Conscious Mind, Resonant Brain"
www.BillHowell.ca
Bill at BillHowell.ca
Thu Oct 26 15:49:57 EDT 2023
I am at the initial stages of going through Stephen Grossberg's 2021 book "Conscious Mind, Resonant Brain". One
thing that struck me last March-April 2023 was a very vague resemblance between the basic "building block" of
Transformer Neural Networks (TrNNs) and some of Grossberg's work dating back to 1972-99, which has been greatly
extended since then. Granted, TrNNs lack essential features of Grossberg's architectures, so I'm not expecting a
lot from them; perhaps the large-scale TrNNs will surprise us anyway.
Three questions that came to mind back in March-April were :
1. Do "Large Language Models" (LLMs) (such as chatGPT, LaMDA, etc.) already exhibit a [protero, incipient]
   consciousness, in particular given the rough similarity of the basic unit of "Transformer Neural Networks"
   (TrNNs) to one of Grossberg's general "modules"? The latter are proposed as a small number of units that are
   readily recombined, with slight modifications, as a basis for much of brain architecture, much as the small
   number of concepts in physics can be applied across a broad range of themes. TrNN experts were categorical in
   emphasizing that sentience (here I use that term as being "somewhat similar" to consciousness) is not
   built in (cannot arise), but might there be something more to this?
2. How difficult would it be to augment "Transformer Neural Networks" (TrNNs) with Grossberg's [concept,
   architecture]s, including the emergent systems for consciousness? Perhaps this would combine the scalability
   of the former with the [robust, extendable] foundations of the latter, which are supported by [broad, diverse,
   deep] :
   * data from [neuroscience, psychology]
   * success in real-world advanced [science, engineering] applications
   * decades of pioneering work in the mathematical basis of Neural Networks (non-linear networks, Adaptive
     Resonance Theory, neural [architecture, function, process]s, etc.)
3. Are the current (semi-manual) "controls" of "Large Language Models" (LLMs) going in the direction of machine
   consciousness, without those involved being aware of it? Will "controls" ultimately require machine
   consciousness as one of their components, in particular for [learning, evolution] in a stable and robust
   manner?
Do you have any thoughts on this? I still need to do my homework on those questions, but only after I get further
into the book, and build (hopefully) some Spiking Neural Network (SNN) code to implement a few basic ideas from
Grossberg. Gail Carpenter has done some work with SNNs, so I am looking forward to reading through that.
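To make the SNN part concrete, the kind of toy starting point I have in mind is nothing fancier than a single
leaky integrate-and-fire (LIF) unit stepped with Euler integration, onto which Grossberg-style dynamics could
later be grafted. This is a generic textbook SNN neuron, not Grossberg's shunting equations, and every parameter
value below is an arbitrary placeholder :

import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, Euler-stepped.
# Generic textbook model, NOT Grossberg's shunting dynamics;
# parameter values are arbitrary placeholders for illustration.
def lif_run(input_current, dt=1e-3, tau=20e-3, v_rest=-65.0,
            v_reset=-70.0, v_thresh=-50.0, r_m=10.0):
    """Simulate one LIF neuron; return the membrane trace and spike times."""
    v = v_rest
    v_trace, spikes = [], []
    for step, i_ext in enumerate(input_current):
        # dv/dt = (-(v - v_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau
        if v >= v_thresh:           # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset             # reset the membrane potential
        v_trace.append(v)
    return np.array(v_trace), spikes

if __name__ == "__main__":
    current = np.full(1000, 2.0)    # 1 s of constant drive (arbitrary units)
    trace, spike_times = lif_run(current)
    print(f"{len(spike_times)} spikes in 1 s of simulated time")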
webSite with details about the book: perhaps helpful for readers and those considering its purchase
For my own use as an aid to going through the book, I spent considerable time to build :
* directories containing [html caption, image]s
* a webPage of "Themes" created from the captions (searchable)
* a few webPages to collect information related to consciousness, but more importantly providing some detail about
  the overall content of Grossberg's book.
I felt that this might be helpful for others who are reading through the book. I use it especially to open multiple
images at a time for easy comparison, and will build a simple script to do that automatically for selected themes.
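As a rough sketch of what that script might look like (the directory layout and file naming below are placeholders,
not the actual structure of my webSite) :

#!/usr/bin/env python3
# Open, in browser tabs, every captioned-figure page that mentions a theme keyword.
# The "captions" directory and .html naming scheme are hypothetical placeholders.
import sys
import webbrowser
from pathlib import Path

CAPTION_DIR = Path("captions")     # hypothetical directory of caption .html pages

def open_theme(keyword: str) -> None:
    matches = [p for p in sorted(CAPTION_DIR.glob("*.html"))
               if keyword.lower() in p.read_text(errors="ignore").lower()]
    for page in matches:
        webbrowser.open(page.resolve().as_uri())   # one browser tab per figure
    print(f"opened {len(matches)} pages matching '{keyword}'")

if __name__ == "__main__":
    open_theme(sys.argv[1] if len(sys.argv) > 1 else "resonance")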
Having permission to post images from Grossberg's book on my webSite, I have put up a very early version so others
might benefit. Here are some quick links :
* a list of [figure, table]s, with links to view the captioned figures and tables
* a simple menu listing the webPages on the webSite
* an initial list of a variety of themes from the book, currently generated by a simple bash script (needs
  improvement; see the rough sketch just below)
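The general idea behind that theme list, whether it stays in bash or moves to something else, is roughly the
following sketch; the keyword list and file layout are illustrative only, not the script actually in use :

#!/usr/bin/env python3
# Rough sketch of the theme-list step: for each theme keyword, record which
# caption files mention it and write a simple plain-text index.
# Keywords and paths are illustrative placeholders only.
from pathlib import Path

CAPTION_DIR = Path("captions")                      # hypothetical location
THEMES = ["resonance", "attention", "learning",     # illustrative keywords
          "consciousness", "vision"]

def build_theme_index(out_file: str = "themes.txt") -> None:
    captions = {p.name: p.read_text(errors="ignore").lower()
                for p in sorted(CAPTION_DIR.glob("*.html"))}
    with open(out_file, "w") as out:
        for theme in THEMES:
            hits = [name for name, text in captions.items() if theme in text]
            out.write(f"{theme} ({len(hits)} figures)\n")
            for name in hits:
                out.write(f"    {name}\n")

if __name__ == "__main__":
    build_theme_index()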
All of the above are at a very [initial, incomplete] stage, and it will take months to progress. I am actually more
interested in the "non-conscious" part of Grossberg's work, the lessons from it, and how it ties in with
consciousness, than I am in the theme of consciousness alone. Grossberg does the best work that I know of on
that. Furthermore, some of my priorities actually go in the other direction :
* what I call callerID-SNNs ("what if", not even a hypothesis yet)
* [Mendelian, Lamarckian] heredity questions.
Mr. Bill Howell, Bill at BillHowell.ca 1-587-707-2027
http://www.BillHowell.ca/ (browser shows root directory of webSite)
http://www.BillHowell.ca/home.html (start webPage browsing)
P.O. Box 299, Hussar, Alberta, T0J1S0
member - International Neural Network Society (INNS), IEEE Computational Intelligence Society (IEEE-CIS),
WCCI2020 Glasgow, Publicity Chair mass emails, http://wcci2020.org/
Retired 2012: Science Research Manager (SE-REM-01) at Natural Resources Canada, CanmetMINING, Ottawa