Connectionists: Early history of symbolic and neural network approaches to AI

Weng, Juyang weng at msu.edu
Sun Feb 25 17:48:59 EST 2024


Dear Steve,
    Thank you for your detailed response.
   To focus on my three key questions, I respond to your comments below by clarifying each of them.
   (1) You agreed that the grandmother cell does not explain how to learn invariances, such as location invariance, scale invariance, and orientation invariance.  I object to the grandmother-cell idea because it is a symbolic concept.
    (a) You wrote that you glanced at our Cresceptron paper. It says: “the user manually draws a polygon outlining the region of interest and types in the label of its class….”  Contrary to what you wrote about Cresceptron, it is a real-time, incremental learning algorithm.  Cresceptron was the first neural network to learn incrementally from natural, cluttered scenes, using my polygon-based image annotation.  ImageNet later took the polygon idea without citing Cresceptron and simplified my polygon to a rectangle.
    (b) Carpenter, Grossberg, and Reynolds (ARTMAP, 1991) wrote, "on a trial and error basis using only local operations".  Does your ARTMAP algorithm step (A.1) give different accuracies from different initializations?
    https://www.sciencedirect.com/science/article/abs/pii/089360809190012T
    (c) If I understand correctly, the ARTMAP above takes only monolithic inputs, where "the vectors may encode visual representations of objects," not vectors from cluttered scenes containing irrelevant backgrounds, as Cresceptron handled.  A later paper of yours deals with partial views, but it does not deal with natural images of cluttered scenes.
    (2) The SOVEREIGN model discussed in your book does not start from a single cell, as the human brain does, and does not learn incrementally.
    You wrote, "I am bewildered  by your comment above".  Probably you have not considered brain-scale development.  A zygote starts from a single cell.  Thus, the brain should start from a single cell too.   DN3 deals with brain-patterning from a single cell.   Fig. 16.42 of SOVEREIGN in your book has a static block diagram and therefore, it does not deal with brain patterning.  It has an open skull to allow you to manually inject symbols as blocks.
   (3) Your models do not explain how to learn any Turing machine.
   You wrote, "no biological neural network model of how brains make minds learn Turing machines, except in the sense that our cognitive systems, that are parts of our brains, have learned to generate emergent properties that invented and can mathematically analyze Turing machines.  Is that what you mean?"
    No. Please read the paper below on how a DN as a whole learns any Turing machine, not just its cognitive subsystem; a toy sketch of what a Turing machine's transition table looks like follows the link.  This is a necessary condition for any brain-modeling network, because any such model must at least be complete in Turing machine logic.
    https://www.scirp.org/reference/referencespapers?referenceid=1400949
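
    For readers unfamiliar with the claim: a Turing machine is fully specified by a finite transition table delta(state, symbol) -> (next state, symbol to write, head move).  Below is a toy sketch, mine and for illustration only, not code from the paper, of such a table together with an emulator loop.  Roughly speaking, a network that is complete in Turing machine logic must at least be able to acquire and execute such a table for an arbitrary delta.

    # A Turing machine is just a finite transition table:
    #   delta(state, symbol) -> (next state, symbol to write, head move)
    # Toy machine: flip bits left to right until a blank '_' is read.
    delta = {
        ("scan", "0"): ("scan", "1", +1),
        ("scan", "1"): ("scan", "0", +1),
        ("scan", "_"): ("halt", "_", 0),
    }

    def run(tape, state="scan", head=0):
        """Emulate the machine until it halts; return the final tape."""
        tape = list(tape)
        while state != "halt":
            symbol = tape[head] if head < len(tape) else "_"
            state, write, move = delta[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += move
        return "".join(tape)

    print(run("0110_"))   # prints "1001_"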
    Best regards,
-John
Weng, J. (2011). Three Theorems: Brain-Like Networks Logically Reason and Optimally Generalize. Proceedings of the International Joint Conference on Neural Networks, San Jose, CA, 31 July - 5 August 2011, 2983-2990.

On Sat, Feb 24, 2024 at 6:32 PM Grossberg, Stephen <steve at bu.edu> wrote:

Dear John,



I reply below in italics, among your questions:



From: Weng, Juyang <weng at msu.edu>
Date: Saturday, February 24, 2024 at 4:44 PM
To: Jeffrey Bowers <J.Bowers at bristol.ac.uk>, Grossberg, Stephen <steve at bu.edu>, KENTRIDGE, ROBERT W. <robert.kentridge at durham.ac.uk>, Gary Marcus <gary.marcus at nyu.edu>, Laurent Mertens <laurent.mertens at kuleuven.be>
Cc: connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Early history of symbolic and neural network approaches to AI

Dear Steve,

    I have had the pleasure of listening to and following your various ART models.  At your suggestion, I also bought your book "Conscious Mind, Resonant Brain: How Each Brain Makes a Mind" and browsed it.

    Let me ask some questions that will be useful for many people on this list:
    (1) Do you agree that the grandmother cell does not explain how to learn invariances, such as location invariance, scale invariance, and orientation invariance?  Of course, those invariances are not perfect, as explained in my Cresceptron paper (IJCV 1997), arguably the first Deep Learning network for 3D.



The grandmother cell concept is nothing more than a verbal term. It is not a computational model, so it does not learn anything.



I glanced at your Cresceptron paper. It says:



“the user manually draws a polygon outlining the region of interest and types in the label of its class….”



It seems that this is not a self-organizing model that learns in real time through incremental learning. Our model is, and explains challenging neurobiological data along the way.



You also write that your model may be “the first Deep Learning network for 3D”. Our work does not use Deep Learning, which has 17 serious computational problems in addition to not being biologically plausible. None of these problems of back propagation and Deep Learning have been a problem for Adaptive Resonance Theory since I introduced it in 1976.



In particular, Deep Learning is both untrustworthy (because it is not explainable) and unreliable (because it can experience catastrophic forgetting).



I review these 17 problems in my 2021 Magnum Opus. You can also find them discussed in Section 17 of the following 1988 article that was published in the first issue of Neural Networks:



Grossberg, S. (1988). Nonlinear neural networks: Principles, mechanisms, and architectures. Neural Networks, 1, 17-61.

https://sites.bu.edu/steveg/files/2016/06/Gro1988NN.pdf

    (2) The model discussed in your book is not developmental; namely, it does not start from a single cell like the human brain and does not learn incrementally.  Could you point me to an incremental learning algorithm in your book if what I write is incorrect?



I am bewildered by your comment above, since it is obviously not true about ANY of my neural models of brain development and learning, all of which self-organize and work in an incremental learning setting.



Such models are described in a self-contained and non-technical way in my book.



Scores of my articles about self-organizing brain development and learning are described with all technical details on my web page, sites.bu.edu/steveg.



I am unclear what you mean by the phrase: “does not start from a single cell like the human brain” since you clearly do not mean that the human brain is composed of a single cell.



On the other hand, Chapter 17 of my 2021 Magnum Opus clarifies that principles of complementarity, uncertainty, and resonance that are embodied in Adaptive Resonance Theory, as well as in various of my other neural network models, also have precursors in cellular organisms that existed long before human brains did, including slime molds and Hydras.



These design principles thus seem to have been conserved for a very long time during the evolutionary process.



Principles of uncertainty, complementarity, and resonance also have analogs in the laws of physics with which our brains have ceaselessly interacted for eons during their evolution. Quantum mechanics is one example of these principles in physics.



Explaining in detail how our brains were shaped during evolution to also embody these physical principles is a long-term project worthy of a great deal of additional research.



    (3) Your model does not explain how to learn any Turing machines.



Human brains self-organize using analog signals and parallel computations, and do so in real time. Turing machines do not have these properties.



So, yes, no biological neural network model of how brains make minds learns Turing machines, except in the sense that our cognitive systems, which are parts of our brains, have learned to generate emergent properties that invented and can mathematically analyze Turing machines.



Is that what you mean?



To explain how our brains can mathematically understand Turing machines, you first need to explain how our brains have learned to represent and use numerical representations in the first place.



The question “where do numbers come from” is a fundamental one.



Some progress has been made in modeling “where numbers come from” and how our brains can learn to use numerical representations and mathematical symbols. A LOT more work needs to be done on this fundamental problem.



Perhaps the following article may be helpful:



Grossberg, S. and Repin, D. (2003) A neural model of how the brain represents and compares multi-digit numbers: Spatial and categorical processes. Neural Networks, 16, 1107-1140.

https://sites.bu.edu/steveg/files/2016/06/GroRep2003NN.pdf



Best,



Steve
