Connectionists: The symbolist quagmire

Gary Marcus gary.marcus at nyu.edu
Tue Jun 21 17:19:52 EDT 2022


not that i really know what consciousness is, but i doubt that it is a requirement for any of the challenges i have raised, e.g., with respect to common sense or natural language understanding.

systems like AlphaFold and turn-by-turn navigation presumably lack consciousness, yet they give us perfectly reasonable answers using symbolic inputs. I don’t see why more general forms of AI need to be different, though they will undoubtedly require richer representations than are currently trendy.
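(To make the turn-by-turn point concrete: route guidance is, at bottom, shortest-path search over a graph whose nodes and edge labels are symbols. Below is a minimal sketch using Dijkstra's algorithm; the intersection names and distances are invented for illustration, not taken from any real system.)

```python
import heapq

# Toy street network: symbolic intersection names mapped to
# (neighbor, distance) pairs. All names and distances are made up.
GRAPH = {
    "Home":     [("1st&Main", 2), ("1st&Oak", 5)],
    "1st&Main": [("2nd&Main", 3)],
    "1st&Oak":  [("2nd&Main", 1)],
    "2nd&Main": [("Office", 4)],
    "Office":   [],
}

def directions(start, goal):
    """Dijkstra's algorithm: purely symbolic inputs, symbolic output."""
    pq = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, dist in GRAPH[node]:
            if nbr not in seen:
                heapq.heappush(pq, (cost + dist, nbr, path + [nbr]))
    return None  # goal unreachable

print(directions("Home", "Office"))
# → (9, ['Home', '1st&Main', '2nd&Main', 'Office'])
```

No representation here is anything but a symbol standing for something else, and the answer is nonetheless perfectly reasonable.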

> On Jun 21, 2022, at 2:14 PM, Juyang Weng <juyang.weng at gmail.com> wrote:
> 
> 
> Dear Gary,
> 
> You wrote: "My own view is that arguments around symbols per se are not very productive, and that the more interesting questions center around what you *do* with symbols once you have them.  If you take symbols to be patterns of information that stand for other things, like ASCII encodings, or individual bits for features (e.g. On or Off for a thermostat state), then practically every computational model anywhere on the spectrum makes use of symbols. For example the inputs and outputs (perhaps after a winner-take-all operation or somesuch) of typical neural networks are symbols in this sense, standing for things like individual words, characters, directions on a joystick etc."  
> 
> I respectfully disagree: that is precisely why "practically every computational model anywhere" cannot learn consciousness.  Such models are basically pattern-recognition machines for a specific task.  
> 
> I skip the "data selection" problem in deep learning here.  Deep learning does not merely hit a wall; all of its published data appear to be invalid.
> 
> Gary, this issue is probably too fundamental to settle unless you first try to understand the conscious learning algorithm (see below), the first ever in the world, as far as I am humbly aware.  
> 
> Let me put it in intuitive terms:  
> 
> (1) You have a series of ASCII symbols, e.g., ASCII-1, ASCII-2, ASCII-3, ASCII-4, ...  Suppose you have 1 million such ASCII symbols.  Any number will do, as long as it is large.
> 
> (2) You specify the meanings of these ASCII symbols in your design documents:
> ASCII-1: forward-move-of-joystick-A,
> ASCII-2: backward-move-of-joystick-A,
> ASCII-3: left-move-of-joystick-A,
> ASCII-4: right-move-of-joystick-A,
> ...
> You have at least 1 million such lines.
> 
> (3) Your machine does not read the design document in (2), and it cannot think about that document.  It only learns the mapping from sensory inputs to one of these ASCII symbols.
> 
> (4) Therefore, your machine cannot attain the consciousness required to judge whether it is performing a joystick task (e.g., driving using a joystick) well, because your knowledge hierarchy (built from these 1 million symbols) is static.  The machine cannot compose new meanings from these symbols, because it does not understand any symbols at all!  Why do I understand my own moving forward?  I have no document like (2).  Moving forward is my own intent, my own volition!  I feel the effects of my volition and decide whether I want to repeat it. 
>  
> (5) Without consciousness, machine learning is static.  Consciousness must go beyond any static hierarchy. 
> (a) My children do: they have told me views (and intents) that surprised me, views I never taught them. 
> (b) That is also why a human brain can do research.  My research surprised my father-in-law; he did not believe I could do what I told him I could.
> 
> In summary, all ASCII symbols are a dead end.  Like drugs, they are addictive and waste our resources in AI.   
> 
> As the first conscious learning algorithm ever, the DN-3 neural network must autonomously create the fluid hierarchies that consciousness requires during human-like thinking.
> Please read about this first conscious learning algorithm, which will be able to do scientific research in the future:
> 
> Peer-reviewed version:
> @inproceedings{WengCLAIEE22,
>   author    = "J. Weng",
>   title     = "An Algorithmic Theory of Conscious Learning",
>   booktitle = "2022 3rd Int'l Conf. on Artificial Intelligence in Electronics Engineering",
>   address   = "Bangkok, Thailand",
>   pages     = "1-10",
>   month     = "Jan. 11-13",
>   year      = "2022",
>   note      = "\url{http://www.cse.msu.edu/~weng/research/ConsciousLearning-AIEE22rvsd-cite.pdf}"
> }
> 
> Not yet peer reviewed:
> @misc{WengDN3-RS22,
>   author       = "J. Weng",
>   title        = "A Developmental Network Model of Conscious Learning in Biological Brains",
>   howpublished = "Research Square",
>   pages        = "1-32",
>   month        = "June 7",
>   year         = "2022",
>   note         = "doi: \url{https://doi.org/10.21203/rs.3.rs-1700782/v2}, desk-rejected by {\em Nature}, {\em Science}, {\em PNAS}, {\em Neural Networks} and {\em ArXiv}" 
> }
> 
> Please kindly read them, get excited and ask questions.
> 
> Best regards,
> -John
> -- 
> Juyang (John) Weng