Connectionists: Scientific Integrity, the 2021 Turing Lecture, etc.

Juyang Weng juyang.weng at gmail.com
Mon Jan 3 19:53:21 EST 2022


Schmidhuber Juergen wrote:

"Steve, almost all of deep learning is about engineering and problem
solving, not about explaining or modeling biological
neurons/synapses/dendrites."

You would be surprised:  Any model of biological brains requires
CONSCIOUSNESS as a necessary condition.

My model of biological brains was rejected by AAAI 2021 and ICDL 2021,
and was finally accepted by an IEEE electronics conference!  Human nature
is blocking our ability to understand biological brains.  We often
mention biological brains, but when a model is actually presented, we
fall far short of understanding it.

As Michael Jordan correctly challenged \cite{Gome14}, any such model
must holistically solve a set of open problems.   The present work holistically
solves the following 20 "million-dollar problems":

1. the image-annotation problem (e.g., giving the retina a bounding box to
learn from, as in ImageNet \cite{Russakovsky15}),

2. the sensorimotor recurrence problem (e.g., any big data set is invalid
\cite{WengPSUTS21}),

3. the motor-supervision problem (e.g., impractical to supervise motors all
the time),

4. the sensor calibration problem (e.g., a life calibrates the eyes
automatically),

5. the inverse kinematics problem (e.g., a life calibrates all redundant
limbs automatically),

6. the government-free problem (i.e., no intelligent governments inside the
brain),

7. the closed-skull problem (e.g., supervising hidden neurons is not
biologically plausible),

8. the nonlinear controller problem (e.g., a brain is a highly nonlinear
controller),

9. the curse of dimensionality problem (e.g., too many receptors on the
retina),

10. the under-sample problem (i.e., few available events in a life
\cite{WengLCA09}),

11. the distributed vs. local representations problem (i.e., both
representations emerge),

12. the frame problem (also called the symbol grounding problem; thus the
model must be free from any symbols),

13. the local minima problem (so error-backprop learning must be avoided
\cite{Krizhevsky17,LeCun15}; a minimal sketch follows this list),

14. the abstraction problem (i.e., requiring various invariances and
transfers \cite{WengIEEE-IS2014}),

15. the rule-like manipulation problem (e.g., not just fitting big data
\cite{Harnad90,WengIJHR2020}),

16. the smooth representations problem (e.g., so that neurons can be
recruited after brain injuries \cite{Elman97,Wu2019DN-2}),

17. the motivation problem (e.g., including reinforcement and various
emotions \cite{Dreyfus92,WengNAI2e}),

18. the global optimality problem (e.g., comparisons under the Three
Learning Conditions \cite{WengPSUTS-ICDL21}),

19. the auto-programming for general purposes problem (e.g., writing a
complex program \cite{WengIJHR2020}) and

20. the brain-thinking problem (e.g., planning and discovery
\cite{Turing50,WuThink21}).
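
On item 13, here is a minimal sketch of the local-minima phenomenon,
assuming a toy one-dimensional loss f(x) = x^4 - 3x^2 + x chosen here
purely for illustration (it is not from any of the cited works).  Plain
gradient descent, the core of error backprop, ends up in a different
basin depending only on where it starts:

    # Toy illustration of the local-minima problem (item 13).
    # Gradient descent on the non-convex loss f(x) = x^4 - 3x^2 + x.
    # The global minimum is near x = -1.30 (f ~ -3.51); a local
    # minimum sits near x = +1.13 (f ~ -1.07).

    def f(x):
        return x**4 - 3*x**2 + x

    def grad(x):
        return 4*x**3 - 6*x + 1

    def descend(x, lr=0.01, steps=2000):
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    for x0 in (2.0, -2.0):
        x = descend(x0)
        print(f"start {x0:+.1f} -> x = {x:+.3f}, f(x) = {f(x):+.3f}")

    # Started at +2.0, descent is trapped in the local minimum near
    # +1.13; started at -2.0, it reaches the global minimum near -1.30.

Started in the wrong basin, gradient descent never reaches the global
minimum no matter how many steps it takes; this one-dimensional toy
only makes the premise of item 13 concrete.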



Best regards,
-John
-- 
Juyang (John) Weng