Connectionists: A challenge to Post-Selections in Deep Learning

Juyang Weng juyang.weng at gmail.com
Thu Mar 10 17:21:37 EST 2022


Through a review of AI papers published in Nature since 2015, this report
discusses a class of technical flaws, called Post-Selection, in the charged
papers. The report suggests an appropriate protocol, explains the reasons
for the protocol, and shows why what the papers have done is inappropriate
and therefore yields misleading results. The charges below apply to whole
systems and to system components, and in all learning modes, including
supervised, reinforcement, and swarm learning, since the concepts of
training sets, validation sets, and test sets apply to all of them. A
reinforcement-learning algorithm includes not only a handcrafted form of
task-specific, desired answers but also values for all answers, desired and
undesired. A supervised learning method typically does not provide values
for intermediate steps (e.g., hidden features); in contrast, a
reinforcement-learning method must provide values for intermediate steps
using a greedy search (e.g., a time discount). Casting dice is the key
protocol flaw: it owes due transparency about all the losers (e.g., how
well they perform). A commercial product is impractical if it requires
every customer to cast dice, where almost all trained “lives” cause
accidents and are punished by death, except the luckiest “life”. All the
losers and the luckiest are unethically determined by so-called “unseen”
(in fact, “first seen”) test sets, because the human programmer saw all the
scores before deciding who the losers are and who the luckiest is. Such a
deep-learning methodology gives its products no credibility.

http://www.cse.msu.edu/%7eweng/research/2021-06-28-Report-to-Nature-specific-PSUTS.pdf

-- 
Juyang (John) Weng
