<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:Tahoma;
panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;
color:black;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
pre
{mso-style-priority:99;
mso-style-link:"HTML Preformatted Char";
margin:0in;
font-size:10.0pt;
font-family:"Courier New";
color:black;}
span.HTMLPreformattedChar
{mso-style-name:"HTML Preformatted Char";
mso-style-priority:99;
mso-style-link:"HTML Preformatted";
font-family:"Courier New";
color:black;}
span.EmailStyle21
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="#0563C1" vlink="#954F72" style="word-wrap:break-word">
<div class="WordSection1">
<pre><span style="font-size:11.0pt;font-family:"Calibri",sans-serif">Responding to Stephen José Hanson’s comment:<o:p></o:p></span></pre>
<pre><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"><o:p> </o:p></span></pre>
<pre><b><u><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#C00000;background:yellow;mso-highlight:yellow">“Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. there will be no merging of symbols and NN in the near or distant future, because it would be useless.”</span></u></b><b><u><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#C00000"><o:p></o:p></span></u></b></pre>
<pre><u><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"><o:p><span style="text-decoration:none"> </span></o:p></span></u></pre>
<pre><span style="font-size:11.0pt;font-family:"Calibri",sans-serif">Steve, even a simple CNN that recognizes just cats and dogs sends an output signal that is symbolic. So a basic NN classifier is already a neuro-symbolic system. Such systems already exist, contrary to your statement above. One of the next steps is to extract more symbolic information from these systems. That’s what you find in Hinton’s GLOM approach: finding parts of objects. Once you find those parts, which essentially correspond to certain abstractions (e.g., a leg or an eye of a cat), you can then transmit that information in symbolic form to whatever is receiving the whole-object information. Beyond GLOM, there are many other methods in computer vision that try to do the same thing, that is, extract part information. I can send you references if you want. So neuro-symbolic work is already happening, contrary to what you are saying. Gary’s reference to the IBM conference indicates this is an emerging topic in AI. And, of course, </span><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">you also have </span><span style="font-size:11.0pt;font-family:"Calibri",sans-serif">the GLOM-type work. In addition, DARPA’s conception of Explainable AI (<a href="https://www.darpa.mil/program/explainable-artificial-intelligence">Explainable Artificial Intelligence (darpa.mil)</a>) was also neuro-symbolic, as shown in the figure below. The idea is to identify objects based on their parts. So the figure below says that it’s a cat because it has fur, whiskers, and claws, plus an unlabeled visual feature. <o:p></o:p></span></pre>
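The two claims above can be sketched in a few lines of Python (a hypothetical illustration only, not code from GLOM or the DARPA program): a network's continuous scores become a discrete symbol via argmax, and detected parts can feed a simple symbolic rule, as in the "it's a cat because fur, whiskers, and claws" figure. The names `classify`, `is_cat`, and `CAT_PARTS` are made up for this sketch.

```python
# Stand-in for a CNN's softmax output over two classes.
LABELS = ["cat", "dog"]

def classify(scores):
    """Map the network's continuous output scores to a symbolic label."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return LABELS[best]  # a discrete symbol, not a vector

# Parts-based symbolic rule: call it a cat if enough cat-parts are detected.
CAT_PARTS = {"fur", "whiskers", "claws"}

def is_cat(detected_parts, threshold=2):
    """Symbolic reasoning over part symbols extracted by a vision model."""
    return len(CAT_PARTS & set(detected_parts)) >= threshold

print(classify([0.9, 0.1]))         # -> cat
print(is_cat({"fur", "whiskers"}))  # -> True
```

The point of the sketch is only that the boundary between "neural" and "symbolic" is crossed the moment continuous activations are thresholded or argmaxed into discrete tokens.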
<pre><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"><o:p> </o:p></span></pre>
<pre><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">Below are also two</span><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"> figures from Doran et al. (2017) that explain how a neuro-symbolic system would work. The second figure, Fig. 5, shows how a reasoning system might work, and that’s very similar to how we reason in our heads. Hope this helps. <o:p></o:p></span></pre>
<pre><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"><o:p> </o:p></span></pre>
<pre><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;background:yellow;mso-highlight:yellow">The way forward is indeed neuro-symbolic, as Gary said, and it’s happening now, with perhaps Hinton’s GLOM showing the way.</span><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"><o:p></o:p></span></pre>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><span style="color:#C00000;background:white">Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. <i>arXiv preprint arXiv:1710.00794</i>.</span><span style="color:#C00000"><o:p></o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Asim Roy<o:p></o:p></p>
<p class="MsoNormal">Professor, Information Systems<o:p></o:p></p>
<p class="MsoNormal">Arizona State University<o:p></o:p></p>
<p class="MsoNormal"><a href="https://lifeboat.com/ex/bios.asim.roy">Lifeboat Foundation Bios: Professor Asim Roy</a><o:p></o:p></p>
<p class="MsoNormal"><a href="https://isearch.asu.edu/profile/9973">Asim Roy | iSearch (asu.edu)</a><o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><img border="0" width="813" height="485" style="width:8.4687in;height:5.052in" id="Picture_x0020_4" src="cid:image001.png@01D92B7E.0A81C250" alt="Timeline
Description automatically generated"><o:p></o:p></p>
<p class="MsoNormal"><span style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><img border="0" width="554" height="464" style="width:5.7708in;height:4.8333in" id="Picture_x0020_2" src="cid:image002.png@01D92B7E.0A81C250"><span style="color:windowtext"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:windowtext"><img border="0" width="858" height="480" style="width:8.9375in;height:5.0in" id="Picture_x0020_3" src="cid:image003.png@01D92B7E.0A81C250"></span><span style="color:windowtext"><o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">======================================================================================================================================================<o:p></o:p></p>
<p><span style="font-size:13.5pt;font-family:"Courier New"">Gary, </span><o:p></o:p></p>
<pre>"vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here"<o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<p class="MsoNormal"><span style="font-family:"Courier New"">As usual, you are distorting the point here. What Juergen is chronicling is WORKING AI (the big bang aside for a moment), and I think we do agree on some of the LLM nonsense that is in a hyperbolic loop at this point.
<br>
<br>
But AI from the '70s frankly failed, including NN. Expert systems, the apex application, couldn't even suggest decent wines.<br>
Language understanding, planning, etc.: please point us to the working systems you are talking about. These things are broken. Why would we try to blend broken systems with a classifier that has human-to-superhuman classification accuracy? What would it do? Pick up that last 1% of error? Explain VGG? We don't know how these DLs work in any case... good luck with that! (See the comments on this topic from Yann and me in the recent WIAS series!)<br>
<br>
Frankly, the last gasp of AI in the '70s was the US government's fifth-generation response in Austin, Texas: MCC (launched in the early '80s), which shook down hundreds of companies for $1M a year and plowed all the money into reasoning, planning, and NL knowledge representation. Oh yeah, and Doug Lenat, who predicted every year we went down there that CYC would become intelligent in 2001! Maybe 2010! I was part of the group from Bell Labs that was supposed to provide analysis and harvest the AI fiesta each year. There was nothing. What survived of CYC, and of the NL and reasoning breakthroughs? Nothing. Nothing survived this money party.
<br>
<br>
So here we are, where NN comes back (just as CYC was about to burst into intelligence!) under rather unlikely and seemingly marginal tweaks to the NN backprop algorithm, and now works pretty much daily with breakthroughs, ignoring LLMs for the moment, which I believe are likely to crash in on themselves.<br>
<br>
Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. there will be no merging of symbols and NN in the near or distant future, because it would be useless.<br>
<br>
Best,<br>
<br>
Steve</span><o:p></o:p></p>
<p class="MsoNormal">On 1/14/23 07:04, Gary Marcus wrote:<o:p></o:p></p>
<pre>Dear Juergen,<o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI do not in any way make this a thorough and balanced exercise with respect to the field as a whole.<o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>I am 100% with Andrzej Wichert in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like do not solve that. <o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be OK if the title and aspiration didn’t aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do. <o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction. <o:p></o:p></pre>
<pre>Historians looking back on this paper will see too little about the roots of that trend documented here.<o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>Gary <o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen <a href="mailto:juergen@idsia.ch"><juergen@idsia.ch></a> wrote:<o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>Dear Andrzej, thanks, but come on, the report cites lots of “symbolic” AI, from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and “traditional” AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again is the table of contents:<o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>Sec. 1: Introduction<o:p></o:p></pre>
<pre>Sec. 2: 1676: The Chain Rule For Backward Credit Assignment<o:p></o:p></pre>
<pre>Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning<o:p></o:p></pre>
<pre>Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs<o:p></o:p></pre>
<pre>Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning)<o:p></o:p></pre>
<pre>Sec. 6: 1965: First Deep Learning<o:p></o:p></pre>
<pre>Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent <o:p></o:p></pre>
<pre>Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor. <o:p></o:p></pre>
<pre>Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units) <o:p></o:p></pre>
<pre>Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc<o:p></o:p></pre>
<pre>Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners<o:p></o:p></pre>
<pre>Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command <o:p></o:p></pre>
<pre>Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention<o:p></o:p></pre>
<pre>Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs<o:p></o:p></pre>
<pre>Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients<o:p></o:p></pre>
<pre>Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets<o:p></o:p></pre>
<pre>Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher <o:p></o:p></pre>
<pre>Sec. 18: It's the Hardware, Stupid!<o:p></o:p></pre>
<pre>Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science<o:p></o:p></pre>
<pre>Sec. 20: The Broader Historic Context from Big Bang to Far Future<o:p></o:p></pre>
<pre>Sec. 21: Acknowledgments<o:p></o:p></pre>
<pre>Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1])<o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>Tweet: <a href="https://twitter.com/SchmidhuberAI/status/1606333832956973060">https://twitter.com/SchmidhuberAI/status/1606333832956973060</a> <o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>Jürgen<o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre><o:p> </o:p></pre>
<pre><o:p> </o:p></pre>
<pre><o:p> </o:p></pre>
<pre><o:p> </o:p></pre>
<pre>On 13. Jan 2023, at 14:40, Andrzej Wichert <a href="mailto:andreas.wichert@tecnico.ulisboa.pt"><andreas.wichert@tecnico.ulisboa.pt></a> wrote:<o:p></o:p></pre>
<pre>Dear Juergen,<o:p></o:p></pre>
<pre>You make the same mistake that was made in the early 1970s: you identify deep learning with modern AI. The paper should instead be called “Annotated History of Deep Learning”.<o:p></o:p></pre>
<pre>Otherwise, you ignore symbolic AI, such as search, production systems, knowledge representation, planning, etc., as if it is not part of AI anymore (as your title suggests).<o:p></o:p></pre>
<pre>Best,<o:p></o:p></pre>
<pre>Andreas<o:p></o:p></pre>
<pre>--------------------------------------------------------------------------------------------------<o:p></o:p></pre>
<pre>Prof. Auxiliar Andreas Wichert <o:p></o:p></pre>
<pre><a href="http://web.tecnico.ulisboa.pt/andreas.wichert/">http://web.tecnico.ulisboa.pt/andreas.wichert/</a><o:p></o:p></pre>
<pre>-<o:p></o:p></pre>
<pre><a href="https://www.amazon.com/author/andreaswichert">https://www.amazon.com/author/andreaswichert</a><o:p></o:p></pre>
<pre>Instituto Superior Técnico - Universidade de Lisboa<o:p></o:p></pre>
<pre>Campus IST-Taguspark<o:p></o:p></pre>
<pre>Avenida Professor Cavaco Silva<o:p></o:p></pre>
<pre>Phone: +351 214233231<o:p></o:p></pre>
<pre>2744-016 Porto Salvo, Portugal<o:p></o:p></pre>
<pre>On 13 Jan 2023, at 08:13, Schmidhuber Juergen <a href="mailto:juergen@idsia.ch"><juergen@idsia.ch></a> wrote:<o:p></o:p></pre>
<pre>Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey):<o:p></o:p></pre>
<pre><a href="https://arxiv.org/abs/2212.11279">https://arxiv.org/abs/2212.11279</a><o:p></o:p></pre>
<pre><a href="https://people.idsia.ch/~juergen/deep-learning-history.html">https://people.idsia.ch/~juergen/deep-learning-history.html</a><o:p></o:p></pre>
<pre>This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know at <a href="mailto:juergen@idsia.ch">juergen@idsia.ch</a> if you spot any remaining errors or have suggestions for improvements.<o:p></o:p></pre>
<pre>Happy New Year!<o:p></o:p></pre>
<pre>Jürgen<o:p></o:p></pre>
<pre><o:p> </o:p></pre>
<pre>-- <o:p></o:p></pre>
<pre>Stephen José Hanson<o:p></o:p></pre>
<pre>Professor, Psychology Department<o:p></o:p></pre>
<pre>Director, RUBIC (Rutgers University Brain Imaging Center)<o:p></o:p></pre>
<pre>Member, Executive Committee, RUCCS<o:p></o:p></pre>
<p class="MsoNormal"><span style="color:windowtext"><o:p> </o:p></span></p>
</div>
</body>
</html>