Connectionists: Annotated History of Modern AI and Deep Learning

Stephen José Hanson jose at rubic.rutgers.edu
Mon Jan 16 13:18:10 EST 2023


On 1/16/23 12:37, Gary Marcus wrote:
Dear Stephen,

Despite our agreement that LLMs are overhyped, I still have to take exception to this one.
Well, it's a start.

Working AI is not and has never been all neural networks.

Unfortunately, recently it has been. Although NNs per se failed in the 70s, soldiers like Grossberg kept the lamp burning through the 70s and early 80s, until Boltzmann machines and backprop appeared in 1984-86.


Relatively few commercial applications are pure neural nets, even today. Empirically there is lot of "working" hybrid and even symbolic AI out there already. Google Search, one of the most widely used commercial products, is and has been for a long time a combination of symbolic AI and deep learning.

Really? I thought it was inverse FFTs. Or maybe you consider any non-DL algorithm symbolic? I mean, your definition here has always been slippery... especially when you give talks.

Vehicle navigation systems, also very widely used commercially, are largely or perhaps entirely symbolic.

Hmm, except for Musk's attempts, which probably were hybrid or completely symbolic (but his cars hit people, right?). I thought Waymo was completely DL, with driver help.

 Many of the nascent efforts to turn LLMs into search engines appear to be hybrid systems, combining classical search with LLMs, and so on.  And as mentioned, Juergen's own company, trying to reshape industrial AI, is neurosymbolic in its fundamental design.
Really? I don't think so. Juergen, are you doing neurosymbolic things? Oh wait, is LSTM symbolic, Gary?

Neurosymbolic AI is also an active area of research; DeepMind, for example, just released a fascinating paper that translates symbolic code into Transformer-like neural networks. DeepMind is engaging in such research precisely because they recognize its potential importance. There are now also whole conferences devoted to Neurosymbolic AI, like this one next week: https://ibm.github.io/neuro-symbolic-ai/events/ns-workshop2023/

Yeah there are also Flat Earth International Conferences too: https://www.youtube.com/watch?v=4ylYvNnP1r

Pretending none of this exists is just silly. You can place your bets as you like, but there seems to be a lot that you seem unaware of, both in terms of research and commercial application.

Actually, I'm pretty much up to date on what's happening. Pretending something exists that doesn't, and probably never will, is delusional.

Steve

Gary



On Jan 16, 2023, at 05:19, Stephen José Hanson <jose at rubic.rutgers.edu><mailto:jose at rubic.rutgers.edu> wrote:



Gary,

"vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here"



As usual you are distorting the point here. What Juergen is chronicling is WORKING AI (the big bang aside for a moment), and I think we do agree on some of the LLM nonsense that is in a hyperbolic loop at this point.

But AI from the 70s frankly failed, including NNs. Expert systems, the apex application, couldn't even suggest decent wines.
Language understanding, planning, etc.: please point us to what working systems you are talking about. These things are broken. Why would we try to blend broken systems with a classifier that has human to superhuman classification accuracy? What would it do? Pick up that last 1% of error? Explain the VGG? We don't know how these DLs work in any case... good luck with that! (See the comments on this topic with Yann and me in the recent WIAS series!)

Frankly, the last gasp of AI in the 70s was the US government's fifth-generation response in Austin, Texas: MCC (launched in the early 80s), after shaking down hundreds of companies for $1M a year and plowing all the monies into reasoning, planning, and NL knowledge representation. Oh yeah, Doug Lenat, who predicted every year we went down there that CYC would become intelligent in 2001! Maybe 2010! I was part of the group from Bell Labs that was supposed to provide analysis and harvest the AI fiesta each year. There was nothing. What survived of CYC and the NL and reasoning breakthroughs? Nothing. Nothing survived this money party.

So here we are, where NNs come back (just as CYC was to burst into intelligence!) under rather unlikely and seemingly marginal tweaks to the NN backprop algo, and work pretty much daily with breakthroughs, ignoring LLMs for the moment, which I believe are likely to crash in on themselves.

Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen. There will be no merging of symbols and NNs in the near or distant future, because it would be useless.

Best,

Steve
On 1/14/23 07:04, Gary Marcus wrote:

Dear Juergen,

You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole.

I am 100% with Andrzej Wichert, in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that.

Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn't aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do.

My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction.
Historians looking back on this paper will see too little about the roots of that trend documented here.

Gary



On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen <juergen at idsia.ch><mailto:juergen at idsia.ch> wrote:

Dear Andrzej, thanks, but come on, the report cites lots of “symbolic” AI from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and “traditional" AI. Note that Sec. 18 and Sec. 19 go back even much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents:

Sec. 1: Introduction
Sec. 2: 1676: The Chain Rule For Backward Credit Assignment
Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning
Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs
Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning)
Sec. 6: 1965: First Deep Learning
Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent
Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor.
Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units)
Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc
Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners
Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command
Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention
Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs
Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients
Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets
Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher
Sec. 18: It's the Hardware, Stupid!
Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science
Sec. 20: The Broader Historic Context from Big Bang to Far Future
Sec. 21: Acknowledgments
Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1])

Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060

Jürgen







On 13. Jan 2023, at 14:40, Andrzej Wichert <andreas.wichert at tecnico.ulisboa.pt><mailto:andreas.wichert at tecnico.ulisboa.pt> wrote:
Dear Juergen,
You make the same mistake as was made in the early 1970s: you identify deep learning with modern AI. The paper should instead be called "Annotated History of Deep Learning."
Otherwise, you ignore symbolic AI, such as search, production systems, knowledge representation, and planning, as if it is not part of AI anymore (as your title suggests).
Best,
Andreas
--------------------------------------------------------------------------------------------------
Prof. Auxiliar Andreas Wichert
http://web.tecnico.ulisboa.pt/andreas.wichert/
-
https://www.amazon.com/author/andreaswichert
Instituto Superior Técnico - Universidade de Lisboa
Campus IST-Taguspark
Avenida Professor Cavaco Silva                 Phone: +351  214233231
2744-016 Porto Salvo, Portugal


On 13 Jan 2023, at 08:13, Schmidhuber Juergen <juergen at idsia.ch><mailto:juergen at idsia.ch> wrote:


Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey):
https://arxiv.org/abs/2212.11279
https://people.idsia.ch/~juergen/deep-learning-history.html
This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know under juergen at idsia.ch<mailto:juergen at idsia.ch> if you can spot any remaining error or have suggestions for improvements.
Happy New Year!
Jürgen


--
Stephen José Hanson
Professor, Psychology Department
Director, RUBIC (Rutgers University Brain Imaging Center)
Member, Executive Committee, RUCCS
