Connectionists: Annotated History of Modern AI and Deep Learning

Asim Roy ASIM.ROY at asu.edu
Wed Jan 18 22:50:10 EST 2023


Responding to Stephen José Hanson’s comment:



“Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. there will be no merging of symbols and NN in the near or distant future, because it would be useless.”



Steve, even a simple CNN that recognizes just cats and dogs sends an output signal that is symbolic. So a basic NN classifier is already a neuro-symbolic system; such systems exist, contrary to your statement above. One of the next steps is to extract more symbolic information from these systems. That’s what you find in Hinton’s GLOM approach: finding parts of objects. Once you find those parts, which essentially correspond to certain abstractions (e.g., a leg or an eye of a cat), you can then transmit that information in symbolic form to whoever is receiving the whole-object information. Beyond GLOM, many other methods in computer vision are trying to do the same thing, that is, extract part information; I can send you references if you want. So neuro-symbolic work is already happening, contrary to what you are saying. Gary’s reference to the IBM conference indicates this is an emerging topic in AI. In addition, DARPA’s conception of Explainable AI (https://www.darpa.mil/program/explainable-artificial-intelligence) was also neuro-symbolic, as shown in the figure below. The idea is to identify objects based on their parts: the figure says that it’s a cat because it has fur, whiskers, and claws, plus an unlabeled visual feature.
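To make the first point concrete, here is a minimal sketch (label names and scores are hypothetical, not from any particular model) of how the final step of any NN classifier is already a neural-to-symbolic conversion: continuous activations are collapsed into a discrete symbol that any downstream system can consume.

```python
# Hypothetical class vocabulary for a toy cat/dog classifier.
LABELS = ["cat", "dog"]

def to_symbol(logits):
    """Collapse continuous network outputs (logits) into a discrete symbol."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return LABELS[best]

# Pretend these scores came out of a trained CNN's final layer.
print(to_symbol([2.3, -0.7]))  # -> cat
```

The same argmax-then-name step applies to part detectors: each detected part ("fur", "whiskers") becomes a symbol that can be transmitted alongside the whole-object label.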



Below are also two figures from Doran et al. (2017) that explain how a neuro-symbolic system would work. The second figure, Fig. 5, shows how a reasoning system might work, and that’s very similar to how we reason in our heads. Hope this helps.
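For readers without the figure at hand, the reasoning step can be caricatured as rules applied to part predicates that a vision network might extract (the rules and predicate names below are illustrative, not taken from Doran et al.):

```python
# Illustrative symbolic rules over part predicates a NN might emit.
RULES = {
    "cat": {"fur", "whiskers", "claws"},
    "dog": {"fur", "floppy_ears", "snout"},
}

def classify(parts):
    """Return every label whose required parts are all present."""
    parts = set(parts)
    return [label for label, required in RULES.items() if required <= parts]

print(classify(["fur", "whiskers", "claws"]))  # -> ['cat']
```

The point is only that once part information is symbolic, ordinary rule-based reasoning can run on top of the network's output.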



The way forward is indeed neuro-symbolic, as Gary said, and it’s happening now, with perhaps Hinton’s GLOM showing the way.

Doran, D., Schulz, S., & Besold, T. R. (2017). What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:1710.00794.

Asim Roy
Professor, Information Systems
Arizona State University
Lifeboat Foundation bio: https://lifeboat.com/ex/bios.asim.roy
ASU iSearch profile: https://isearch.asu.edu/profile/9973


[Figure: DARPA Explainable AI diagram (image001.png)]

[Figure: Doran et al. 2017 (image002.png)]

[Figure: Doran et al. 2017, Fig. 5 (image003.png)]


======================================================================================================================================================

Gary,

"vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here"


As usual you are distorting the point here. What Juergen is chronicling is WORKING AI (the big bang aside for a moment), and I think we do agree on some of the LLM nonsense that is in a hyperbolic loop at this point.

But AI from the 70s frankly failed, including NN. Expert systems, the apex application, couldn't even suggest decent wines.
Language understanding, planning, etc.: please point us to the working systems you are talking about. These things are broken. Why would we try to blend broken systems with a classifier that has human to super-human classification accuracy? What would it do? Pick up that last 1% of error? Explain the VGG? We don't know how these DLs work in any case... good luck on that! (See comments on this topic with Yann and me in the recent WIAS series!)

Frankly, the last gasp of AI in the 70s was the US gov 5th-generation response in Austin, Texas: MCC (launched in the early 80s), after shaking down hundreds of companies for $1M a year and plowing all the monies into reasoning, planning, and NL knowledge representation. Oh yeah, Doug Lenat, who predicted every year we went down there that CYC would become intelligent in 2001! Maybe 2010! I was part of the group from Bell Labs that was supposed to provide analysis and harvest the AI fiesta each year... there was nothing. What survived of CYC and the NL and reasoning breakthroughs? Nothing. Nothing survived this money party.

So here we are, where NN comes back (just as CYC was to burst into intelligence!) under rather unlikely and seemingly marginal tweaks to the NN backprop algo, and works pretty much daily with breakthroughs... ignoring LLMs for the moment, which I believe are likely to crash in on themselves.

Nonetheless, as you can guess, I am countering your claim: your prediction is not going to happen.. there will be no merging of symbols and NN in the near or distant future, because it would be useless.

Best,

Steve
On 1/14/23 07:04, Gary Marcus wrote:

Dear Juergen,



You have made a good case that the history of deep learning is often misrepresented. But, by parity of reasoning, a few pointers to a tiny fraction of the work done in symbolic AI does not in any way make this a thorough and balanced exercise with respect to the field as a whole.



I am 100% with Andrzej Wichert, in thinking that vast areas of AI such as planning, reasoning, natural language understanding, robotics and knowledge representation are treated very superficially here. A few pointers to theorem proving and the like does not solve that.



Your essay is a fine if opinionated history of deep learning, with a special emphasis on your own work, but of somewhat limited value beyond a few terse references in explicating other approaches to AI. This would be ok if the title and aspiration didn’t aim for the field as a whole; if you really want the paper to reflect the field as a whole, and the ambitions of the title, you have more work to do.



My own hunch is that in a decade, maybe much sooner, a major emphasis of the field will be on neurosymbolic integration. Your own startup is heading in that direction, and the commercial desire to make LLMs reliable and truthful will also push in that direction.

Historians looking back on this paper will see too little about the roots of that trend documented here.



Gary



On Jan 14, 2023, at 12:42 AM, Schmidhuber Juergen <juergen at idsia.ch> wrote:



Dear Andrzej, thanks, but come on, the report cites lots of “symbolic” AI, from theorem proving (e.g., Zuse 1948) to later surveys of expert systems and “traditional” AI. Note that Sec. 18 and Sec. 19 go back much further in time (not even speaking of Sec. 20). The survey also explains why AI histories written in the 1980s/2000s/2020s differ. Here again the table of contents:



Sec. 1: Introduction

Sec. 2: 1676: The Chain Rule For Backward Credit Assignment

Sec. 3: Circa 1800: First Neural Net (NN) / Linear Regression / Shallow Learning

Sec. 4: 1920-1925: First Recurrent NN (RNN) Architecture. ~1972: First Learning RNNs

Sec. 5: 1958: Multilayer Feedforward NN (without Deep Learning)

Sec. 6: 1965: First Deep Learning

Sec. 7: 1967-68: Deep Learning by Stochastic Gradient Descent

Sec. 8: 1970: Backpropagation. 1982: For NNs. 1960: Precursor.

Sec. 9: 1979: First Deep Convolutional NN (1969: Rectified Linear Units)

Sec. 10: 1980s-90s: Graph NNs / Stochastic Delta Rule (Dropout) / More RNNs / Etc

Sec. 11: Feb 1990: Generative Adversarial Networks / Artificial Curiosity / NN Online Planners

Sec. 12: April 1990: NNs Learn to Generate Subgoals / Work on Command

Sec. 13: March 1991: NNs Learn to Program NNs. Transformers with Linearized Self-Attention

Sec. 14: April 1991: Deep Learning by Self-Supervised Pre-Training. Distilling NNs

Sec. 15: June 1991: Fundamental Deep Learning Problem: Vanishing/Exploding Gradients

Sec. 16: June 1991: Roots of Long Short-Term Memory / Highway Nets / ResNets

Sec. 17: 1980s-: NNs for Learning to Act Without a Teacher

Sec. 18: It's the Hardware, Stupid!

Sec. 19: But Don't Neglect the Theory of AI (Since 1931) and Computer Science

Sec. 20: The Broader Historic Context from Big Bang to Far Future

Sec. 21: Acknowledgments

Sec. 22: 555+ Partially Annotated References (many more in the award-winning survey [DL1])



Tweet: https://twitter.com/SchmidhuberAI/status/1606333832956973060



Jürgen











On 13. Jan 2023, at 14:40, Andrzej Wichert <andreas.wichert at tecnico.ulisboa.pt> wrote:

Dear Juergen,

You make the same mistake that was made in the early 1970s: you identify deep learning with modern AI. The paper should instead be called “Annotated History of Deep Learning”.

Otherwise, you ignore symbolic AI (search, production systems, knowledge representation, planning, etc.), as if it were not part of AI anymore (as suggested by your title).

Best,

Andreas

--------------------------------------------------------------------------------------------------

Prof. Auxiliar Andreas Wichert

http://web.tecnico.ulisboa.pt/andreas.wichert/

-

https://www.amazon.com/author/andreaswichert

Instituto Superior Técnico - Universidade de Lisboa

Campus IST-Taguspark

Avenida Professor Cavaco Silva        Phone: +351 214233231

2744-016 Porto Salvo, Portugal

On 13 Jan 2023, at 08:13, Schmidhuber Juergen <juergen at idsia.ch> wrote:

Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey):

https://arxiv.org/abs/2212.11279

https://people.idsia.ch/~juergen/deep-learning-history.html

This was already reviewed by several deep learning pioneers and other experts. Nevertheless, let me know at juergen at idsia.ch if you can spot any remaining error or have suggestions for improvements.

Happy New Year!

Jürgen



--

Stephen José Hanson

Professor, Psychology Department

Director, RUBIC (Rutgers University Brain Imaging Center)

Member, Executive Committee, RUCCS
