<div dir="ltr">I just tried it, Thomas, and it still says the same thing. I tried a bit more; here is the screenshot:<div><br></div><div><img src="cid:ii_lds69yxe0" alt="image.png" width="562" height="221"><br><div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div> </div><div>Regards,</div><div>Dr. M. Imad Khan</div></div></div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, 5 Feb 2023 at 23:02, Thomas Trappenberg <<a href="mailto:tt@cs.dal.ca">tt@cs.dal.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">My first question to ChatGPT<div dir="auto"><br></div><div dir="auto">Thomas: what is heavier, 1kg of lead or 2000g of feathers?</div><div dir="auto"><br></div><div dir="auto">ChatGPT: They both weigh the same, 1kg</div><div dir="auto"><br></div><div dir="auto">This makes total sense for a high-dimensional non-causal frequency table. “Lead, feathers, heavier, same” is likely a frequent correlation; 2000g (around 4.41 pounds for our American friends) does not really fit in. </div><div dir="auto"><br></div><div dir="auto">Cheers, Thomas </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Feb 5, 2023, 2:36 a.m. 
Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" rel="noreferrer noreferrer" target="_blank">gary.marcus@nyu.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="ltr"></div><div dir="ltr"><img alt="image"></div><div dir="ltr"><br></div><div dir="ltr">A bit more history, on the possibility that it might be of use to future students of our contentious AI times, and in the spirit of the Elvis quote below:</div><div dir="ltr"><br></div><div dir="ltr">2015: Gary Marcus writes, somewhat loosely, in a trade book (The Future of the Brain):</div><div dir="ltr"><blockquote type="cite">“<span>Hierarchies of features are less suited to challenges such as language, inference, and high-level planning. </span><span>For example, as Noam Chomsky famously pointed out, language is filled with sentences you haven't seen before. </span><span>Pure classifier systems don't know what to do with such sentences. The talent of feature detectors -- in identifying which member of some category something belongs to -- doesn't translate into understanding novel sentences, in which each sentence has its own unique meaning.”</span></blockquote></div><p>Sometime thereafter: Turing Award winner Geoff Hinton enshrines the quote on his own web page, with ridicule, as “My Favorite Gary Marcus quote”; people in the deep learning community circulate it on Facebook and Twitter, mocking Marcus.</p><div dir="ltr">October 2019: Geoff Hinton, based perhaps primarily on the quote, warns a crowd of researchers in Toronto not to waste their time listening to Marcus. (Hinton’s email bounces because it was sent from the wrong address.) Hinton’s view is that language has been solved by Google Translate; in his eyes, Marcus is a moron. 
</div><div dir="ltr"><br></div><div dir="ltr">[Almost three years pass; ridicule of Marcus continues on major social media]</div><div dir="ltr"><br></div><div dir="ltr">February 2023: Hinton’s fellow Turing Award winner Yann LeCun unleashes a Tweetstorm, saying that “<span style="color:rgb(15,20,25);font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;font-size:15px;white-space:pre-wrap">LLMs such as ChatGPT can eloquently spew complete nonsense. </span><span style="border:0px solid black;box-sizing:border-box;color:rgb(15,20,25);display:inline;font-stretch:inherit;font-size:15px;line-height:inherit;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;margin:0px;padding:0px;white-space:pre-wrap;min-width:0px">Their grasp of reality is very </span><span style="border:0px solid black;box-sizing:border-box;color:rgb(15,20,25);display:inline;font-stretch:inherit;font-size:15px;line-height:inherit;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;margin:0px;padding:0px;white-space:pre-wrap;min-width:0px">superficial” and that “</span><span style="color:rgb(15,20,25);font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;font-size:15px;white-space:pre-wrap"> [LLM] make very stupid mistakes of common-sense that a 4 year-old, a chimp, a dog, or a cat would never make. 
</span><span style="border:0px solid black;box-sizing:border-box;color:rgb(15,20,25);display:inline;font-stretch:inherit;font-size:15px;line-height:inherit;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;margin:0px;padding:0px;white-space:pre-wrap;min-width:0px">LLMs have a more </span><span style="border:0px solid black;box-sizing:border-box;color:rgb(15,20,25);display:inline;font-stretch:inherit;font-size:15px;line-height:inherit;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;margin:0px;padding:0px;white-space:pre-wrap;min-width:0px">superficial</span><span style="border:0px solid black;box-sizing:border-box;color:rgb(15,20,25);display:inline;font-stretch:inherit;font-size:15px;line-height:inherit;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;margin:0px;padding:0px;white-space:pre-wrap;min-width:0px"> understanding of the world than a house cat.”</span></div><div dir="ltr"><span style="border:0px solid black;box-sizing:border-box;color:rgb(15,20,25);display:inline;font-stretch:inherit;font-size:15px;line-height:inherit;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;margin:0px;padding:0px;white-space:pre-wrap;min-width:0px"><br></span></div><div dir="ltr"><span style="color:rgb(15,20,25);font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;font-size:15px;white-space:pre-wrap">Marcus receives many emails wondering whether LeCun has switched sides. 
On Twitter, people ask whether Marcus has hacked LeCun’s Twitter account.</span></div><div dir="ltr"><span style="border:0px solid black;box-sizing:border-box;color:rgb(15,20,25);display:inline;font-stretch:inherit;font-size:15px;line-height:inherit;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;margin:0px;padding:0px;white-space:pre-wrap;min-width:0px"><br></span></div><div dir="ltr"><span style="border:0px solid black;box-sizing:border-box;color:rgb(15,20,25);display:inline;font-stretch:inherit;font-size:15px;line-height:inherit;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif;margin:0px;padding:0px;white-space:pre-wrap;min-width:0px">The quote from Marcus, at the bottom of Hinton’s home page, remains.</span></div><div dir="ltr"><br></div><div dir="ltr"><img alt="IMG_3771"></div><div dir="ltr"><br></div><div dir="ltr"><br></div><div dir="ltr"><br><blockquote type="cite">On Feb 3, 2023, at 02:15, Schmidhuber Juergen <<a href="mailto:juergen@idsia.ch" rel="noreferrer noreferrer noreferrer" target="_blank">juergen@idsia.ch</a>> wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr"><span>PS: the weirdest thing is that later Minsky & Papert published a famous book (1969) [M69] that cited neither Amari’s SGD-based deep learning (1967-68) nor the original layer-by-layer deep learning (1965) by Ivakhnenko & Lapa [DEEP1-2][DL2]. </span><br><span></span><br><span>Minsky & Papert's book [M69] showed that shallow NNs without hidden layers are very limited. Duh! That’s exactly why people like Ivakhnenko & Lapa and Amari had earlier overcome this problem through _deep_ learning with many learning layers. </span><br><span></span><br><span>Minsky & Papert apparently were unaware of this. Unfortunately, even later they failed to correct their book [T22]. 
</span><br><span></span><br><span>Much later, others took this as an opportunity to promulgate a rather self-serving revisionist history of deep learning [S20][DL3][DL3a][T22] that simply ignored pre-Minsky deep learning.</span><br><span></span><br><span>However, as Elvis Presley put it, “Truth is like the sun. You can shut it out for a time, but it ain't goin' away.” [T22]</span><br><span></span><br><span>Juergen</span><br><span></span><br><span></span><br><span></span><br><blockquote type="cite"><span>On 26. Jan 2023, at 16:29, Schmidhuber Juergen <<a href="mailto:juergen@idsia.ch" rel="noreferrer noreferrer noreferrer" target="_blank">juergen@idsia.ch</a>> wrote:</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>And in 1967-68, the same Shun-Ichi Amari trained multilayer perceptrons (MLPs) with many layers by stochastic gradient descent (SGD) in end-to-end fashion. See Sec. 7 of the survey: <a href="https://people.idsia.ch/~juergen/deep-learning-history.html#2nddl" rel="noreferrer noreferrer noreferrer" target="_blank">https://people.idsia.ch/~juergen/deep-learning-history.html#2nddl</a> </span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Amari's implementation [GD2,GD2a] (with his student Saito) learned internal representations in a five-layer MLP with two modifiable layers, which was trained to classify non-linearly separable pattern classes. 
</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Back then compute was billions of times more expensive than today.   </span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>To my knowledge, this was the first implementation of learning internal representations through SGD-based deep learning. </span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>If anyone knows of an earlier one then please let me know :)</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Jürgen </span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><blockquote type="cite"><span>On 25. Jan 2023, at 16:44, Schmidhuber Juergen <<a href="mailto:juergen@idsia.ch" rel="noreferrer noreferrer noreferrer" target="_blank">juergen@idsia.ch</a>> wrote:</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Some are not aware of this historic tidbit in Sec. 
4 of the survey: half a century ago, Shun-Ichi Amari published a learning recurrent neural network (1972), which was later called the Hopfield network.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span><a href="https://people.idsia.ch/~juergen/deep-learning-history.html#rnn" rel="noreferrer noreferrer noreferrer" target="_blank">https://people.idsia.ch/~juergen/deep-learning-history.html#rnn</a> </span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Jürgen</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>On 13. 
Jan 2023, at 11:13, Schmidhuber Juergen <<a href="mailto:juergen@idsia.ch" rel="noreferrer noreferrer noreferrer" target="_blank">juergen@idsia.ch</a>> wrote:</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Machine learning is the science of credit assignment. My new survey credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 survey): </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span><a href="https://arxiv.org/abs/2212.11279" rel="noreferrer noreferrer noreferrer" target="_blank">https://arxiv.org/abs/2212.11279</a> </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span><a 
href="https://people.idsia.ch/~juergen/deep-learning-history.html" rel="noreferrer noreferrer noreferrer" target="_blank">https://people.idsia.ch/~juergen/deep-learning-history.html</a> </span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>This was already reviewed by several deep learning pioneers and other experts. 
Nevertheless, let me know at <a href="mailto:juergen@idsia.ch" rel="noreferrer noreferrer noreferrer" target="_blank">juergen@idsia.ch</a> if you can spot any remaining errors or have suggestions for improvement.</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Happy New Year!</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span>Jürgen</span><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote></blockquote><span></span><br><span></span><br></div></blockquote></div></blockquote></div>
</blockquote></div>