<div dir="ltr">Embeddings are just vectors of soft feature detectors and they are very good for NLP.  The quote on my webpage from Gary's 2015 chapter implies the opposite.<div><br></div><div>A few decades ago, everyone I knew then would have agreed that the ability to translate a sentence into many different languages was strong evidence that you understood it.</div><div>But once neural networks could do that, their critics moved the goalposts. An exception is Hector Levesque who defined the goalposts more sharply by saying that the ability to get pronoun references correct in Winograd sentences is a crucial test. Neural nets are improving at that but still have some way to go. Will Gary agree that when they can get pronoun references correct in Winograd sentences they really do understand? Or does he want to reserve the right to weasel out of that too?</div><div><br></div><div>Some people, like Gary, appear to be strongly opposed to neural networks because they do not fit their preconceived notions of how the mind should work.<br></div><div><div>I believe that any reasonable person would admit that if you ask a neural net to draw a picture of a hamster wearing a red hat and it draws such a picture, it understood the request.</div><div><br></div><div>Geoff</div><div><br></div><div><br></div><div><br><div><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <<a href="mailto:gary.marcus@nyu.edu">gary.marcus@nyu.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;"><div dir="auto"><div dir="ltr"></div><div dir="ltr"><div style="margin:0px;font-stretch:normal;font-size:17.5px;line-height:normal"><span style="font-family:UICTFontTextStyleBody;font-size:17.46px">Dear AI Hub, cc: Steven Hanson and Geoffrey Hinton, and the larger neural network community,</span></div><div style="margin:0px;font-stretch:normal;font-size:17.5px;line-height:normal;min-height:22.9px"><span style="font-family:UICTFontTextStyleBody;font-size:17.46px"></span><br></div><div style="margin:0px;font-stretch:normal;font-size:17.5px;line-height:normal"><span style="font-family:UICTFontTextStyleBody;font-size:17.46px">There has been a lot of recent discussion on this list about framing and scientific integrity. Often the first step in restructuring narratives is to bully and dehumanize critics. The second is to misrepresent their position. People in positions of power are sometimes tempted to do this.</span></div><div style="margin:0px;font-stretch:normal;font-size:17.5px;line-height:normal;min-height:22.9px"><span style="font-family:UICTFontTextStyleBody;font-size:17.46px"></span><br></div><div style="margin:0px;font-stretch:normal;font-size:17.5px;line-height:normal"><span style="font-family:UICTFontTextStyleBody;font-size:17.46px">The Hinton-Hanson interview that you just published is a real-time example of just that. It opens with a needless and largely content-free personal attack on a single scholar (me), with the explicit intention of discrediting that person. 
Worse, the only substantive thing it says is false.

Hinton says “In 2015 he [Marcus] made a prediction that computers wouldn’t be able to do machine translation.”

I never said any such thing.

What I predicted, rather, was that multilayer perceptrons, as they existed then, would not (on their own, absent other mechanisms) *understand* language. Seven years later, they still haven’t, except in the most superficial way.

I made no comment whatsoever about machine translation, which I view as a separate problem, solvable to a certain degree by correspondence without semantics.

I specifically tried to clarify Hinton’s confusion in 2019, but, disappointingly, he has continued to purvey misinformation despite that clarification. Here is what I wrote privately to him then, which should have put the matter to rest:

    You have taken a single out-of-context quote [from 2015] and misrepresented it.
    The quote, which you have prominently displayed at the bottom of your own web page, says:

        Hierarchies of features are less suited to challenges such as language, inference, and high-level planning. For example, as Noam Chomsky famously pointed out, language is filled with sentences you haven't seen before. Pure classifier systems don't know what to do with such sentences. The talent of feature detectors -- in identifying which member of some category something belongs to -- doesn't translate into understanding novel sentences, in which each sentence has its own unique meaning.

    It does *not* say "neural nets would not be able to deal with novel sentences"; it says that hierarchies of feature detectors (on their own, if you read the context of the essay) would have trouble *understanding* novel sentences.

    Google Translate does not yet *understand* the content of the sentences it translates. It cannot reliably answer questions about who did what to whom, or why; it cannot infer the order of the events in paragraphs; it cannot determine the internal consistency of those events; and so forth.

Since then, a number of scholars, such as the computational linguist Emily Bender, have made similar points, and indeed current LLM difficulties with misinformation, incoherence and fabrication all follow from these concerns.
Quoting from Bender’s prizewinning 2020 ACL article on the matter with Alexander Koller, https://aclanthology.org/2020.acl-main.463.pdf, which also emphasizes issues of understanding and meaning:

    The success of the large neural language models on many NLP tasks is exciting. However, we find that these successes sometimes lead to hype in which these models are being described as “understanding” language or capturing “meaning”. In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. ... a clear understanding of the distinction between form and meaning will help guide the field towards better science around natural language understanding.

Her later article with Gebru on language models as “stochastic parrots” is in some ways an extension of this point; machine translation requires mimicry, but true understanding (which is what I was discussing in 2015) requires something deeper than that.

Hinton’s intellectual error here is in equating machine translation with the deeper comprehension that robust natural language understanding will require; as Bender and Koller observed, the two appear not to be the same. (There is a longer discussion of the relation between language understanding and machine translation, and why the latter has turned out to be more approachable than the former, in my 2019 book with Ernest Davis.)

More broadly, Hinton’s ongoing dismissiveness of research from perspectives other than his own (e.g. linguistics) has done the field a disservice.

As Herb Simon once observed, science does not have to be zero-sum.

Sincerely,
Gary Marcus
Professor Emeritus
New York University


On Feb 2, 2022, at 06:12, AIhub <aihuborg@gmail.com> wrote:

Stephen Hanson in conversation with Geoff Hinton

In the latest episode of this video series for AIhub.org, Stephen Hanson talks to Geoff Hinton about neural networks, backpropagation, overparameterization, digit recognition, voxel cells, syntax and semantics, Winograd sentences, and more.

You can watch the discussion, and read the transcript, here:
https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/

About AIhub:
AIhub is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information through AIhub.org (https://aihub.org/). We help researchers publish the latest AI news, summaries of their work, opinion pieces, tutorials and more.
We are supported by many leading scientific organizations in AI, namely AAAI, NeurIPS, ICML, AIJ/IJCAI, ACM SIGAI, EurAI/AICOMM, CLAIRE and RoboCup.
Twitter: @aihuborg