<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
I would say that Geoff's example still does not go beyond "Chinese room" capabilities. I have seen no reassuring formulation or evidence establishing whether there is anything beyond "Chinese room"-type feedforward intelligence.<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<span style="color: rgb(0, 0, 0); font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt;">András</span><br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<p style="background-color:rgb(255, 255, 255);margin:0in 0in 0.0001pt;font-size:11pt;font-family:Calibri, sans-serif">
<span style="margin:0px;font-family:Calibri, Helvetica, sans-serif">------------------------------------</span></p>
<div style="margin:0px;background-color:rgb(255, 255, 255)">Andras Lorincz<br>
</div>
<div style="margin:0px;background-color:rgb(255, 255, 255)">http://nipg.inf.elte.hu/<br>
</div>
<div style="margin:0px;background-color:rgb(255, 255, 255)">Fellow of the European Association for Artificial Intelligence<br>
</div>
<div style="margin:0px;background-color:rgb(255, 255, 255)">Department of Artificial Intelligence<br>
</div>
<div style="margin:0px;background-color:rgb(255, 255, 255)">Faculty of Informatics<br>
</div>
<div style="margin:0px;background-color:rgb(255, 255, 255)">Eotvos Lorand University<br>
</div>
<span style="margin:0px;background-color:rgb(255, 255, 255)">Budapest, Hungary</span><br>
</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Connectionists &lt;connectionists-bounces@mailman.srv.cs.cmu.edu&gt; on behalf of Gary Marcus &lt;gary.marcus@nyu.edu&gt;<br>
<b>Sent:</b> Thursday, February 3, 2022 5:25 PM<br>
<b>To:</b> Danko Nikolic &lt;danko.nikolic@gmail.com&gt;<br>
<b>Cc:</b> connectionists@mailman.srv.cs.cmu.edu &lt;connectionists@mailman.srv.cs.cmu.edu&gt;; AIhub &lt;aihuborg@gmail.com&gt;<br>
<b>Subject:</b> Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton</font>
<div> </div>
</div>
<div class="" style="word-wrap:break-word; line-break:after-white-space">Dear Danko,
<div class=""><br class="">
</div>
<div class="">Well said. I had a somewhat similar response to Jeff Dean’s 2021 TED talk, in which he said (paraphrasing from memory, because I don’t remember the precise words) that the famous 2012 Quoc Le unsupervised model [<a href="https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf" class="">https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf</a>]
 had learned the concept of a cat. In reality the model had clustered together some catlike images based on the image statistics that it had extracted, but it was a long way from a full, counterfactual-supporting concept of a cat, much as you describe below. </div>
<div class=""><br class="">
</div>
<div class="">I fully agree with you that the reason for even having a semantics is, as you put it, "to 1) learn with a few examples and 2) apply the knowledge to a broad set of situations.” GPT-3 sometimes gives the appearance of having done so, but it falls
 apart under close inspection, so the problem remains unsolved.</div>
<div class="">
<div class=""><br class="">
</div>
<div class="">Gary<br class="">
<div><br class="">
<blockquote type="cite" class="">
<div class="">On Feb 3, 2022, at 3:19 AM, Danko Nikolic <<a href="mailto:danko.nikolic@gmail.com" class="">danko.nikolic@gmail.com</a>> wrote:</div>
<br class="x_Apple-interchange-newline">
<div class="">
<div dir="ltr" class="">G. Hinton wrote: "I believe that any reasonable person would admit that if you ask a neural net to draw a picture of a hamster wearing a red hat and it draws such a picture, it understood the request."
<div class=""><br class="">
</div>
<div class="">I would like to suggest why drawing a hamster with a red hat does not necessarily imply understanding of the statement "hamster wearing a red hat".</div>
<div class="">To understand "hamster wearing a red hat" would mean inferring, in newly emerging situations involving this hamster, all the real-life implications that the red hat brings to the little animal.</div>
<div class=""><br class="">
</div>
<div class="">
<div class="">What would happen to the hat if the hamster rolls on its back? (Would the hat fall off?)</div>
</div>
<div class="">What would happen to the red hat when the hamster enters its lair? (Would the hat fall off?)</div>
<div class="">What would happen to that hamster when it goes foraging? (Would the red hat have an influence on finding food?)</div>
<div class="">What would happen in a situation of being chased by a predator? (Would it be easier for predators to spot the hamster?)</div>
<div class=""><br class="">
</div>
<div class="">...and so on.</div>
<div class=""><br class="">
</div>
<div class="">Countless questions can be asked. One has understood "hamster wearing a red hat" only if one can answer many such real-life questions reasonably well. Similarly, a student has understood the material in a class only if they can apply
 it in real-life situations (e.g., applying the Pythagorean theorem). If a student gives a correct answer to a multiple-choice question, we don't know whether the student understood the material or whether this was just rote learning (often, it is
 rote learning). <br class="">
</div>
<div class=""><br class="">
</div>
<div class="">I also suggest that understanding comes together with effective learning: We store new information in such a way that we can recall it later and use it effectively, i.e., make good inferences in newly emerging situations based on this knowledge.</div>
<div class=""><br class="">
</div>
<div class="">In short: Understanding makes us humans able to 1) learn with a few examples and 2) apply the knowledge to a broad set of situations. </div>
<div class=""><br class="">
</div>
<div class="">No neural network today has such capabilities and we don't know how to give them such capabilities. Neural networks need large amounts of training examples that cover a large variety of situations and then the networks can only deal with what
 the training examples have already covered. Neural networks cannot extrapolate in that 'understanding' sense.</div>
<div class=""><br class="">
</div>
<div class="">I suggest that understanding truly extrapolates from a piece of knowledge. It is not about satisfying a task such as translation between languages or drawing hamsters with hats. It is how you got the capability to complete the task: Did you only
 have a few examples that covered something different but related and then you extrapolated from that knowledge? If yes, this is going in the direction of understanding. Have you seen countless examples and then interpolated among them? Then perhaps it is not
 understanding.</div>
<div class=""><br class="">
</div>
<div class="">So, for the case of drawing a hamster wearing a red hat, understanding perhaps would have taken place if the following happened before that:</div>
<div class=""><br class="">
</div>
<div class="">1) first, the network learned about hamsters (not many examples)</div>
<div class="">2) after that the network learned about red hats (outside the context of hamsters and without many examples) </div>
<div class="">3) finally the network learned about drawing (outside of the context of hats and hamsters, not many examples)</div>
<div class=""><br class="">
</div>
<div class="">After that, the network is asked to draw a hamster with a red hat. If it does it successfully, maybe we have started cracking the problem of understanding.</div>
<div class=""><br class="">
</div>
<div class="">Note also that this requires the network to learn sequentially without exhibiting catastrophic forgetting of the previous knowledge, which is possibly also a consequence of learning by understanding in humans.</div>
<div class=""><br class="">
</div>
<div class=""><br class="">
</div>
<div class="">Danko</div>
<div class=""><br class="">
</div>
<div class=""> </div>
<div class=""><br class="">
</div>
<div class=""><br class="">
</div>
<div class=""><br class="">
</div>
<div class=""><br class="">
</div>
<div class="">
<div class="">
<div dir="ltr" class="x_gmail_signature">
<div dir="ltr" class="">Dr. Danko Nikolić<br class="">
<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__www.danko-2Dnikolic.com&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=HwOLDw6UCRzU5-FPSceKjtpNm7C6sZQU5kuGAMVbPaI&e=" target="_blank" class="">www.danko-nikolic.com</a><br class="">
<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.linkedin.com_in_danko-2Dnikolic_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=b70c8lokmxM3Kz66OfMIM4pROgAhTJOAlp205vOmCQ8&e=" target="_blank" class="">https://www.linkedin.com/in/danko-nikolic/</a>
<div class="">--- A progress usually starts with an insight ---</div>
</div>
</div>
</div>
<br class="">
</div>
</div>
<br class="">
<div class="x_gmail_quote">
<div dir="ltr" class="x_gmail_attr">On Thu, Feb 3, 2022 at 9:55 AM Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" class="">ASIM.ROY@asu.edu</a>> wrote:<br class="">
</div>
<blockquote class="x_gmail_quote" style="margin:0px 0px 0px 0.8ex; border-left:1px solid rgb(204,204,204); padding-left:1ex">
<div lang="EN-US" class="" style="">
<div class="x_gmail-m_-714825385617039341WordSection1">
<p class="x_MsoNormal">Without getting into the specific dispute between Gary and Geoff, I think with approaches similar to GLOM, we are finally headed in the right direction. There’s plenty of neurophysiological evidence for single-cell abstractions and multisensory
 neurons in the brain, which one might claim correspond to symbols. And I think we can finally reconcile the decades-old dispute between Symbolic AI and Connectionism.<u class=""></u><u class=""></u></p>
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
<p class="x_MsoNormal"><span class="" style="background:yellow">GARY: (Your GLOM, which as you know I praised publicly, is in many ways an effort to wind up with encodings that effectively serve as symbols in exactly that way, guaranteed to serve as consistent
 representations of specific concepts.)</span><u class=""></u><u class=""></u></p>
<p class="x_MsoNormal"><span class="" style="background:yellow">GARY: I have <i class="">
never</i> called for dismissal of neural networks, but rather for some hybrid between the two (as you yourself contemplated in 1991); the point of the 2001 book was to characterize exactly where multilayer perceptrons succeeded and broke down, and where symbols
 could complement them.</span><u class=""></u><u class=""></u></p>
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
<p class="x_MsoNormal">Asim Roy<u class=""></u><u class=""></u></p>
<p class="x_MsoNormal">Professor, Information Systems<u class=""></u><u class=""></u></p>
<p class="x_MsoNormal">Arizona State University<u class=""></u><u class=""></u></p>
<p class="x_MsoNormal"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=" target="_blank" class="">Lifeboat
 Foundation Bios: Professor Asim Roy</a><u class=""></u><u class=""></u></p>
<p class="x_MsoNormal"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=" target="_blank" class="">Asim
 Roy | iSearch (asu.edu)</a><u class=""></u><u class=""></u></p>
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
<div class="">
<div class="" style="border-right:none; border-bottom:none; border-left:none; border-top:1pt solid rgb(225,225,225); padding:3pt 0in 0in">
<p class="x_MsoNormal"><b class="">From:</b> Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" target="_blank" class="">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
<b class="">On Behalf Of </b>Gary Marcus<br class="">
<b class="">Sent:</b> Wednesday, February 2, 2022 1:26 PM<br class="">
<b class="">To:</b> Geoffrey Hinton <<a href="mailto:geoffrey.hinton@gmail.com" target="_blank" class="">geoffrey.hinton@gmail.com</a>><br class="">
<b class="">Cc:</b> AIhub <<a href="mailto:aihuborg@gmail.com" target="_blank" class="">aihuborg@gmail.com</a>>;
<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" target="_blank" class="">connectionists@mailman.srv.cs.cmu.edu</a><br class="">
<b class="">Subject:</b> Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton<u class=""></u><u class=""></u></p>
</div>
</div>
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
<div class="">
<div class="">
<p class="x_MsoNormal">Dear Geoff, and interested others,<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">What, for example, would you make of a system that often drew the red-hatted hamster you requested, and perhaps a fifth of the time gave you utter nonsense?  Or say one that you trained to create birds but sometimes output stuff like
 this:<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><span id="x_cid:17ebf223d8f4cff311">&lt;image001.png&gt;</span><u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">One could <u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">a. avert one’s eyes and deem the anomalous outputs irrelevant<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">or<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">b. wonder if it might be possible that sometimes the system gets the right answer for the wrong reasons (e.g., partial historical contingency), and wonder whether another approach might be indicated.<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">Benchmarks are harder than they look; most of the field has come to recognize that. The Turing Test has turned out to be a lousy measure of intelligence, easily gamed. It has turned out empirically that the Winograd Schema Challenge did
 not measure common sense as well as Hector might have thought. (As it happens, I am a minor coauthor of a very recent review on this very topic: <a href="https://urldefense.com/v3/__https:/arxiv.org/abs/2201.02387__;!!IKRxdwAv5BmarQ!INA0AMmG3iD1B8MDtLfjWCwcBjxO-e-eM2Ci9KEO_XYOiIEgiywK-G_8j6L3bHA$" target="_blank" class="">https://arxiv.org/abs/2201.02387</a>)
 But its conquest in no way means machines now have common sense; many people from many different perspectives recognize that (including, e.g., Yann LeCun, who generally tends to be more aligned with you than with me).<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">So: on the goalpost of the Winograd schema, I was wrong, and you can quote me; but what you said about me and machine translation remains your invention, and it is inexcusable that you simply ignored my 2019 clarification. On the essential
 goal of trying to reach meaning and understanding, I remain unmoved; the problem remains unsolved. <u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">All of the problems LLMs have with coherence, reliability, truthfulness, misinformation, etc stand witness to that fact. (Their persistent inability to filter out toxic and insulting remarks stems from the same.) I am hardly the only
 person in the field to see that progress on any given benchmark does not inherently mean that the deep underlying problems have been solved. You, yourself, in fact, have occasionally made that point. <u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">With respect to embeddings: Embeddings are very good for natural language
<i class="">processing</i>; but NLP is not the same as NL<i class="">U</i> – when it comes to
<i class="">understanding</i>, their worth is still an open question. Perhaps they will turn out to be necessary; they clearly aren’t sufficient. In their extreme, they might even collapse into being symbols, in the sense of uniquely identifiable encodings,
 akin to the ASCII code, in which a specific set of numbers stands for a specific word or concept. (Wouldn’t that be ironic?)<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">(Your GLOM, which as you know I praised publicly, is in many ways an effort to wind up with encodings that effectively serve as symbols in exactly that way, guaranteed to serve as consistent representations of specific concepts.)<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">Notably absent from your email is any kind of apology for misrepresenting my position. It’s one thing to say that “many people thirty years ago thought X” and another to say “Gary Marcus said X in 2015”, when I didn’t. I have consistently
 felt throughout our interactions that you have mistaken me for Zenon Pylyshyn; indeed, you once (at NeurIPS 2014) apologized to me for having made that error. I am still not he. <u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">Which maybe connects to the last point; if you read my work, you would see thirty years of arguments
<i class="">for</i> neural networks, just not in the way that you want them to exist. I have ALWAYS argued that there is a role for them;  characterizing me as a person “strongly opposed to neural networks” misses the whole point of my 2001 book, which was
 subtitled “Integrating Connectionism and Cognitive Science.”<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">In the last two decades or so you have insisted (for reasons you have never fully clarified, so far as I know) on abandoning symbol-manipulation, but the reverse is not the case: I have
<i class="">never</i> called for dismissal of neural networks, but rather for some hybrid between the two (as you yourself contemplated in 1991); the point of the 2001 book was to characterize exactly where multilayer perceptrons succeeded and broke down, and
 where symbols could complement them. It’s a rhetorical trick (which is what the previous thread was about) to pretend otherwise.<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">Gary<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<blockquote class="" style="margin-top:5pt; margin-bottom:5pt">
<p class="x_MsoNormal" style="margin-bottom:12pt">On Feb 2, 2022, at 11:22, Geoffrey Hinton <<a href="mailto:geoffrey.hinton@gmail.com" target="_blank" class="">geoffrey.hinton@gmail.com</a>> wrote:<u class=""></u><u class=""></u></p>
</blockquote>
</div>
<blockquote class="" style="margin-top:5pt; margin-bottom:5pt">
<div class="">
<p class="x_MsoNormal"><u class=""></u><u class=""></u></p>
<div class="">
<p class="x_MsoNormal">Embeddings are just vectors of soft feature detectors and they are very good for NLP.  The quote on my webpage from Gary's 2015 chapter implies the opposite.<u class=""></u><u class=""></u></p>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">A few decades ago, everyone I knew then would have agreed that the ability to translate a sentence into many different languages was strong evidence that you understood it.<u class=""></u><u class=""></u></p>
</div>
</div>
</div>
</blockquote>
<p class="x_MsoNormal"><br class="">
<br class="">
<u class=""></u><u class=""></u></p>
<blockquote class="" style="margin-top:5pt; margin-bottom:5pt">
<div class="">
<div class="">
<div class="">
<p class="x_MsoNormal">But once neural networks could do that, their critics moved the goalposts. An exception is Hector Levesque who defined the goalposts more sharply by saying that the ability to get pronoun references correct in Winograd sentences is a
 crucial test. Neural nets are improving at that but still have some way to go. Will Gary agree that when they can get pronoun references correct in Winograd sentences they really do understand? Or does he want to reserve the right to weasel out of that too?<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">Some people, like Gary, appear to be strongly opposed to neural networks because they do not fit their preconceived notions of how the mind should work.<u class=""></u><u class=""></u></p>
</div>
<div class="">
<div class="">
<p class="x_MsoNormal">I believe that any reasonable person would admit that if you ask a neural net to draw a picture of a hamster wearing a red hat and it draws such a picture, it understood the request.<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">Geoff<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
</div>
</div>
</div>
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
<div class="">
<div class="">
<p class="x_MsoNormal">On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" target="_blank" class="">gary.marcus@nyu.edu</a>> wrote:<u class=""></u><u class=""></u></p>
</div>
<blockquote class="" style="border-top:none; border-right:none; border-bottom:none; border-left:1pt solid rgb(204,204,204); padding:0in 0in 0in 6pt; margin-left:4.8pt; margin-right:0in">
<div class="">
<div class="">
<div class="">
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Dear AI Hub, cc: Steven Hanson and Geoffrey Hinton, and the larger neural network community,</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">There has been a lot of recent discussion on this list about framing and scientific integrity. Often the first step in restructuring narratives is to bully
 and dehumanize critics. The second is to misrepresent their position. People in positions of power are sometimes tempted to do this.</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">The Hinton-Hanson interview that you just published is a real-time example of just that. It opens with a needless and largely content-free personal attack
 on a single scholar (me), with the explicit intention of discrediting that person. Worse, the only substantive thing it says is false.</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Hinton says “In 2015 he [Marcus] made a prediction that computers wouldn’t be able to do machine translation.”</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">I never said any such thing. </span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">What I predicted, rather, was that multilayer perceptrons, as they existed then, would not (on their own, absent other mechanisms) </span><i class=""><span class="" style="font-size:13pt; font-family:UICTFontTextStyleItalicBody,serif">understand</span></i><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif"> language.
 Seven years later, they still haven’t, except in the most superficial way.   </span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">I made no comment whatsoever about machine translation, which I view as a separate problem, solvable to a certain degree by correspondence without semantics. </span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">I specifically tried to clarify Hinton’s confusion in 2019, but, disappointingly, he has continued to purvey misinformation despite that clarification. Here
 is what I wrote privately to him then, which should have put the matter to rest:</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="" style="margin-left:27pt; font-stretch:normal">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">You have taken a single out-of-context quote [from 2015] and misrepresented it. The quote, which you have prominently displayed at the bottom of your own web
 page, says:</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="" style="margin-left:27pt; font-stretch:normal; min-height:22.9px">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="" style="margin-left:0.75in; font-stretch:normal">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Hierarchies of features are less suited to challenges such as language, inference, and high-level planning. For example, as Noam Chomsky famously pointed out,
 language is filled with sentences you haven't seen before. Pure classifier systems don't know what to do with such sentences. The talent of feature detectors -- in  identifying which member of some category something belongs to -- doesn't translate into understanding
 novel  sentences, in which each sentence has its own unique meaning. </span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="" style="margin-left:27pt; font-stretch:normal; min-height:22.9px">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="" style="margin-left:27pt; font-stretch:normal">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">It does </span><i class=""><span class="" style="font-size:13pt; font-family:UICTFontTextStyleItalicBody,serif">not</span></i><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif"> say
 "neural nets would not be able to deal with novel sentences"; it says that hierachies of features detectors (on their own, if you read the context of the essay) would have trouble </span><i class=""><span class="" style="font-size:13pt; font-family:UICTFontTextStyleItalicBody,serif">understanding </span></i><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">novel sentences.
  </span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="" style="margin-left:27pt; font-stretch:normal; min-height:22.9px">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="" style="margin-left:27pt; font-stretch:normal">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Google Translate does yet not </span><i class=""><span class="" style="font-size:13pt; font-family:UICTFontTextStyleItalicBody,serif">understand</span></i><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif"> the
 content of the sentences is translates. It cannot reliably answer questions about who did what to whom, or why, it cannot infer the order of the events in paragraphs, it can't determine the internal consistency of those events, and so forth.</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Since then, a number of scholars, such as the the computational linguist Emily Bender, have made similar points, and indeed current LLM difficulties with misinformation,
 incoherence and fabrication all follow from these concerns. Quoting from Bender’s prizewinning 2020 ACL article on the matter with Alexander Koller, <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aclanthology.org_2020.acl-2Dmain.463.pdf&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=xnFSVUARkfmiXtiTP_uXfFKv4uNEGgEeTluRFR7dnUpay2BM5EiLz-XYCkBNJLlL&s=K-Vl6vSvzuYtRMi-s4j7mzPkNRTb-I6Zmf7rbuKEBpk&e=" target="_blank" class="">https://aclanthology.org/2020.acl-main.463.pdf</a>,
 also emphasizing issues of understanding and meaning:</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="" style="margin-left:27pt; font-stretch:normal">
<p class="x_MsoNormal"><i class=""><span class="" style="font-size:13pt; font-family:UICTFontTextStyleItalicBody,serif">The success of the large neural language models on many NLP tasks is exciting. However, we find that these successes sometimes lead to hype
 in which these models are being described as “understanding” language or capturing “meaning”. In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. .. a clear understanding of the distinction between form
 and meaning will help guide the field towards better science around natural language understanding. </span></i><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Her later article with Gebru on language models “stochastic parrots” is in some ways an extension of this point; machine translation requires mimicry, true
 understanding (which is what I was discussing in 2015) requires something deeper than that. </span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Hinton’s intellectual error here is in equating machine translation with the deeper comprehension that robust natural language understanding will require;
 as Bender and Koller observed, the two appear not to be the same. (There is a longer discussion of the relation between language understanding and machine translation, and why the latter has turned out to be more approachable than the former, in my 2019 book
 with Ernest Davis).</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">More broadly, Hinton’s ongoing dismissiveness of research from perspectives other than his own (e.g. linguistics) have done the field a disservice. </span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">As Herb Simon once observed, science does not have to be zero-sum.</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt"><u class=""></u> <u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Sincerely,</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Gary Marcus</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">Professor Emeritus</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-size:13pt; font-family:UICTFontTextStyleBody,serif">New York University</span><span class="" style="font-size:13pt"><u class=""></u><u class=""></u></span></p>
</div>
</div>
<div class="">
<p class="x_MsoNormal"><br class="">
<br class="">
<u class=""></u><u class=""></u></p>
<blockquote class="" style="margin-top:5pt; margin-bottom:5pt">
<p class="x_MsoNormal" style="margin-bottom:12pt">On Feb 2, 2022, at 06:12, AIhub <<a href="mailto:aihuborg@gmail.com" target="_blank" class="">aihuborg@gmail.com</a>> wrote:<u class=""></u><u class=""></u></p>
</blockquote>
</div>
<blockquote class="" style="margin-top:5pt; margin-bottom:5pt">
<div class="">
<p class="x_MsoNormal"><u class=""></u><u class=""></u></p>
<div class="">
<div class="">
<p class="x_MsoNormal">Stephen Hanson in conversation with Geoff Hinton<u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">In the latest episode of this video series for <a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__AIhub.org&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=xnFSVUARkfmiXtiTP_uXfFKv4uNEGgEeTluRFR7dnUpay2BM5EiLz-XYCkBNJLlL&s=eOtzMh8ILIH5EF7K20Ks4Fr27XfNV_F24bkj-SPk-2A&e=" target="_blank" class="">
AIhub.org</a>, Stephen Hanson talks to  Geoff Hinton about neural networks, backpropagation, overparameterization, digit recognition, voxel cells, syntax and semantics, Winograd sentences, and more.<u class=""></u><u class=""></u></p>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal">You can watch the discussion, and read the transcript, here:<br clear="all" class="">
<u class=""></u><u class=""></u></p>
<div class="">
<p class="x_MsoNormal"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aihub.org_2022_02_02_what-2Dis-2Dai-2Dstephen-2Dhanson-2Din-2Dconversation-2Dwith-2Dgeoff-2Dhinton_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=OY_RYGrfxOqV7XeNJDHuzE--aEtmNRaEyQ0VJkqFCWw&e=" target="_blank" class="">https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/</a><u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><u class=""></u> <u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-family:Arial,sans-serif">About AIhub: </span><u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-family:Arial,sans-serif">AIhub is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information through
<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__AIhub.org&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=xnFSVUARkfmiXtiTP_uXfFKv4uNEGgEeTluRFR7dnUpay2BM5EiLz-XYCkBNJLlL&s=eOtzMh8ILIH5EF7K20Ks4Fr27XfNV_F24bkj-SPk-2A&e=" target="_blank" class="">
AIhub.org</a> (</span><span class="" style="font-family:Arial,sans-serif"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aihub.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=IKFanqeMi73gOiS7yD-X_vRx_OqDAwv1Il5psrxnhIA&e=" target="_blank" class="">https://aihub.org/</a><span class="" style="">).
 We help researchers publish the latest AI news, summaries of their work, opinion pieces, tutorials and more.  We are supported by many leading scientific organizations in AI, namely
</span><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aaai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=wBvjOWTzEkbfFAGNj9wOaiJlXMODmHNcoWO5JYHugS0&e=" target="_blank" class=""><span class="" style="">AAAI</span></a><span class="" style="">,
</span><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__neurips.cc_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=3-lOHXyu8171pT_UE9hYWwK6ft4I-cvYkuX7shC00w0&e=" target="_blank" class=""><span class="" style="">NeurIPS</span></a><span class="" style="">,
</span><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__icml.cc_imls_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=JJyjwIpPy9gtKrZzBMbW3sRMh3P3Kcw-SvtxG35EiP0&e=" target="_blank" class=""><span class="" style="">ICML</span></a><span class="" style="">,
</span><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=" target="_blank" class=""><span class="" style="">AIJ</span></a><span class="" style="">/</span><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=" target="_blank" class=""><span class="" style="">IJCAI</span></a><span class="" style="">,
</span><a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__sigai.acm.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=7rC6MJFaMqOms10EYDQwfnmX-zuVNhu9fz8cwUwiLGQ&e=" target="_blank" class=""><span class="" style="">ACM
 SIGAI</span></a><span class="" style="">, EurAI/AICOMM, </span><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__claire-2Dai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=66ZofDIhuDba6Fb0LhlMGD3XbBhU7ez7dc3HD5-pXec&e=" target="_blank" class=""><span class="" style="">CLAIRE</span></a><span class="" style="">
 and </span><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.robocup.org__&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=bBI6GRq--MHLpIIahwoVN8iyXXc7JAeH3kegNKcFJc0&e=" target="_blank" class=""><span class="" style="">RoboCup</span></a><span class="" style="">.</span></span><u class=""></u><u class=""></u></p>
</div>
<div class="">
<p class="x_MsoNormal"><span class="" style="font-family:Arial,sans-serif">Twitter: @aihuborg</span><u class=""></u><u class=""></u></p>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
<div id="x_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2" class=""><br class="">
<table class="" style="border-top:1px solid #d3d4de">
<tbody class="">
<tr class="">
<td class="" style="width:55px; padding-top:13px"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.avast.com_sig-2Demail-3Futm-5Fmedium-3Demail-26utm-5Fsource-3Dlink-26utm-5Fcampaign-3Dsig-2Demail-26utm-5Fcontent-3Dwebmail&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=Ao9QQWtO62go0hx1tb3NU6xw2FNBadjj8q64-hl5Sx4&e=" target="_blank" class=""><img alt="" width="46" height="29" class="" style="width:46px; height:29px" src="https://ipmcdn.avast.com/images/icons/icon-envelope-tick-round-orange-animated-no-repeat-v1.gif"></a></td>
<td class="" style="width:470px; padding-top:12px; color:#41424e; font-size:13px; font-family:Arial,Helvetica,sans-serif; line-height:18px">
Virus-free. <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.avast.com_sig-2Demail-3Futm-5Fmedium-3Demail-26utm-5Fsource-3Dlink-26utm-5Fcampaign-3Dsig-2Demail-26utm-5Fcontent-3Dwebmail&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=Ao9QQWtO62go0hx1tb3NU6xw2FNBadjj8q64-hl5Sx4&e=" target="_blank" class="" style="color:#4453ea">
www.avast.com</a> </td>
</tr>
</tbody>
</table>
<a href="" width="1" height="1" class=""></a></div>
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
</div>
</body>
</html>