<div dir="ltr"><div dir="ltr"><div>This can still be improved on. Always happy to cite relevant predecessors.</div><div><br></div><div>Danko</div><div><br></div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Dr. Danko Nikolić<br><a href="http://www.danko-nikolic.com" target="_blank">www.danko-nikolic.com</a><br><a href="https://www.linkedin.com/in/danko-nikolic/" target="_blank">https://www.linkedin.com/in/danko-nikolic/</a><div><span style="color:rgb(34,34,34)">-- I wonder, how is the brain able to generate insight? --</span><br></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jul 14, 2022 at 2:03 PM Miroslav Karny <<a href="mailto:school@utia.cas.cz">school@utia.cas.cz</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><font face="Arial"><span style="font-size:10pt">Dear all,</span></font><div style="font-family:Arial;font-size:10pt"><br></div><div style="font-family:Arial;font-size:10pt">I am an external observer of your interesting discussions. It has been a bit surprising to me that the work of prof. <span style="font-size:10pt"> </span><span style="font-size:10pt">Nikolic</span><span style="font-size:10pt"> does not  </span><font face="Arial" style="font-size:13px"><span style="font-size:10pt">comment the work </span></font><font face="Arial" style="font-size:13px"><span style="font-size:13.3333px">@book{Fus:05,</span></font>  title={Cortex and mind: Unifying cognition}, author={J.M. Fuster}, year={2005}, publisher={Oxford university press}}, which<i> I feel </i>as the highly relevant predecessor of his work.</div><div style="font-family:Arial;font-size:10pt"><br></div><div style="font-family:Arial;font-size:10pt">                       Best regards  Miroslav Karny</div><div style="font-family:Arial;font-size:10pt">                                              <a href="https://www.utia.cas.cz/people/karny" target="_blank">https://www.utia.cas.cz/people/karny</a></div><div style="font-family:Arial;font-size:10pt"><br>Danko Nikolic  wrote:<br><blockquote style="margin-left:8px;padding-left:8px;border-left:1px solid lightgrey">

Dear Gary and everyone,

I am continuing the discussion from where we left off a few months ago. Back then, some of us agreed that the problem of understanding remains unsolved.

As a reminder, the challenge for connectionism was to 1) learn with few examples and 2) apply the knowledge to a broad set of situations.

I am happy to announce that I have now finished a draft of a paper in which I propose how the brain is able to achieve that. The manuscript requires a bit of patience, for two reasons: first, the reader may be encountering certain aspects of brain physiology for the first time; second, it may take some effort to grasp the counterintuitive implications of the new ideas (this requires a different way of thinking than the one connectionism has accustomed us to).

In short, I am suggesting that instead of the connectionist paradigm, we adopt transient selection of subnetworks. The mechanisms that transiently select brain subnetworks are distributed all over the nervous system and, I argue, are our main machinery for thinking/cognition. The surprising outcome is that neural activation, which was central in connectionism, now plays only a supportive role, while the real 'workers' within the brain are the mechanisms for transient selection of subnetworks.
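
To give a flavor of the contrast, here is a toy sketch (purely illustrative; the random gating below is only a stand-in for the selection mechanisms, not the proposal in the manuscript): the same stored weights can support many different, transiently selected effective networks.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    W = rng.standard_normal((n, n))   # one fixed weight matrix ("connectionism")
    x = rng.standard_normal(n)        # some input activity

    def run_subnetwork(gate):
        """Only the transiently selected units participate in processing."""
        g = gate.astype(float)
        return np.tanh((W * np.outer(g, g)) @ (x * g))

    gate_a = rng.random(n) < 0.4      # one transient selection
    gate_b = rng.random(n) < 0.4      # a different selection over the same weights
    y_a, y_b = run_subnetwork(gate_a), run_subnetwork(gate_b)
    # Identical stored weights W, yet two different effective circuits did the work.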

I also explain how I think transient selection achieves learning with only a few examples, and how the learned knowledge can be applied to a broad set of situations.

The manuscript is available to everyone and can be downloaded here:
https://bit.ly/3IFs8Ug
(I apologize for the neuroscience lingo, which I tried to minimize.)

It will likely take a wide effort to implement these concepts as an AI technology, provided my ideas do not have a major flaw in the first place. Does anyone see a flaw?

Thanks.

Danko

Dr. Danko Nikolić
http://www.danko-nikolic.com
https://www.linkedin.com/in/danko-nikolic/

On Thu, Feb 3, 2022 at 5:25 PM Gary Marcus <gary.marcus@nyu.edu> wrote:

> Dear Danko,
>
> Well said. I had a somewhat similar response to Jeff Dean's 2021 TED talk, in which he said (paraphrasing from memory, because I don't remember the precise words) that the famous 2012 Quoc Le unsupervised model [https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf] had learned the concept of a cat. In reality the model had clustered together some cat-like images based on the image statistics it had extracted, but it was a long way from a full, counterfactual-supporting concept of a cat, much as you describe below.
>
> I fully agree with you that the reason for even having a semantics is, as you put it, "to 1) learn with a few examples and 2) apply the knowledge to a broad set of situations." GPT-3 sometimes gives the appearance of having done so, but it falls apart under close inspection, so the problem remains unsolved.
>
> Gary
>
> On Feb 3, 2022, at 3:19 AM, Danko Nikolic <danko.nikolic@gmail.com> wrote:
>
> G. Hinton wrote: "I believe that any reasonable person would admit that if you ask a neural net to draw a picture of a hamster wearing a red hat and it draws such a picture, it understood the request."
>
> I would like to suggest why drawing a hamster with a red hat does not necessarily imply understanding of the statement "hamster wearing a red hat". To understand "hamster wearing a red hat" would mean inferring, in newly emerging situations of this hamster, all the real-life implications that the red hat brings to the little animal.
>
> What would happen to the hat if the hamster rolls on its back? (Would the hat fall off?)
> What would happen to the red hat when the hamster enters its lair? (Would the hat fall off?)
> What would happen to that hamster when it goes foraging? (Would the red hat have an influence on finding food?)
> What would happen in a situation of being chased by a predator? (Would it be easier for predators to spot the hamster?)
>
> ...and so on.
>
> Countless questions can be asked. One has understood "hamster wearing a red hat" only if one can answer many such real-life questions reasonably well. Similarly, a student has understood the material in a class only if they can apply it in real-life situations (e.g., applying the Pythagorean theorem). If a student gives a correct answer to a multiple-choice question, we don't know whether the student understood the material or whether this was just rote learning (often, it is rote learning).
>
> I also suggest that understanding comes together with effective learning: we store new information in such a way that we can recall it later and use it effectively, i.e., make good inferences in newly emerging situations based on this knowledge.
>
> In short: understanding makes us humans able to 1) learn with a few examples and 2) apply the knowledge to a broad set of situations.
>
> No neural network today has such capabilities, and we don't know how to give them such capabilities. Neural networks need large amounts of training examples that cover a large variety of situations, and then the networks can only deal with what the training examples have already covered. Neural networks cannot extrapolate in that 'understanding' sense.
>
> I suggest that understanding truly extrapolates from a piece of knowledge. It is not about satisfying a task such as translation between languages or drawing hamsters with hats.
> It is about how you got the capability to complete the task: Did you have only a few examples that covered something different but related, and did you then extrapolate from that knowledge? If yes, this is going in the direction of understanding. Have you seen countless examples and then interpolated among them? Then perhaps it is not understanding.
>
> So, for the case of drawing a hamster wearing a red hat, understanding perhaps would have taken place if the following had happened before that:
>
> 1) first, the network learned about hamsters (not many examples)
> 2) after that, the network learned about red hats (outside the context of hamsters and without many examples)
> 3) finally, the network learned about drawing (outside the context of hats and hamsters, not many examples)
>
> After that, the network is asked to draw a hamster with a red hat. If it does so successfully, maybe we have started cracking the problem of understanding.
>
> Note also that this requires the network to learn sequentially without exhibiting catastrophic forgetting of the previous knowledge, which is possibly also a consequence of human learning by understanding.
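>
> A minimal sketch of this protocol as a test harness (illustrative only: `learn` and `ask` are hypothetical interfaces, and the stub below passes trivially; a real system would be judged on its actual outputs):
>
>     def understanding_test(model, few_hamsters, few_hats, few_drawings):
>         # Stages 1-3: sequential, few-shot, mutually out of context.
>         model.learn("hamster", few_hamsters)
>         model.learn("red hat", few_hats)
>         model.learn("drawing", few_drawings)
>         # Composition: a situation none of the training stages covered.
>         composed = model.ask("draw a hamster wearing a red hat")
>         # Sequential learning must not erase earlier concepts.
>         retained = all(model.ask(c) for c in ("hamster", "red hat", "drawing"))
>         return composed and retained
>
>     class StubModel:
>         """Placeholder standing in for any real few-shot learner."""
>         def __init__(self):
>             self.memory = {}
>         def learn(self, concept, examples):
>             self.memory[concept] = examples
>         def ask(self, query):
>             return any(c in query for c in self.memory)
>
>     assert understanding_test(StubModel(), ["h1", "h2"], ["r1"], ["d1"])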
>
> Danko
>
> Dr. Danko Nikolić
> http://www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
> --- A progress usually starts with an insight ---
>
> On Thu, Feb 3, 2022 at 9:55 AM Asim Roy <ASIM.ROY@asu.edu> wrote:
>
>> Without getting into the specific dispute between Gary and Geoff, I think that with approaches similar to GLOM we are finally headed in the right direction. There is plenty of neurophysiological evidence for single-cell abstractions and multisensory neurons in the brain, which one might claim correspond to symbols. And I think we can finally reconcile the decades-old dispute between Symbolic AI and Connectionism.
>>
>> GARY: (Your GLOM, which as you know I praised publicly, is in many ways an effort to wind up with encodings that effectively serve as symbols in exactly that way, guaranteed to serve as consistent representations of specific concepts.)
>>
>> GARY: I have *never* called for dismissal of neural networks, but rather for some hybrid between the two (as you yourself contemplated in 1991); the point of the 2001 book was to characterize exactly where multilayer perceptrons succeeded and broke down, and where symbols could complement them.
>>
>> Asim Roy
>> Professor, Information Systems
>> Arizona State University
>> Lifeboat Foundation Bios: Professor Asim Roy (https://lifeboat.com/ex/bios.asim.roy)
>> Asim Roy | iSearch (https://isearch.asu.edu/profile/9973)
href="mailto:%3Cconnectionists-bounces@mailman.srv.cs.cmu.edu%3E" target="_blank"><connectionists-bounces@mailman.srv.cs.cmu.edu></a> *On<br>>> Behalf Of *Gary Marcus<br>>> *Sent:* Wednesday, February 2, 2022 1:26 PM<br>>> *To:* Geoffrey Hinton <a href="mailto:%3Cgeoffrey.hinton@gmail.com%3E" target="_blank"><geoffrey.hinton@gmail.com></a><br>>> *Cc:* AIhub <a href="mailto:%3Caihuborg@gmail.com%3E" target="_blank"><aihuborg@gmail.com></a>; <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" target="_blank">connectionists@mailman.srv.cs.cmu.edu</a><br>>> *Subject:* Re: Connectionists: Stephen Hanson in conversation with Geoff<br>>> Hinton<br>>><br>>><br>>><br>>> Dear Geoff, and interested others,<br>>><br>>><br>>><br>>> What, for example, would you make of a system that often drew the<br>>> red-hatted hamster you requested, and perhaps a fifth of the time gave you<br>>> utter nonsense?  Or say one that you trained to create birds but sometimes<br>>> output stuff like this:<br>>><br>>><br>>><br>>> <image001.png><br>>><br>>><br>>><br>>> One could<br>>><br>>><br>>><br>>> a. avert one’s eyes and deem the anomalous outputs irrelevant<br>>><br>>> or<br>>><br>>> b. wonder if it might be possible that sometimes the system gets the<br>>> right answer for the wrong reasons (eg partial historical contingency), and<br>>> wonder whether another approach might be indicated.<br>>><br>>><br>>><br>>> Benchmarks are harder than they look; most of the field has come to<br>>> recognize that. The Turing Test has turned out to be a lousy measure of<br>>> intelligence, easily gamed. It has turned out empirically that the Winograd<br>>> Schema Challenge did not measure common sense as well as Hector might have<br>>> thought. (As it happens, I am a minor coauthor of a very recent review on<br>>> this very topic: <a href="https://arxiv.org/abs/2201.02387" target="_blank">https://arxiv.org/abs/2201.02387</a><br>>> <<a href="https://urldefense.com/v3/__https:/arxiv.org/abs/2201.02387__;!!IKRxdwAv5BmarQ!INA0AMmG3iD1B8MDtLfjWCwcBjxO-e-eM2Ci9KEO_XYOiIEgiywK-G_8j6L3bHA$" target="_blank">https://urldefense.com/v3/__https:/arxiv.org/abs/2201.02387__;!!IKRxdwAv5BmarQ!INA0AMmG3iD1B8MDtLfjWCwcBjxO-e-eM2Ci9KEO_XYOiIEgiywK-G_8j6L3bHA$</a>>)<br>>> But its conquest in no way means machines now have common sense; many<br>>> people from many different perspectives recognize that (including, e.g.,<br>>> Yann LeCun, who generally tends to be more aligned with you than with me).<br>>><br>>><br>>><br>>> So: on the goalpost of the Winograd schema, I was wrong, and you can<br>>> quote me; but what you said about me and machine translation remains your<br>>> invention, and it is inexcusable that you simply ignored my 2019<br>>> clarification. On the essential goal of trying to reach meaning and<br>>> understanding, I remain unmoved; the problem remains unsolved.<br>>><br>>><br>>><br>>> All of the problems LLMs have with coherence, reliability, truthfulness,<br>>> misinformation, etc stand witness to that fact. (Their persistent inability<br>>> to filter out toxic and insulting remarks stems from the same.) 
>> I am hardly the only person in the field to see that progress on any given benchmark does not inherently mean that the deep underlying problems have been solved. You yourself, in fact, have occasionally made that point.
>>
>> With respect to embeddings: embeddings are very good for natural language *processing*; but NLP is not the same as NL*U*. When it comes to *understanding*, their worth is still an open question. Perhaps they will turn out to be necessary; they clearly aren't sufficient. In the extreme, they might even collapse into being symbols, in the sense of uniquely identifiable encodings, akin to the ASCII code, in which a specific set of numbers stands for a specific word or concept. (Wouldn't that be ironic?)
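>>
>> As a toy illustration of that contrast (hypothetical numbers; not anyone's actual encoding scheme):
>>
>>     import numpy as np
>>
>>     # A symbol: a unique, consistent code per concept, akin to ASCII for characters.
>>     SYMBOL = {"hamster": 1001, "red hat": 1002}          # the same code every time
>>
>>     # An embedding: a learned, graded vector; similar usage yields nearby vectors.
>>     EMB = {"hamster": np.array([0.21, -0.73, 0.05]),
>>            "gerbil":  np.array([0.19, -0.70, 0.11])}
>>     similarity = float(EMB["hamster"] @ EMB["gerbil"])   # graded, not all-or-none
>>     # If embeddings became unique and context-invariant, they would act as symbols.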
>>
>> (Your GLOM, which as you know I praised publicly, is in many ways an effort to wind up with encodings that effectively serve as symbols in exactly that way, guaranteed to serve as consistent representations of specific concepts.)
>>
>> Notably absent from your email is any kind of apology for misrepresenting my position. It is one thing to say that "many people thirty years ago once thought X" and another to say "Gary Marcus said X in 2015", when I didn't. I have consistently felt throughout our interactions that you have mistaken me for Zenon Pylyshyn; indeed, you once (at NeurIPS 2014) apologized to me for having made that error. I am still not he.
>>
>> Which maybe connects to the last point: if you read my work, you would see thirty years of arguments *for* neural networks, just not in the way that you want them to exist. I have ALWAYS argued that there is a role for them; characterizing me as a person "strongly opposed to neural networks" misses the whole point of my 2001 book, which was subtitled "Integrating Connectionism and Cognitive Science."
>>
>> In the last two decades or so you have insisted (for reasons you have never fully clarified, so far as I know) on abandoning symbol-manipulation, but the reverse is not the case: I have *never* called for dismissal of neural networks, but rather for some hybrid between the two (as you yourself contemplated in 1991); the point of the 2001 book was to characterize exactly where multilayer perceptrons succeeded and broke down, and where symbols could complement them. It's a rhetorical trick (which is what the previous thread was about) to pretend otherwise.
>>
>> Gary
>>
>> On Feb 2, 2022, at 11:22, Geoffrey Hinton <geoffrey.hinton@gmail.com> wrote:
>>
>> Embeddings are just vectors of soft feature detectors, and they are very good for NLP. The quote on my webpage from Gary's 2015 chapter implies the opposite.
>>
>> A few decades ago, everyone I knew would have agreed that the ability to translate a sentence into many different languages was strong evidence that you understood it.
>>
>> But once neural networks could do that, their critics moved the goalposts. An exception is Hector Levesque, who defined the goalposts more sharply by saying that the ability to get pronoun references correct in Winograd sentences is a crucial test. Neural nets are improving at that but still have some way to go. Will Gary agree that when they can get pronoun references correct in Winograd sentences they really do understand? Or does he want to reserve the right to weasel out of that too?
>>
>> Some people, like Gary, appear to be strongly opposed to neural networks because they do not fit their preconceived notions of how the mind should work.
>>
>> I believe that any reasonable person would admit that if you ask a neural net to draw a picture of a hamster wearing a red hat and it draws such a picture, it understood the request.
>>
>> Geoff
>>
>> On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <gary.marcus@nyu.edu> wrote:
>>
>> Dear AI Hub, cc: Steven Hanson and Geoffrey Hinton, and the larger neural network community,
>>
>> There has been a lot of recent discussion on this list about framing and scientific integrity. Often the first step in restructuring narratives is to bully and dehumanize critics. The second is to misrepresent their position. People in positions of power are sometimes tempted to do this.
>>
>> The Hinton-Hanson interview that you just published is a real-time example of just that. It opens with a needless and largely content-free personal attack on a single scholar (me), with the explicit intention of discrediting that person. Worse, the only substantive thing it says is false.
>>
>> Hinton says "In 2015 he [Marcus] made a prediction that computers wouldn't be able to do machine translation."
>>
>> I never said any such thing.
>>
>> What I predicted, rather, was that multilayer perceptrons, as they existed then, would not (on their own, absent other mechanisms) *understand* language. Seven years later, they still haven't, except in the most superficial way.
>>
>> I made no comment whatsoever about machine translation, which I view as a separate problem, solvable to a certain degree by correspondence without semantics.
>>
>> I specifically tried to clarify Hinton's confusion in 2019, but, disappointingly, he has continued to purvey misinformation despite that clarification. Here is what I wrote privately to him then, which should have put the matter to rest:
>>
>> You have taken a single out-of-context quote [from 2015] and misrepresented it. The quote, which you have prominently displayed at the bottom of your own web page, says:
>>
>> Hierarchies of features are less suited to challenges such as language, inference, and high-level planning. For example, as Noam Chomsky famously pointed out, language is filled with sentences you haven't seen before.
>> Pure classifier systems don't know what to do with such sentences. The talent of feature detectors -- in identifying which member of some category something belongs to -- doesn't translate into understanding novel sentences, in which each sentence has its own unique meaning.
>>
>> It does *not* say "neural nets would not be able to deal with novel sentences"; it says that hierarchies of feature detectors (on their own, if you read the context of the essay) would have trouble *understanding* novel sentences.
>>
>> Google Translate does not yet *understand* the content of the sentences it translates. It cannot reliably answer questions about who did what to whom, or why; it cannot infer the order of events in paragraphs; it can't determine the internal consistency of those events; and so forth.
>>
>> Since then, a number of scholars, such as the computational linguist Emily Bender, have made similar points, and indeed current LLM difficulties with misinformation, incoherence and fabrication all follow from these concerns. Quoting from Bender's prizewinning 2020 ACL article on the matter with Alexander Koller (https://aclanthology.org/2020.acl-main.463.pdf), which also emphasizes issues of understanding and meaning:
>>
>> *The success of the large neural language models on many NLP tasks is exciting. However, we find that these successes sometimes lead to hype in which these models are being described as "understanding" language or capturing "meaning". In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. ... a clear understanding of the distinction between form and meaning will help guide the field towards better science around natural language understanding.*
>>
>> Her later article with Gebru on language models as "stochastic parrots" is in some ways an extension of this point; machine translation requires mimicry, while true understanding (which is what I was discussing in 2015) requires something deeper than that.
>>
>> Hinton's intellectual error here is in equating machine translation with the deeper comprehension that robust natural language understanding will require; as Bender and Koller observed, the two appear not to be the same. (There is a longer discussion of the relation between language understanding and machine translation, and why the latter has turned out to be more approachable than the former, in my 2019 book with Ernest Davis.)
>>
>> More broadly, Hinton's ongoing dismissiveness of research from perspectives other than his own (e.g., linguistics) has done the field a disservice.
>>
>> As Herb Simon once observed, science does not have to be zero-sum.
>>
>> Sincerely,
>> Gary Marcus
>> Professor Emeritus
>> New York University
>>
>> On Feb 2, 2022, at 06:12, AIhub <aihuborg@gmail.com> wrote:
>>
>> Stephen Hanson in conversation with Geoff Hinton
>>
>> In the latest episode of this video series for AIhub.org, Stephen Hanson talks to Geoff Hinton about neural networks, backpropagation, overparameterization, digit recognition, voxel cells, syntax and semantics, Winograd sentences, and more.
>>
>> You can watch the discussion, and read the transcript, here:
>> https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/
>>
>> About AIhub:
>> AIhub is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information through AIhub.org (https://aihub.org/).
target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__aihub.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=IKFanqeMi73gOiS7yD-X_vRx_OqDAwv1Il5psrxnhIA&e=</a>>).<br>>> We help researchers publish the latest AI news, summaries of their work,<br>>> opinion pieces, tutorials and more.  We are supported by many leading<br>>> scientific organizations in AI, namely AAAI<br>>> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aaai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=wBvjOWTzEkbfFAGNj9wOaiJlXMODmHNcoWO5JYHugS0&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__aaai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=wBvjOWTzEkbfFAGNj9wOaiJlXMODmHNcoWO5JYHugS0&e=</a>>,<br>>> NeurIPS<br>>> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__neurips.cc_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=3-lOHXyu8171pT_UE9hYWwK6ft4I-cvYkuX7shC00w0&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__neurips.cc_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=3-lOHXyu8171pT_UE9hYWwK6ft4I-cvYkuX7shC00w0&e=</a>>,<br>>> ICML<br>>> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__icml.cc_imls_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=JJyjwIpPy9gtKrZzBMbW3sRMh3P3Kcw-SvtxG35EiP0&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__icml.cc_imls_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=JJyjwIpPy9gtKrZzBMbW3sRMh3P3Kcw-SvtxG35EiP0&e=</a>>,<br>>> AIJ<br>>> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=</a>><br>>> /IJCAI<br>>> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=</a>>,<br>>> ACM SIGAI<br>>> <<a 
href="https://urldefense.proofpoint.com/v2/url?u=http-3A__sigai.acm.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=7rC6MJFaMqOms10EYDQwfnmX-zuVNhu9fz8cwUwiLGQ&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=http-3A__sigai.acm.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=7rC6MJFaMqOms10EYDQwfnmX-zuVNhu9fz8cwUwiLGQ&e=</a>>,<br>>> EurAI/AICOMM, CLAIRE<br>>> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__claire-2Dai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=66ZofDIhuDba6Fb0LhlMGD3XbBhU7ez7dc3HD5-pXec&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__claire-2Dai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=66ZofDIhuDba6Fb0LhlMGD3XbBhU7ez7dc3HD5-pXec&e=</a>><br>>> and RoboCup<br>>> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.robocup.org__&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=bBI6GRq--MHLpIIahwoVN8iyXXc7JAeH3kegNKcFJc0&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__www.robocup.org__&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=bBI6GRq--MHLpIIahwoVN8iyXXc7JAeH3kegNKcFJc0&e=</a>><br>>> .<br>>><br>>> Twitter: @aihuborg<br>>><br>>><br>><br>> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.avast.com_sig-2Demail-3Futm-5Fmedium-3Demail-26utm-5Fsource-3Dlink-26utm-5Fcampaign-3Dsig-2Demail-26utm-5Fcontent-3Dwebmail&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=Ao9QQWtO62go0hx1tb3NU6xw2FNBadjj8q64-hl5Sx4&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__www.avast.com_sig-2Demail-3Futm-5Fmedium-3Demail-26utm-5Fsource-3Dlink-26utm-5Fcampaign-3Dsig-2Demail-26utm-5Fcontent-3Dwebmail&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=Ao9QQWtO62go0hx1tb3NU6xw2FNBadjj8q64-hl5Sx4&e=</a>> Virus-free.<br>> <a>http://www.avast.com</a><br>> <<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.avast.com_sig-2Demail-3Futm-5Fmedium-3Demail-26utm-5Fsource-3Dlink-26utm-5Fcampaign-3Dsig-2Demail-26utm-5Fcontent-3Dwebmail&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=Ao9QQWtO62go0hx1tb3NU6xw2FNBadjj8q64-hl5Sx4&e=" target="_blank">https://urldefense.proofpoint.com/v2/url?u=https-3A__www.avast.com_sig-2Demail-3Futm-5Fmedium-3Demail-26utm-5Fsource-3Dlink-26utm-5Fcampaign-3Dsig-2Demail-26utm-5Fcontent-3Dwebmail&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=Ao9QQWtO62go0hx1tb3NU6xw2FNBadjj8q64-hl5Sx4&e=</a>><br>><br>><br>><br></blockquote>