<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
Dear Tsvi,</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
If this chat continues, please do write to me one-on-one.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
It is true that some conferences are controlled by self-interested cliques who do not run an open meeting, despite protestations to the contrary.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
Other, more open, conferences do not always have enough reviewers to expertly review articles on all the topics that are covered by our interdisciplinary field.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
My advice if this happens is simply to find another conference where your work might be better appreciated.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
That alternative conferences exist is another advantage of working in an interdisciplinary field.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
All of us have had disappointing experiences. </div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
My first experience was to send a series of articles, about discoveries that I had made over a period of many years, to a single journal. I was just starting out and naive enough to send them all to the same place. </div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
As a result, they were all rejected without even being reviewed. The editor-in-chief, who years later became a friend, told me that they simply didn't know how to handle so many articles at once.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
Sometime later, I submitted two articles, close in time, to two different journals. I tried to anticipate which journal would be more likely to reject the article that I sent to it. </div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
I was very proud of one article. It was, at least to my mind, quite a deep result. The other article was reasonably good craft, but more a continuation of earlier work than a breakthrough.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
Perhaps not surprisingly, the deep article got rejected while the less important article was accepted.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
I submitted the rejected article almost immediately to another journal and it was eventually published in a good place. </div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
The moral of the story is that, if you believe in your work, and the criticisms of it are not valid, do not give up.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
If, however, the criticisms are valid, you must not let your love of a result blind you to its weaknesses. </div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
I came to believe that all criticisms by reviewers are valuable and should be taken into account in your revision.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
Even if a reviewer's criticisms are, to your mind, wrong-headed, they represent the viewpoint of a more-than-usually-qualified reader who has done you the courtesy of taking enough time to read your article.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
If you want to reach the maximum number of readers, then you should revise your article accordingly.</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<span style="color: rgb(0, 0, 0); font-family: Arial, Helvetica, sans-serif; font-size: 18pt;">Good luck!</span><br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
Steve</div>
<div>
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 18pt; color: rgb(0, 0, 0);" class="elementToProof">
<br>
</div>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Tsvi Achler <achler@gmail.com><br>
<b>Sent:</b> Sunday, July 17, 2022 10:55 AM<br>
<b>To:</b> Grossberg, Stephen <steve@bu.edu><br>
<b>Cc:</b> Asim Roy <ASIM.ROY@asu.edu>; Danko Nikolic <danko.nikolic@gmail.com>; AIhub <aihuborg@gmail.com>; connectionists@mailman.srv.cs.cmu.edu <connectionists@mailman.srv.cs.cmu.edu><br>
<b>Subject:</b> Re: Connectionists: Neural architectures that embody biological intelligence</font>
<div> </div>
</div>
<div>
<div dir="ltr">
<div><br>
</div>
<div>Dear Steve,</div>
<div><br>
</div>
<div>What motivated me to write was your response, a couple of messages ago, to someone who is not established in the field and was describing their model.</div>
<div><br>
</div>
<div>Studies of academics show that researchers who are not established but who do original work are published and cited less.  Please see this article:
<a href="http://www.nber.org/papers/w22180" data-auth="NotApplicable">www.nber.org/papers/w22180</a></div>
<div><br>
</div>
<div>Moreover, established researchers tend to push their theories, and increments of their theories, so strongly that it significantly affects progress in the field.  Please see this article:
<a href="http://www.nber.org/papers/w21788" data-auth="NotApplicable">www.nber.org/papers/w21788</a></div>
<div><br>
</div>
<div>Since you mention it, the personal instance I am referring to is a conference where I got the following review (and I am paraphrasing):</div>
<div><i>I don't really understand this model, but it must be ART, and if it is ART this is wrong and that is wrong, so I recommend rejecting it.</i>  And in a box for reviewer certainty, the review was listed as <i>100% certain</i>.</div>
<div><br>
</div>
<div>The consequence was that I had only 3 minutes to talk about a model that is counterintuitive given today's notions, as someone who had exhausted all their meager resources just to get there.  This summarizes my experience in academia trying to put forward something new.</div>
<div><br>
</div>
<div>I am happy to pull up the specific text, but that would distract from the point. The point is that at least this review was transparent.</div>
<div>
<div>Most reviewers are not likely to be as transparent when something is counterintuitive, non-normative, and thus harder to understand.</div>
<div><br>
</div>
</div>
<div>What I am saying is that, given this knowledge about academia, established researchers should be very careful, as they can easily stifle new research without realizing it.</div>
<div><br>
</div>
<div>
<div>If established academics push too strongly, then academia can become a political club, not a place for progress.</div>
<div>I believe this is a major contributor to why so little progress has been made in the field of understanding the brain through connectionist models.</div>
<div></div>
</div>
<div><br>
</div>
<div>Sincerely,</div>
<div>-Tsvi</div>
<div><br>
</div>
<div><br>
<div class="x_gmail_quote">
<div dir="ltr" class="x_gmail_attr">On Sat, Jul 16, 2022 at 8:45 AM Grossberg, Stephen <<a href="mailto:steve@bu.edu" data-auth="NotApplicable">steve@bu.edu</a>> wrote:<br>
</div>
<blockquote class="x_gmail_quote" style="margin:0px 0px 0px 0.8ex; border-left:1px solid rgb(204,204,204); padding-left:1ex">
<div dir="ltr">
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Dear Tsvi,</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
I have no idea why you are writing to me.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
I would prefer that you did not engage the entire connectionists mailing list. However, since you did, I need to include everyone in my reply.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
For starters, I have not been an editor of any journal since 2010.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
When I was Editor-in-Chief of <i>Neural Networks</i> before that, and a new article was submitted, I assigned it to one of over 70 action editors who was a specialist in the topic of the article.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
That action editor then took full responsibility for obtaining three reviews of the article. If any reviewer disagreed with the other reviewers for a potentially serious reason, then the action editor typically sought yet another reviewer to try to resolve the difference.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Almost always, the reviewers agreed about publication recommendations, so this was not needed.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
I always followed the recommendations of action editors to publish or not, based upon the above process.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
I entered a decision only if the action editor solicited my help with a problem for which he/she needed advice. This hardly ever happened.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Best,</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Steve</div>
<div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132Signature">
<div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132divtagdefaultwrapper" dir="ltr" style="font-size:18pt; color:rgb(0,0,0); font-family:Arial,Helvetica,sans-serif">
<p style="margin-top:0px; margin-bottom:0px"></p>
</div>
</div>
</div>
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132signature_bookmark">
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132appendonsend">
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<hr style="display:inline-block; width:98%">
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132divRplyFwdMsg" dir="ltr">
<font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Tsvi Achler <<a href="mailto:achler@gmail.com" data-auth="NotApplicable">achler@gmail.com</a>><br>
<b>Sent:</b> Saturday, July 16, 2022 10:51 AM<br>
<b>To:</b> Grossberg, Stephen <<a href="mailto:steve@bu.edu" data-auth="NotApplicable">steve@bu.edu</a>><br>
<b>Cc:</b> Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" data-auth="NotApplicable">ASIM.ROY@asu.edu</a>>; Danko Nikolic <<a href="mailto:danko.nikolic@gmail.com" data-auth="NotApplicable">danko.nikolic@gmail.com</a>>; AIhub <<a href="mailto:aihuborg@gmail.com" data-auth="NotApplicable">aihuborg@gmail.com</a>>;
<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">
connectionists@mailman.srv.cs.cmu.edu</a> <<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">connectionists@mailman.srv.cs.cmu.edu</a>><br>
<b>Subject:</b> Re: Connectionists: Neural architectures that embody biological intelligence</font>
<div> </div>
</div>
<div>
<div dir="ltr">
<div dir="ltr" style="color:rgb(0,0,0)">
<div>Dear Stephen,
<div>I have a connectionist model in which feedback takes on a much greater role than in resonance and other theories.</div>
<div>I also have a richer background than many researchers in this field.  I have a degree in electrical engineering and computer science; I did my PhD work in neurophysiology, recording neurons, and in a cognitive lab, recording differential human reaction times to visual stimulation. I also earned an MD, focusing on neurology and patients.</div>
<span style="color:rgb(80,0,80)">
<div style="color:rgb(0,0,0)"><br>
</div>
<div style="color:rgb(0,0,0)">Consistently, throughout the years, established academics and their associates have blocked this theory's publication and funding in favor of their own.</div>
<div style="color:rgb(0,0,0)">Since academia is mostly political, this is a big deal. Moreover, it bothers me to see this done to others.</div>
<div style="color:rgb(0,0,0)"><br>
</div>
</span>
<div>Unfortunately, you are by far NOT the worst at doing so; you are just the most transparent about it.</div>
<div><br>
</div>
<span style="color:rgb(80,0,80)">
<div style="color:rgb(0,0,0)">I came to the conclusion that academia is not a place to innovate, especially if you come from a multidisciplinary background, because (analogous to some of the models) the politics multiply exponentially.</div>
<div style="color:rgb(0,0,0)"><br>
</div>
<div style="color:rgb(0,0,0)">Although your work was innovative in the grand scheme of things, what you and other well-established academics are doing is not OK.</div>
</span><span style="color:rgb(80,0,80)">
<div style="color:rgb(0,0,0)">Sincerely,</div>
<div style="color:rgb(0,0,0)">-Tsvi</div>
<div style="color:rgb(0,0,0)">
<div><br>
</div>
</div>
</span></div>
</div>
</div>
<br>
<div>
<div dir="ltr">On Sat, Jul 16, 2022 at 12:04 AM Grossberg, Stephen <<a href="mailto:steve@bu.edu" data-auth="NotApplicable">steve@bu.edu</a>> wrote:<br>
</div>
<blockquote style="margin:0px 0px 0px 0.8ex; border-left:1px solid rgb(204,204,204); padding-left:1ex">
<div dir="ltr">
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Dear Asim and Danko,</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
A lot of your concerns about scaling do not apply to the kinds of biological neural networks that my colleagues and I have developed over the years. You can find a self-contained summary of many of them in my Magnum Opus: </div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<a href="https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552" data-auth="NotApplicable" id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534LPNoLPOWALinkPreview">https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552</a><br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
As Asim notes below, it is indeed the case that ART can often make good predictions based on small amounts of learned data. This applies as well to large-scale applications to naturalistic data.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Gail Carpenter and her colleagues have, for example, shown how this works in learning complicated maps of multiple vegetation classes during remote sensing; e.g.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Carpenter, G.A., Gopal, S., Macomber, S., Martens, S., & Woodcock, C.E. (1999). A neural network method for mixture estimation for vegetation mapping. Remote Sensing of Environment, 70(2), 138-152.<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<a href="http://techlab.bu.edu/members/gail/articles/127_Mixtures_RSE_1999_.pdf" data-auth="NotApplicable" id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534LPlnkOWALinkPreview">http://techlab.bu.edu/members/gail/articles/127_Mixtures_RSE_1999_.pdf</a><br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
or in learning medical database predictions in response to incomplete, probabilistic, and even incorrect data. </div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
In this regard, Gail et al. have also shown how an ART system can incrementally learn a cognitive hierarchy of rules with which to understand such data; i.e., it converts information into knowledge; e.g.,</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Carpenter, G.A., & Ravindran, A. (2008). Unifying multiple knowledge domains using the ARTMAP information fusion system. Proceedings of the 11th International Conference on Information Fusion, Cologne, Germany, June 30 - July 3, 2008.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<a href="http://techlab.bu.edu/members/gail/articles/155_Fusion2008_CarpenterRavindran.pdf" data-auth="NotApplicable" id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534LPlnk571453">http://techlab.bu.edu/members/gail/articles/155_Fusion2008_CarpenterRavindran.pdf</a><br>
</div>
<div></div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Carpenter, G.A. & Markuzon, N. (1998). ARTMAP‑IC and medical diagnosis: Instance counting and inconsistent cases. Neural Networks, 11(2), 323-336.
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<a href="http://techlab.bu.edu/members/gail/articles/117_ARTMAP-IC_1998_.pdf" data-auth="NotApplicable" id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534LPlnkOWALinkPreview">http://techlab.bu.edu/members/gail/articles/117_ARTMAP-IC_1998_.pdf</a><br>
</div>
<br>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
My own work is filled with models that incrementally learn to carry out goal-oriented tasks without regard to scaling concerns. This work develops neural architectures that involve the coordinated actions of many brain regions, not just learned classification.
 These architectures are supported by unified and principled explanations of lots of psychological and neurobiological data; e.g.,</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Chang, H.-C., Grossberg, S., and Cao, Y. (2014). Where’s Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene. Frontiers in Integrative Neuroscience, doi: 10.3389/fnint.2014.00043.
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<a href="https://www.frontiersin.org/articles/10.3389/fnint.2014.00043/full" data-auth="NotApplicable" id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534LPNoLPOWALinkPreview_2">https://www.frontiersin.org/articles/10.3389/fnint.2014.00043/full</a><br>
</div>
<div></div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Grossberg, S., and Vladusich, T. (2010). How do children learn to follow gaze, share joint attention, imitate their teachers, and use tools during social interactions? Neural Networks, 23, 940-965.<br>
</div>
<div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534Signature">
<div>
<div dir="ltr" style="font-size:18pt; color:rgb(0,0,0); font-family:Arial,Helvetica,sans-serif">
<p style="margin-top:0px; margin-bottom:0px"></p>
</div>
</div>
</div>
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534signature_bookmark">
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534appendonsend">
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<a href="https://sites.bu.edu/steveg/files/2016/06/GrossbergVladusichNN2010.pdf" data-auth="NotApplicable" id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534LPNoLPOWALinkPreview_1">https://sites.bu.edu/steveg/files/2016/06/GrossbergVladusichNN2010.pdf</a><br>
</div>
<div></div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
See Figure 1 in the following article to get a sense of how many brain processes other than classification are needed to realize true biological intelligence:</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Grossberg, S. (2018). Desirability, availability, credit assignment, category learning, and attention: Cognitive-emotional and working memory dynamics of orbitofrontal, ventrolateral, and dorsolateral prefrontal cortices. Brain and Neuroscience Advances, May
 8, 2018.</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<a href="https://journals.sagepub.com/doi/full/10.1177/2398212818772179" data-auth="NotApplicable" id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534LPlnkOWALinkPreview_3">https://journals.sagepub.com/doi/full/10.1177/2398212818772179</a><br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Best,</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
Steve</div>
<hr style="display:inline-block; width:98%; font-family:Arial,Helvetica,sans-serif; font-size:18pt; color:rgb(0,0,0)">
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534divRplyFwdMsg" dir="ltr">
<font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" data-auth="NotApplicable">ASIM.ROY@asu.edu</a>><br>
<b>Sent:</b> Friday, July 15, 2022 9:35 AM<br>
<b>To:</b> Danko Nikolic <<a href="mailto:danko.nikolic@gmail.com" data-auth="NotApplicable">danko.nikolic@gmail.com</a>><br>
<b>Cc:</b> Grossberg, Stephen <<a href="mailto:steve@bu.edu" data-auth="NotApplicable">steve@bu.edu</a>>; Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" data-auth="NotApplicable">gary.marcus@nyu.edu</a>>; AIhub <<a href="mailto:aihuborg@gmail.com" data-auth="NotApplicable">aihuborg@gmail.com</a>>;
<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">
connectionists@mailman.srv.cs.cmu.edu</a> <<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">connectionists@mailman.srv.cs.cmu.edu</a>>; '<a href="mailto:maxversace@gmail.com" data-auth="NotApplicable">maxversace@gmail.com</a>'
 <<a href="mailto:maxversace@gmail.com" data-auth="NotApplicable">maxversace@gmail.com</a>><br>
<b>Subject:</b> RE: Connectionists: Stephen Hanson in conversation with Geoff Hinton</font>
<div> </div>
</div>
<div lang="EN-US">
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Dear Danko,</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<ol type="1" style="margin-bottom:0in; margin-top:0in">
<li style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I am not sure if I myself know all the uses of a knife, leave aside countless ones. Given a particular situation, I might simulate the potential usage in my mind, but I doubt our minds explore all the countless situations in which an object might be used as soon as they learn about it.
</li><li style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I am not sure if a 2 or 3 year old child, after having “learnt” about a knife, knows very many uses of it. I doubt the kid is awake all night and day simulating in the brain how and where
to use such a knife.</li><li style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">“Understanding” is a loaded term. I think it needs a definition.</li><li style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I am copying Max Versace, a student of Steve Grossberg. His company markets software that can learn quickly from a few examples. It is not exactly one-shot learning; it needs a few shots. I believe it’s a variation of ART, but Max can clarify the details. And Tsvi is doing similar work. So, what you are asking for may already exist, and linear scaling may be the worst-case scenario.</li></ol>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Best,</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Asim Roy</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Professor, Information Systems</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Arizona State University</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=" data-auth="NotApplicable">Lifeboat
 Foundation Bios: Professor Asim Roy</a></p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=" data-auth="NotApplicable">Asim
 Roy | iSearch (asu.edu)</a></p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div style="border-right:none; border-bottom:none; border-left:none; border-top:1pt solid rgb(225,225,225); padding:3pt 0in 0in">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><b>From:</b> Danko Nikolic <<a href="mailto:danko.nikolic@gmail.com" data-auth="NotApplicable">danko.nikolic@gmail.com</a>>
<br>
<b>Sent:</b> Friday, July 15, 2022 12:19 AM<br>
<b>To:</b> Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" data-auth="NotApplicable">ASIM.ROY@asu.edu</a>><br>
<b>Cc:</b> Grossberg, Stephen <<a href="mailto:steve@bu.edu" data-auth="NotApplicable">steve@bu.edu</a>>; Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" data-auth="NotApplicable">gary.marcus@nyu.edu</a>>; AIhub <<a href="mailto:aihuborg@gmail.com" data-auth="NotApplicable">aihuborg@gmail.com</a>>;
<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">
connectionists@mailman.srv.cs.cmu.edu</a><br>
<b>Subject:</b> Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton</p>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Dear Asim,</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I agree about the potential for linear scaling of ART and other connectionist systems. However, there are two problems. </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Problem number one kills it already: the real brain scales much better than linearly. For each new object learned, we are able to resolve countless new situations in which that object takes part (e.g., finding various uses for a knife, many of which may be new and ad hoc; this is a great ability of biological minds, often referred to as 'understanding'). Hence, simple linear scaling by adding more neurons for additional objects is not good enough to match biological intelligence.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">The second problem is overkill on top of that: linear scaling in connectionist systems works only in theory, under idealized conditions. In real life, say when working with ImageNet, the scaling turns into a power law with an exponent much larger than one: we need something like 500x more resources just to double the number of objects. Hence, in practice, the demand for resources explodes if you want to add more categories without losing accuracy. </p>
</div>
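<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">This power-law claim can be made concrete with a small back-of-the-envelope sketch. The power-law form R(n) = c·n^k and the exponent below are illustrative assumptions chosen only to reproduce the "500x for 2x" figure quoted above; they are not values taken from the manuscript:</p>

```python
import math

# Hypothetical power-law resource model: R(n) = c * n**k.
# If doubling the number of recognized objects costs 500x the
# resources, then 2**k = 500, i.e. k = log2(500) ~ 8.97,
# far above the exponent 1 of ideal linear scaling.
k = math.log2(500)

def resource_ratio(n_from, n_to, exponent):
    """Factor by which resources grow when going from n_from to n_to objects."""
    return (n_to / n_from) ** exponent

print(round(k, 2))                          # 8.97
print(round(resource_ratio(100, 200, k)))   # 500, by construction
print(resource_ratio(100, 200, 1.0))        # 2.0 under linear scaling
```

<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Under these assumptions, going from 100 to 200 objects at constant accuracy multiplies the resource bill by roughly 500, whereas an ideal linear scaler would only double it.</p>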
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">To summarize, there is no linear scaling in practice, nor would linear scaling suffice even if we found it. </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">This should be a strong enough argument to search for another paradigm, something that scales better than connectionism.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I discuss both problems in the new manuscript, and even dig a bit deeper into why connectionism lacks linear scaling in practice (I provide some revealing computations, with access to the code, in the Supplementary Materials, although much more work needs to be done).</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Danko</p>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><br clear="all">
</p>
<div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Dr. Danko Nikolić<br>
<a href="https://urldefense.com/v3/__http:/www.danko-nikolic.com__;!!IKRxdwAv5BmarQ!dRAUJv4Z-MYBdeXPR2F6nWM_fPxoHF-3d3u6QNonedYrac67POEvWJxIOhXM-JsMWH8mTU6G5JdOT5UoyE_lBRw$" data-auth="NotApplicable">www.danko-nikolic.com</a><br>
<a href="https://urldefense.com/v3/__https:/www.linkedin.com/in/danko-nikolic/__;!!IKRxdwAv5BmarQ!dRAUJv4Z-MYBdeXPR2F6nWM_fPxoHF-3d3u6QNonedYrac67POEvWJxIOhXM-JsMWH8mTU6G5JdOT5UoJhzJWDU$" data-auth="NotApplicable">https://www.linkedin.com/in/danko-nikolic/</a></p>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="color:rgb(34,34,34)">-- I wonder, how is the brain able to generate insight? --</span></p>
</div>
</div>
</div>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">On Thu, Jul 14, 2022 at 11:48 PM Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" data-auth="NotApplicable">ASIM.ROY@asu.edu</a>> wrote:</p>
</div>
<blockquote style="border-top:none; border-right:none; border-bottom:none; border-left:1pt solid rgb(204,204,204); padding:0in 0in 0in 6pt; margin-left:4.8pt; margin-right:0in">
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I think Steve Grossberg is generally correct. ART, RBF nets and similar flexible architectures can scale up almost linearly with problem size. My conjecture is that architectures that use distributed representation run into the scaling issue. On the other hand, distributed representation produces a more compact architecture than the shallow architectures of ART and RBFs. In terms of adding concepts, however, it is easy to add a new object or concept to the shallow architectures. </p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Perhaps someone can provide more insights on the architectural differences and the corresponding pros and cons of each.</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Best,</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Asim Roy</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Professor, Information Systems</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Arizona State University</p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=" data-auth="NotApplicable">Lifeboat
 Foundation Bios: Professor Asim Roy</a></p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=" data-auth="NotApplicable">Asim
 Roy | iSearch (asu.edu)</a></p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div>
<div style="border-right:none; border-bottom:none; border-left:none; border-top:1pt solid rgb(225,225,225); padding:3pt 0in 0in">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><b>From:</b> Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
<b>On Behalf Of </b>Grossberg, Stephen<br>
<b>Sent:</b> Thursday, July 14, 2022 10:14 AM<br>
<b>To:</b> Danko Nikolic <<a href="mailto:danko.nikolic@gmail.com" data-auth="NotApplicable">danko.nikolic@gmail.com</a>><br>
<b>Cc:</b> AIhub <<a href="mailto:aihuborg@gmail.com" data-auth="NotApplicable">aihuborg@gmail.com</a>>;
<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">
connectionists@mailman.srv.cs.cmu.edu</a><br>
<b>Subject:</b> Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton</p>
</div>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Dear Danko,</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">I will respond to your comment below to the entire list. I recommend that future interactions be done between just
 the two of us.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Everything that I write below is summarized in my new book:</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">In brief, within Adaptive Resonance Theory, or ART, there IS no problem of scaling, in the sense that you describe
 it below, from learning (say) to correctly recognizing 100 objects to doing the same for 200. In fact, one of the properties for which I introduced ART in 1976 was to enable incremental learning in real time of arbitrary numbers of objects or events without
 experiencing catastrophic forgetting.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">I have called this a solution of the
<b>stability-plasticity dilemma</b>: How our brains, and models like ART, can rapidly learn (plasticity) arbitrary numbers of objects without experiencing catastrophic forgetting (stability).</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">I also derived ART in 1980, in an article within Psychological Review, from a
<b>thought experiment</b> about how ANY system can AUTONOMOUSLY correct predictive errors in a changing world that is filled with unexpected events.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">This thought experiment was derived from a few facts that are familiar to us all. They are familiar because they represent ubiquitous environmental pressures that we all experience. The thought experiment clarifies why, when they act together during the evolutionary process, models like ART are the unique result.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Moreover, at no point during the thought experiment are the words mind or brain mentioned. ART is thus a <b>universal solution</b> of this learning, classification, and prediction problem.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389Signature">
<div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389divtagdefaultwrapper">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">You write below about "connectionist systems". ART is a connectionist system.</span></p>
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389divtagdefaultwrapper">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389divtagdefaultwrapper">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">What do you mean by a "connectionist system"? What you write below about them is not true in general.</span></p>
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389divtagdefaultwrapper">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389divtagdefaultwrapper">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Best,</span></p>
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389divtagdefaultwrapper">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389divtagdefaultwrapper">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Steve</span></p>
</div>
</div>
</div>
</div>
<div align="center" style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif; text-align:center">
<hr size="1" width="98%" align="center">
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389divRplyFwdMsg">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><b><span style="color:black">From:</span></b><span style="color:black"> Danko Nikolic <<a href="mailto:danko.nikolic@gmail.com" data-auth="NotApplicable">danko.nikolic@gmail.com</a>><br>
<b>Sent:</b> Thursday, July 14, 2022 12:16 PM<br>
<b>To:</b> Grossberg, Stephen <<a href="mailto:steve@bu.edu" data-auth="NotApplicable">steve@bu.edu</a>><br>
<b>Cc:</b> Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" data-auth="NotApplicable">gary.marcus@nyu.edu</a>>;
<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">
connectionists@mailman.srv.cs.cmu.edu</a> <<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">connectionists@mailman.srv.cs.cmu.edu</a>>; AIhub <<a href="mailto:aihuborg@gmail.com" data-auth="NotApplicable">aihuborg@gmail.com</a>><br>
<b>Subject:</b> Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton</span>
</p>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
</div>
<div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Dear Steve,</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Thank you very much for your message and for the greetings. I will pass them on if an occasion arises. </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Regarding your question: The key problem I am trying to address, and that, to the best of my knowledge, no connectionist system has been able to solve so far, is scaling the system's intelligence. For example, if the system is able to correctly recognize 100 different objects, how many additional resources are needed to double that to 200? All the empirical data show that connectionist systems scale poorly: some of the best systems we have require 500x more resources in order to increase the intelligence by only 2x. I document this problem in the manuscript and even run some simulations to show that the worst performance occurs when connectionist systems need to solve a generalized XOR problem. </p>
</div>
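<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">A "generalized XOR problem" is commonly read as n-bit parity (ordinary XOR for n = 2); the manuscript's exact formulation may differ, but a minimal sketch of the parity reading is:</p>

```python
from itertools import product

def generalized_xor(bits):
    """n-bit parity: output 1 iff an odd number of inputs is 1 (XOR for n = 2)."""
    return sum(bits) % 2

# Enumerate the full truth table for n inputs. Every one of the 2**n corners
# of the hypercube flips the label relative to each neighboring corner, which
# is why the problem resists smooth interpolation and is a classic hard case
# for neural networks.
n = 3
table = {bits: generalized_xor(bits) for bits in product((0, 1), repeat=n)}
print(table[(1, 0, 0)], table[(1, 1, 0)], table[(1, 1, 1)])  # 1 0 1
```

<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Because the label changes on every single-bit flip, no single linear threshold separates the classes for any n ≥ 2, and the number of decision regions a network must carve grows with 2^n.</p>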
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">In contrast, the biological brain scales well. This I also quantify in the paper.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I will look at the publication that you mentioned. However, so far, I haven't seen a solution that scales well in intelligence.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">My argument is that transient selection of subnetworks, with the help of the mentioned proteins, is how intelligence scaling is achieved in biological brains.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">In short, intelligence scaling is the key problem that concerns me. I describe the intelligence scaling problem in more detail in this book that just came out a few weeks ago and that is written for practitioners in Data Science and AI: <a href="https://urldefense.com/v3/__https:/amzn.to/3IBxUpL__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7cICfP-k$" data-auth="NotApplicable">https://amzn.to/3IBxUpL</a></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I hope that this at least partly answers where I see the problems and what I am trying to solve. </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Greetings from Germany,</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Danko</p>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><br clear="all">
</p>
<div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Dr. Danko Nikolić<br>
<a href="https://urldefense.com/v3/__http:/www.danko-nikolic.com__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7U2YyYEx$" data-auth="NotApplicable">www.danko-nikolic.com</a><br>
<a href="https://urldefense.com/v3/__https:/www.linkedin.com/in/danko-nikolic/__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7U7gOyI4$" data-auth="NotApplicable">https://www.linkedin.com/in/danko-nikolic/</a>
</p>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">--- Progress usually starts with an insight ---</p>
</div>
</div>
</div>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">On Thu, Jul 14, 2022 at 3:30 PM Grossberg, Stephen <<a href="mailto:steve@bu.edu" data-auth="NotApplicable">steve@bu.edu</a>> wrote:</p>
</div>
<blockquote style="border-top:none; border-right:none; border-bottom:none; border-left:1pt solid rgb(204,204,204); padding:0in 0in 0in 6pt; margin:5pt 0in 5pt 4.8pt">
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Dear Danko,</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">I have just read your new article and would like to comment briefly about it. </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">In your introductory remarks, you write:</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">"However, connectionism did not yet produce a satisfactory explanation of how the mental emerges from the physical. A number
 of open problems remains ( 5,6,7,8). As a result, the explanatory gap between the mind and the brain remains wide open."</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">I certainly believe that no theoretical explanation in science is ever complete. However, I also believe that "the
 explanatory gap between the mind and the brain" does not remain "wide open".</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">My Magnum Opus, that was published in 2021, makes that belief clear in its title:</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><b><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Conscious Mind, Resonant Brain: How Each Brain Makes a Mind</span></b></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"><a href="https://urldefense.com/v3/__https:/www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7R1LMQs4$" data-auth="NotApplicable">https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552</a></span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">The book provides a self-contained and non-technical exposition in a conversational tone of many principled and unifying
 explanations of psychological and neurobiological data.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">In particular, it explains roles for the metabotropic glutamate receptors that you mention in your own work. See the
 text and figures around p. 521. This explanation unifies psychological, anatomical, neurophysiological, biophysical, and biochemical data about the processes under discussion.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">I have a very old-fashioned view about how to understand scientific theories. I get excited by theories that explain
 and predict more data than previous theories.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Which of the data that I explain in my book, and support with quantitative computer simulations, can you also explain?</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">What data can you explain, in the same quantitative sense, that you do not think the neural models in my book can explain?</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">I would be delighted to discuss these issues further with you.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">If you are in touch with my old friend and esteemed colleague, Wolf Singer, please send him my warm regards. I cite
 the superb work that he and various of his collaborators have done in many places in my book.</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Best,</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Steve</span></p>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389x_gmail-m_-6350639325319810250Signature">
<div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389x_gmail-m_-6350639325319810250divtagdefaultwrapper">
<div>
<div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Stephen Grossberg</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"><a href="https://urldefense.com/v3/__http:/en.wikipedia.org/wiki/Stephen_Grossberg__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7WCfzm2O$" data-auth="NotApplicable"><span style="color:black">http://en.wikipedia.org/wiki/Stephen_Grossberg</span></a></span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"><a href="https://urldefense.com/v3/__http:/scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7STP-049$" data-auth="NotApplicable"><span style="color:black">http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en</span></a></span></p>
</div>
</div>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"><a href="https://urldefense.com/v3/__https:/youtu.be/9n5AnvFur7I__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7cSUm13c$" data-auth="NotApplicable"><span style="color:black">https://youtu.be/9n5AnvFur7I</span></a></span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"><a href="https://urldefense.com/v3/__https:/www.youtube.com/watch?v=_hBye6JQCh4__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7f3rZv1z$" data-auth="NotApplicable"><span style="font-family:"Times New Roman",serif; color:black">https://www.youtube.com/watch?v=_hBye6JQCh4</span></a></span></p>
</div>
<div>
<p style="margin:0in 0in 12pt; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"><a href="https://urldefense.com/v3/__https:/www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7R1LMQs4$" data-auth="NotApplicable">https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552</a></span></p>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"><br>
Wang Professor of Cognitive and Neural Systems</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black; background:white">Director, Center for Adaptive Systems</span><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"><br>
Professor Emeritus of Mathematics & Statistics, </span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">       Psychological & Brain Sciences, and Biomedical Engineering</span></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black">Boston University<br>
<a href="https://urldefense.com/v3/__http:/sites.bu.edu/steveg__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7UGZZw49$" data-auth="NotApplicable">sites.bu.edu/steveg</a><br>
<a href="mailto:steve@bu.edu" data-auth="NotApplicable">steve@bu.edu</a></span></p>
</div>
</div>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><span style="font-size:18pt; font-family:Arial,sans-serif; color:black"> </span></p>
</div>
</div>
</div>
</div>
<div align="center" style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif; text-align:center">
<hr size="1" width="98%" align="center">
</div>
<div id="x_m_661575741301620185m_4572882607435561638gmail-m_2938935130873181132x_gmail-m_1424330336660607890gmail-m_-1059863988750999534x_gmail-m_5352872017708287389x_gmail-m_-6350639325319810250divRplyFwdMsg">
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><b><span style="color:black">From:</span></b><span style="color:black"> Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
 on behalf of Danko Nikolic <<a href="mailto:danko.nikolic@gmail.com" data-auth="NotApplicable">danko.nikolic@gmail.com</a>><br>
<b>Sent:</b> Thursday, July 14, 2022 6:05 AM<br>
<b>To:</b> Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" data-auth="NotApplicable">gary.marcus@nyu.edu</a>><br>
<b>Cc:</b> <a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">
connectionists@mailman.srv.cs.cmu.edu</a> <<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">connectionists@mailman.srv.cs.cmu.edu</a>>; AIhub <<a href="mailto:aihuborg@gmail.com" data-auth="NotApplicable">aihuborg@gmail.com</a>><br>
<b>Subject:</b> Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton</span>
</p>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
</div>
<div>
<div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Dear Gary and everyone,</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I am continuing the discussion from where we left off a few months ago. Back then, some of us agreed that the problem of understanding remains unsolved.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">As a reminder, the challenge for connectionism was to 1) learn with few examples and 2) apply the knowledge to a broad set of situations.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I am happy to announce that I have now finished a draft of a paper in which I propose how the brain is able to achieve that. The manuscript requires a bit of patience for two reasons: first, the reader may be encountering certain aspects of brain physiology for the first time; second, it may take some effort to grasp the counterintuitive implications of the new ideas (this requires a different way of thinking than the one we are used to from connectionism).</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">In short, I am suggesting that instead of the connectionist paradigm, we adopt transient selection of subnetworks. The mechanisms that transiently select brain subnetworks are distributed
 all over the nervous system and, I argue, are our main machinery for thinking/cognition. The surprising outcome is that neural activation, which was central in connectionism, now plays only a supportive role, while the real 'workers' within the brain are the
 mechanisms for transient selection of subnetworks.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I also explain how I think transient selection achieves learning with only a few examples and how the learned knowledge can be applied to a broad set of situations.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">The manuscript is made available to everyone and can be downloaded here: <a href="https://urldefense.com/v3/__https:/bit.ly/3IFs8Ug__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7ZXZrzDK$" data-auth="NotApplicable">https://bit.ly/3IFs8Ug</a></p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">(I apologize for the neuroscience lingo, which I tried to minimize.)</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">It will likely take a wide effort to implement these concepts as an AI technology, provided my ideas do not have a major flaw in the first place. Does anyone see a flaw?</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Thanks.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Danko</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><br clear="all">
</p>
<div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Dr. Danko Nikolić<br>
<a href="https://urldefense.com/v3/__http:/www.danko-nikolic.com__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7U2YyYEx$" data-auth="NotApplicable">www.danko-nikolic.com</a><br>
<a href="https://urldefense.com/v3/__https:/www.linkedin.com/in/danko-nikolic/__;!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7U7gOyI4$" data-auth="NotApplicable">https://www.linkedin.com/in/danko-nikolic/</a>
</p>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
</div>
</div>
</div>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">On Thu, Feb 3, 2022 at 5:25 PM Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" data-auth="NotApplicable">gary.marcus@nyu.edu</a>> wrote:</p>
</div>
<blockquote style="border-top:none; border-right:none; border-bottom:none; border-left:1pt solid rgb(204,204,204); padding:0in 0in 0in 6pt; margin:5pt 0in 5pt 4.8pt">
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Dear Danko,
</p>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Well said. I had a somewhat similar response to Jeff Dean’s 2021 TED talk, in which he said (paraphrasing from memory, because I don’t remember the precise words) that the famous 2012 Quoc
 Le unsupervised model [<a href="https://urldefense.com/v3/__https:/static.googleusercontent.com/media/research.google.com/en/*archive/unsupervised_icml2012.pdf__;Lw!!IKRxdwAv5BmarQ!cM7OelbQmo7kSv-FbCDhn2SShsIA-odskZef8LwaywtWM6F_br-sLT6LQOyTXJlN69MI7er14eUz$" data-auth="NotApplicable">https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf</a>]
 had learned the concept of a cat. In reality the model had clustered together some catlike images based on the image statistics that it had extracted, but it was a long way from a full, counterfactual-supporting concept of a cat, much as you describe below. </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I fully agree with you that the reason for even having a semantics is as you put it, "to 1) learn with a few examples and 2) apply the knowledge to a broad set of situations.” GPT-3 sometimes
 gives the appearance of having done so, but it falls apart under close inspection, so the problem remains unsolved.</p>
</div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Gary</p>
<div>
<p style="margin:0in 0in 12pt; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<blockquote style="margin-top:5pt; margin-bottom:5pt">
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">On Feb 3, 2022, at 3:19 AM, Danko Nikolic <<a href="mailto:danko.nikolic@gmail.com" data-auth="NotApplicable">danko.nikolic@gmail.com</a>> wrote:</p>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">G. Hinton wrote: "I believe that any reasonable person would admit that if you ask a neural net to draw a picture of a hamster wearing a red hat and it draws such a picture, it understood the
 request." </p>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I would like to suggest why drawing a hamster with a red hat does not necessarily imply understanding of the statement "hamster wearing a red hat".</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">To understand "hamster wearing a red hat" would mean being able to infer, in newly emerging situations involving this hamster, all the real-life implications that the red hat brings to the little animal.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">What would happen to the hat if the hamster rolls on its back? (Would the hat fall off?)</p>
</div>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">What would happen to the red hat when the hamster enters its lair? (Would the hat fall off?)</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">What would happen to that hamster when it goes foraging? (Would the red hat have an influence on finding food?)</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">What would happen in a situation of being chased by a predator? (Would it be easier for predators to spot the hamster?)</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">...and so on.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Countless questions can be asked. One has understood "hamster wearing a red hat" only if one can answer reasonably well many such real-life relevant questions. Similarly, a student
 has understood the material in a class only if they can apply it in real-life situations (e.g., applying Pythagoras's theorem). If a student gives a correct answer to a multiple-choice question, we don't know whether the student understood the material
 or whether this was just rote learning (often, it is rote learning). </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I also suggest that understanding comes together with effective learning: we store new information in such a way that we can recall it later and use it effectively, i.e., make good inferences
 in newly emerging situations based on this knowledge.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">In short: Understanding makes us humans able to 1) learn with a few examples and 2) apply the knowledge to a broad set of situations. </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">No neural network today has such capabilities, and we don't know how to endow them with such capabilities. Neural networks need large numbers of training examples covering a wide variety of situations,
 and even then they can only deal with what the training examples have already covered. Neural networks cannot extrapolate in that 'understanding' sense.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">I suggest that understanding truly extrapolates from a piece of knowledge. It is not about satisfying a task such as translation between languages or drawing hamsters with hats. It is about how
 you acquired the capability to complete the task: did you have only a few examples covering something different but related, and then extrapolate from that knowledge? If so, this points in the direction of understanding. Did you see countless examples
 and then interpolate among them? Then perhaps it is not understanding.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">So, for the case of drawing a hamster wearing a red hat, understanding perhaps would have taken place if the following happened before that:</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">1) first, the network learned about hamsters (not many examples)</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">2) after that the network learned about red hats (outside the context of hamsters and without many examples) </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">3) finally the network learned about drawing (outside of the context of hats and hamsters, not many examples)</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">After that, the network is asked to draw a hamster with a red hat. If it does it successfully, maybe we have started cracking the problem of understanding.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Note also that this requires the network to learn sequentially without exhibiting catastrophic forgetting of the previous knowledge, which is possibly also a consequence of human learning
 by understanding.</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Danko</p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
<div>
<div>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">Dr. Danko Nikolić<br>
<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__www.danko-2Dnikolic.com&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=HwOLDw6UCRzU5-FPSceKjtpNm7C6sZQU5kuGAMVbPaI&e=" data-auth="NotApplicable">www.danko-nikolic.com</a><br>
<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.linkedin.com_in_danko-2Dnikolic_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=b70c8lokmxM3Kz66OfMIM4pROgAhTJOAlp205vOmCQ8&e=" data-auth="NotApplicable">https://www.linkedin.com/in/danko-nikolic/</a>
</p>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">--- A progress usually starts with an insight ---</p>
</div>
</div>
</div>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
</div>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<div>
<div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif">On Thu, Feb 3, 2022 at 9:55 AM Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" data-auth="NotApplicable">ASIM.ROY@asu.edu</a>> wrote:</p>
</div>
<blockquote style="border-top:none; border-right:none; border-bottom:none; border-left:1pt solid rgb(204,204,204); padding:0in 0in 0in 6pt; margin:5pt 0in 5pt 4.8pt">
<div>
<div>
<p>Without getting into the specific dispute between Gary and Geoff, I think with approaches similar to GLOM, we are finally headed in the right direction. There’s plenty of neurophysiological evidence for single-cell abstractions and multisensory neurons in
 the brain, which one might claim correspond to symbols. And I think we can finally reconcile the decades old dispute between Symbolic AI and Connectionism.</p>
<p> </p>
<p><span style="color:black; background:yellow">GARY: (Your GLOM, which as you know I praised publicly, is in many ways an effort to wind up with encodings that effectively serve as symbols in exactly that way, guaranteed to serve as consistent representations
 of specific concepts.)</span></p>
<p><span style="color:black; background:yellow">GARY: I have <i>never</i> called for dismissal of neural networks, but rather for some hybrid between the two (as you yourself contemplated in 1991); the point of the 2001 book was to characterize exactly where
 multilayer perceptrons succeeded and broke down, and where symbols could complement them.</span></p>
<p> </p>
<p>Asim Roy</p>
<p>Professor, Information Systems</p>
<p>Arizona State University</p>
<p><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=" data-auth="NotApplicable">Lifeboat
 Foundation Bios: Professor Asim Roy</a></p>
<p><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=" data-auth="NotApplicable">Asim
 Roy | iSearch (asu.edu)</a></p>
<p> </p>
<p> </p>
<div>
<div style="border-right:none; border-bottom:none; border-left:none; border-top:1pt solid rgb(225,225,225); padding:3pt 0in 0in">
<p><b>From:</b> Connectionists <<a href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
<b>On Behalf Of </b>Gary Marcus<br>
<b>Sent:</b> Wednesday, February 2, 2022 1:26 PM<br>
<b>To:</b> Geoffrey Hinton <<a href="mailto:geoffrey.hinton@gmail.com" data-auth="NotApplicable">geoffrey.hinton@gmail.com</a>><br>
<b>Cc:</b> AIhub <<a href="mailto:aihuborg@gmail.com" data-auth="NotApplicable">aihuborg@gmail.com</a>>;
<a href="mailto:connectionists@mailman.srv.cs.cmu.edu" data-auth="NotApplicable">
connectionists@mailman.srv.cs.cmu.edu</a><br>
<b>Subject:</b> Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton</p>
</div>
</div>
<p> </p>
<div>
<div>
<p>Dear Geoff, and interested others,</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>What, for example, would you make of a system that often drew the red-hatted hamster you requested, and perhaps a fifth of the time gave you utter nonsense?  Or say one that you trained to create birds but sometimes output stuff like this:</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>[image: image001.png]</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>One could </p>
</div>
<div>
<p> </p>
</div>
<div>
<p>a. avert one’s eyes and deem the anomalous outputs irrelevant</p>
</div>
<div>
<p>or</p>
</div>
<div>
<p>b. wonder if it might be possible that sometimes the system gets the right answer for the wrong reasons (eg partial historical contingency), and wonder whether another approach might be indicated.</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Benchmarks are harder than they look; most of the field has come to recognize that. The Turing Test has turned out to be a lousy measure of intelligence, easily gamed. It has turned out empirically that the Winograd Schema Challenge did not measure common
 sense as well as Hector might have thought. (As it happens, I am a minor coauthor of a very recent review on this very topic: <a href="https://urldefense.com/v3/__https:/arxiv.org/abs/2201.02387__;!!IKRxdwAv5BmarQ!INA0AMmG3iD1B8MDtLfjWCwcBjxO-e-eM2Ci9KEO_XYOiIEgiywK-G_8j6L3bHA$" data-auth="NotApplicable">https://arxiv.org/abs/2201.02387</a>)
 But its conquest in no way means machines now have common sense; many people from many different perspectives recognize that (including, e.g., Yann LeCun, who generally tends to be more aligned with you than with me).</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>So: on the goalpost of the Winograd schema, I was wrong, and you can quote me; but what you said about me and machine translation remains your invention, and it is inexcusable that you simply ignored my 2019 clarification. On the essential goal of trying
 to reach meaning and understanding, I remain unmoved; the problem remains unsolved. </p>
</div>
<div>
<p> </p>
</div>
<div>
<p>All of the problems LLMs have with coherence, reliability, truthfulness, misinformation, etc. stand witness to that fact. (Their persistent inability to filter out toxic and insulting remarks stems from the same.) I am hardly the only person in the field
 to see that progress on any given benchmark does not inherently mean that the deep underlying problems have been solved. You, yourself, in fact, have occasionally made that point. </p>
</div>
<div>
<p> </p>
</div>
<div>
<p>With respect to embeddings: Embeddings are very good for natural language <i>processing</i>; but NLP is not the same as NL<i>U</i> – when it comes to
<i>understanding</i>, their worth is still an open question. Perhaps they will turn out to be necessary; they clearly aren’t sufficient. In their extreme, they might even collapse into being symbols, in the sense of uniquely identifiable encodings, akin to
 the ASCII code, in which a specific set of numbers stands for a specific word or concept. (Wouldn’t that be ironic?)</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>(Your GLOM, which as you know I praised publicly, is in many ways an effort to wind up with encodings that effectively serve as symbols in exactly that way, guaranteed to serve as consistent representations of specific concepts.)</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Notably absent from your email is any kind of apology for misrepresenting my position. It’s one thing to say that “many people thirty years ago once thought X” and another to say “Gary Marcus said X in 2015”, when I didn’t. I have consistently felt throughout
 our interactions that you have mistaken me for Zenon Pylyshyn; indeed, you once (at NeurIPS 2014) apologized to me for having made that error. I am still not he. </p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Which maybe connects to the last point; if you read my work, you would see thirty years of arguments
<i>for</i> neural networks, just not in the way that you want them to exist. I have ALWAYS argued that there is a role for them;  characterizing me as a person “strongly opposed to neural networks” misses the whole point of my 2001 book, which was subtitled
 “Integrating Connectionism and Cognitive Science.”</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>In the last two decades or so you have insisted (for reasons you have never fully clarified, so far as I know) on abandoning symbol-manipulation, but the reverse is not the case: I have
<i>never</i> called for dismissal of neural networks, but rather for some hybrid between the two (as you yourself contemplated in 1991); the point of the 2001 book was to characterize exactly where multilayer perceptrons succeeded and broke down, and where
 symbols could complement them. It’s a rhetorical trick (which is what the previous thread was about) to pretend otherwise.</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Gary</p>
</div>
<div>
<p> </p>
</div>
<div>
<p> </p>
</div>
<div>
<blockquote style="margin-top:5pt; margin-bottom:5pt">
<p style="margin-bottom:12pt">On Feb 2, 2022, at 11:22, Geoffrey Hinton <<a href="mailto:geoffrey.hinton@gmail.com" data-auth="NotApplicable">geoffrey.hinton@gmail.com</a>> wrote:</p>
</blockquote>
</div>
<blockquote style="margin-top:5pt; margin-bottom:5pt">
<div>
<p></p>
<div>
<p>Embeddings are just vectors of soft feature detectors and they are very good for NLP.  The quote on my webpage from Gary's 2015 chapter implies the opposite.</p>
<div>
<p> </p>
</div>
<div>
<p>A few decades ago, everyone I knew then would have agreed that the ability to translate a sentence into many different languages was strong evidence that you understood it.</p>
</div>
</div>
</div>
</blockquote>
<p style="margin-bottom:12pt"> </p>
<blockquote style="margin-top:5pt; margin-bottom:5pt">
<div>
<div>
<div>
<p>But once neural networks could do that, their critics moved the goalposts. An exception is Hector Levesque who defined the goalposts more sharply by saying that the ability to get pronoun references correct in Winograd sentences is a crucial test. Neural
 nets are improving at that but still have some way to go. Will Gary agree that when they can get pronoun references correct in Winograd sentences they really do understand? Or does he want to reserve the right to weasel out of that too?</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Some people, like Gary, appear to be strongly opposed to neural networks because they do not fit their preconceived notions of how the mind should work.</p>
</div>
<div>
<div>
<p>I believe that any reasonable person would admit that if you ask a neural net to draw a picture of a hamster wearing a red hat and it draws such a picture, it understood the request.</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>Geoff</p>
</div>
<div>
<p> </p>
</div>
<div>
<p> </p>
</div>
<div>
<p> </p>
<div>
<p> </p>
</div>
</div>
</div>
</div>
<p> </p>
<div>
<div>
<p>On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <<a href="mailto:gary.marcus@nyu.edu" data-auth="NotApplicable">gary.marcus@nyu.edu</a>> wrote:</p>
</div>
<blockquote style="border-top:none; border-right:none; border-bottom:none; border-left:1pt solid rgb(204,204,204); padding:0in 0in 0in 6pt; margin:5pt 0in 5pt 4.8pt">
<div>
<div>
<div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Dear AI Hub, cc: Steven Hanson and Geoffrey Hinton, and the larger neural network community,</span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">There has been a lot of recent discussion on this list about framing and scientific integrity. Often the first step in restructuring narratives is to bully and dehumanize critics. The second
 is to misrepresent their position. People in positions of power are sometimes tempted to do this.</span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">The Hinton-Hanson interview that you just published is a real-time example of just that. It opens with a needless and largely content-free personal attack on a single scholar (me), with the
 explicit intention of discrediting that person. Worse, the only substantive thing it says is false.</span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Hinton says “In 2015 he [Marcus] made a prediction that computers wouldn’t be able to do machine translation.”</span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">I never said any such thing. </span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">What I predicted, rather, was that multilayer perceptrons, as they existed then, would not (on their own, absent other mechanisms) <i>understand</i> language. Seven years later, they still
 haven’t, except in the most superficial way.   </span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">I made no comment whatsoever about machine translation, which I view as a separate problem, solvable to a certain degree by correspondence without semantics. </span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">I specifically tried to clarify Hinton’s confusion in 2019, but, disappointingly, he has continued to purvey misinformation despite that clarification. Here is what I wrote privately to him
 then, which should have put the matter to rest:</span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div style="margin-left:27pt; font-stretch:normal">
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">You have taken a single out of context quote [from 2015] and misrepresented it. The quote, which you have prominently displayed at the bottom of your own web page, says:</span></p>
</div>
<div style="margin-left:27pt; font-stretch:normal; min-height:22.9px">
<p><span style="font-size:13pt"> </span></p>
</div>
<div style="margin-left:0.75in; font-stretch:normal">
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Hierarchies of features are less suited to challenges such as language, inference, and high-level planning. For example, as Noam Chomsky famously pointed out, language is filled with sentences
 you haven't seen before. Pure classifier systems don't know what to do with such sentences. The talent of feature detectors -- in  identifying which member of some category something belongs to -- doesn't translate into understanding novel  sentences, in which
 each sentence has its own unique meaning. </span></p>
</div>
<div style="margin-left:27pt; font-stretch:normal; min-height:22.9px">
<p><span style="font-size:13pt"> </span></p>
</div>
<div style="margin-left:27pt; font-stretch:normal">
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">It does <i>not</i> say "neural nets would not be able to deal with novel sentences"; it says that hierarchies of feature detectors (on their own, if you read the context of the essay) would have trouble <i>understanding </i>novel sentences.  </span></p>
</div>
<div style="margin-left:27pt; font-stretch:normal; min-height:22.9px">
<p><span style="font-size:13pt"> </span></p>
</div>
<div style="margin-left:27pt; font-stretch:normal">
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Google Translate does not yet <i>understand</i> the content of the sentences it translates. It cannot reliably answer questions about who did what to whom, or why; it cannot infer the order of the events in paragraphs; it cannot determine the internal consistency of those events; and so forth.</span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Since then, a number of scholars, such as the computational linguist Emily Bender, have made similar points, and indeed current LLM difficulties with misinformation, incoherence and fabrication
 all follow from these concerns. Quoting from Bender’s prizewinning 2020 ACL article on the matter with Alexander Koller, <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aclanthology.org_2020.acl-2Dmain.463.pdf&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=xnFSVUARkfmiXtiTP_uXfFKv4uNEGgEeTluRFR7dnUpay2BM5EiLz-XYCkBNJLlL&s=K-Vl6vSvzuYtRMi-s4j7mzPkNRTb-I6Zmf7rbuKEBpk&e=" data-auth="NotApplicable">https://aclanthology.org/2020.acl-main.463.pdf</a>,
 also emphasizing issues of understanding and meaning:</span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div style="margin-left:27pt; font-stretch:normal">
<p><i><span style="font-size:13pt; font-family:"Times New Roman",serif">The success of the large neural language models on many NLP tasks is exciting. However, we find that these successes sometimes lead to hype in which these models are being described as
 “understanding” language or capturing “meaning”. In this position paper, we argue that a system trained only on form has a priori no way to learn meaning. … a clear understanding of the distinction between form and meaning will help guide the field towards
 better science around natural language understanding. </span></i></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Her later article with Gebru on language models as “stochastic parrots” is in some ways an extension of this point; machine translation requires mimicry, but true understanding (which is what I was discussing in 2015) requires something deeper than that. </span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Hinton’s intellectual error here is in equating machine translation with the deeper comprehension that robust natural language understanding will require; as Bender and Koller observed, the
 two appear not to be the same. (There is a longer discussion of the relation between language understanding and machine translation, and why the latter has turned out to be more approachable than the former, in my 2019 book with Ernest Davis).</span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">More broadly, Hinton’s ongoing dismissiveness of research from perspectives other than his own (e.g. linguistics) has done the field a disservice. </span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">As Herb Simon once observed, science does not have to be zero-sum.</span></p>
</div>
<div>
<p><span style="font-size:13pt"> </span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Sincerely,</span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Gary Marcus</span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">Professor Emeritus</span></p>
</div>
<div>
<p><span style="font-size:13pt; font-family:"Times New Roman",serif">New York University</span></p>
</div>
</div>
<div>
<p style="margin-bottom:12pt"> </p>
<blockquote style="margin-top:5pt; margin-bottom:5pt">
<p style="margin-bottom:12pt">On Feb 2, 2022, at 06:12, AIhub <<a href="mailto:aihuborg@gmail.com" data-auth="NotApplicable">aihuborg@gmail.com</a>> wrote:</p>
</blockquote>
</div>
<blockquote style="margin-top:5pt; margin-bottom:5pt">
<div>
<p></p>
<div>
<div>
<p>Stephen Hanson in conversation with Geoff Hinton</p>
</div>
<div>
<p> </p>
</div>
<div>
<p>In the latest episode of this video series for <a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__AIhub.org&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=xnFSVUARkfmiXtiTP_uXfFKv4uNEGgEeTluRFR7dnUpay2BM5EiLz-XYCkBNJLlL&s=eOtzMh8ILIH5EF7K20Ks4Fr27XfNV_F24bkj-SPk-2A&e=" data-auth="NotApplicable">
AIhub.org</a>, Stephen Hanson talks to  Geoff Hinton about neural networks, backpropagation, overparameterization, digit recognition, voxel cells, syntax and semantics, Winograd sentences, and more.</p>
<div>
<p> </p>
</div>
<div>
<p>You can watch the discussion, and read the transcript, here:<br clear="all">
</p>
<div>
<p><a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aihub.org_2022_02_02_what-2Dis-2Dai-2Dstephen-2Dhanson-2Din-2Dconversation-2Dwith-2Dgeoff-2Dhinton_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=OY_RYGrfxOqV7XeNJDHuzE--aEtmNRaEyQ0VJkqFCWw&e=" data-auth="NotApplicable">https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/</a></p>
</div>
<div>
<p> </p>
</div>
<div>
<p><span style="font-family:Arial,sans-serif">About AIhub: </span></p>
</div>
<div>
<p><span style="font-family:Arial,sans-serif">AIhub is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information through
<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__AIhub.org&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=xnFSVUARkfmiXtiTP_uXfFKv4uNEGgEeTluRFR7dnUpay2BM5EiLz-XYCkBNJLlL&s=eOtzMh8ILIH5EF7K20Ks4Fr27XfNV_F24bkj-SPk-2A&e=" data-auth="NotApplicable">
AIhub.org</a> (<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aihub.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=IKFanqeMi73gOiS7yD-X_vRx_OqDAwv1Il5psrxnhIA&e=" data-auth="NotApplicable">https://aihub.org/</a>).
 We help researchers publish the latest AI news, summaries of their work, opinion pieces, tutorials and more.  We are supported by many leading scientific organizations in AI, namely
<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aaai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=wBvjOWTzEkbfFAGNj9wOaiJlXMODmHNcoWO5JYHugS0&e=" data-auth="NotApplicable">
AAAI</a>, <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__neurips.cc_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=3-lOHXyu8171pT_UE9hYWwK6ft4I-cvYkuX7shC00w0&e=" data-auth="NotApplicable">
NeurIPS</a>, <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__icml.cc_imls_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=JJyjwIpPy9gtKrZzBMbW3sRMh3P3Kcw-SvtxG35EiP0&e=" data-auth="NotApplicable">
ICML</a>, <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=" data-auth="NotApplicable">
AIJ</a>/<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=" data-auth="NotApplicable">IJCAI</a>,
<a href="https://urldefense.proofpoint.com/v2/url?u=http-3A__sigai.acm.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=7rC6MJFaMqOms10EYDQwfnmX-zuVNhu9fz8cwUwiLGQ&e=" data-auth="NotApplicable">
ACM SIGAI</a>, EurAI/AICOMM, <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__claire-2Dai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=66ZofDIhuDba6Fb0LhlMGD3XbBhU7ez7dc3HD5-pXec&e=" data-auth="NotApplicable">
CLAIRE</a> and <a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.robocup.org__&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=bBI6GRq--MHLpIIahwoVN8iyXXc7JAeH3kegNKcFJc0&e=" data-auth="NotApplicable">
RoboCup</a>.</span></p>
</div>
<div>
<p><span style="font-family:Arial,sans-serif">Twitter: @aihuborg</span></p>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"> </p>
<p style="margin:0in; font-size:11pt; font-family:Calibri,sans-serif"><br>
</p>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</body>
</html>