<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#ecca99">
<p><font size="+1">Gary,<br>
</font></p>
<p><font size="+1">You're minimizing my example, and frankly, I
don't believe DL models will "characterize some aspects of
psychology reasonably well"... and before getting to your
Netflix bingeing: one of the serious problems with DL models,
say when comparing them to neural recordings using correlation
matrices, is that there seem to be some correspondences to be
made, but sometimes little more than a Rorschach test. In
general, I think it will be hard to show good correspondence to
decision processing, episodic memory, compound stimulus
conditioning, and various perceptual illusions and
transformations. The DL focus on classification and translation
has created models very unlikely to easily model cognitive and
perceptual phenomena. Models like Grossberg's are curated to
account for specific effects, and over a lifetime they have done
a better job of making sense of psychological/neural phenomena
than any other neural models I know about; whether one
subscribes to the details of the modeling is another issue. <br>
</font></p>
<p><font size="+1">So in the perceptual/cognition abstraction task I
discussed, it is gobsmacking that JUST ADDING LAYERS solves this
really critical failure of backpropagation, a failure barely noted by
most of the neural network community, which is focused on better
benchmarks.</font></p>
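To make the contrast behind that task concrete in miniature (this is only a toy illustration of why condensation-style tasks demand conjoint use of features, not the simulation from the paper): with signed binary features, a filtration-style label depends on one feature and is linearly separable, while a condensation-style label that depends jointly on two features (here, XOR of their signs) is not, so a single-layer perceptron masters the first and can never master the second.

```python
# Toy illustration: filtration (one relevant feature) is linearly
# separable; a condensation-style task whose label depends jointly on
# two features (XOR of their signs) is not, so a single-layer
# perceptron learns the first perfectly and never the second.

STIMULI = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def filtration_label(x):
    # Attend to feature 1 alone; ignore the other feature.
    return 1 if x[0] > 0 else -1

def condensation_label(x):
    # Requires both features at once: +1 iff the two signs agree.
    return 1 if x[0] * x[1] > 0 else -1

def train_perceptron(label, epochs=50):
    # Classic perceptron rule: update weights on any misclassified item.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x in STIMULI:
            y = label(x)
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:  # misclassified
                w[0] += y * x[0]
                w[1] += y * x[1]
                b += y
    return w, b

def accuracy(label, w, b):
    hits = sum(
        label(x) * (w[0] * x[0] + w[1] * x[1] + b) > 0 for x in STIMULI
    )
    return hits / len(STIMULI)

w, b = train_perceptron(filtration_label)
acc_filt = accuracy(filtration_label, w, b)    # converges to 1.0

w, b = train_perceptron(condensation_label)
acc_cond = accuracy(condensation_label, w, b)  # stuck below 1.0
```

Adding a hidden layer is exactly what lifts this limitation: a network with a nonlinear hidden layer can represent the XOR-like condensation mapping that no single-layer solution can.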
<p><font size="+1">As to Netflix titles: I agree that cognitive models
should be adaptable and responsive to updates that create more
predictive outcomes for the agent using them. This in no way
means the cognitive model must be symbolic or rule-based. That
was true in the 1980s and is perhaps truer today.</font></p>
<p><font size="+1">This is clearly a critical question about GPT
models: what are the cognitive models they are building, or are
they just high-dimensional, phrase-structure blobs that do
similarity analysis and return a nearby phrase-structure
response that happens to sound good?<br>
</font></p>
<p><font size="+1">Steve<br>
</font></p>
<div class="moz-cite-prefix">On 2/7/22 5:07 PM, Gary Marcus wrote:<br>
</div>
<blockquote type="cite"
cite="mid:BB0190E0-193B-4AFC-9AE1-A7C0961E8D60@nyu.edu">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
Stephen,
<div class=""><br class="">
</div>
<div class="">I don’t doubt for a minute that deep learning can
characterize some aspects of psychology reasonably well; but
either it needs to expand its borders or else be used in
conjunction with other techniques. Take for example the name of
the new Netflix show</div>
<div class=""><br class="">
</div>
<div class=""><span style="margin: 0px; padding: 0px; border: 0px;
font-family: "Open Sans", sans-serif; font-style:
italic; font-stretch: inherit; line-height: inherit;
font-size: 16px; vertical-align: baseline; caret-color:
rgb(51, 51, 51); color: rgb(51, 51, 51);" class="">The Woman
in the House Across the Street from the Girl in the Window</span></div>
<div class=""><span style="margin: 0px; padding: 0px; border: 0px;
font-family: "Open Sans", sans-serif; font-style:
italic; font-stretch: inherit; line-height: inherit;
font-size: 16px; vertical-align: baseline; caret-color:
rgb(51, 51, 51); color: rgb(51, 51, 51);" class=""><br
class="">
</span></div>
<div class="">Most of us can infer, compositionally, from that
unusually long noun phrase, that the title is a description of
a particular person, that the title is not a complete sentence,
and that the woman in question lives in a house; we also infer
that there is a second, distinct person (likely a child) across
the street, and so forth. We can also use some knowledge of
pragmatics to infer that the woman in question is likely to be
the protagonist in the show. Current systems still struggle with
that sort of thing. </div>
<div class=""><br class="">
</div>
<div class="">We can then watch the show (I watched a few minutes
of Episode 1) and quickly relate the title to the protagonist’s
mental state, start to develop a mental model of the
protagonist’s relation to her new neighbors, make inferences
about whether certain choices appear to be “within character”,
empathize with the character or question her judgements, etc., all
with respect to a mental model that is rapidly encoded and
quickly modified.</div>
<div class=""><br class="">
</div>
<div class="">I think that an understanding of how people build
and modify such models would be extremely valuable (not just for
fiction but for everyday reality), but I don’t see how deep learning
in its current form gives us much purchase on that. There is
plenty of precedent for the kind of mental processes I am
sketching (e.g., Walter Kintsch’s work on text comprehension;
Kamp/Kratzer/Heim work on discourse representation, etc) from
psychological and linguistic perspectives, but almost no current
contact in the neural network community with these well-attested
psychological processes. </div>
<div class=""><br class="">
</div>
<div class="">Gary<br class="">
<div><br class="">
<blockquote type="cite" class="">
<div class="">On Feb 7, 2022, at 6:01 AM, Stephen José
Hanson <<a href="mailto:jose@rubic.rutgers.edu"
class="" moz-do-not-send="true">jose@rubic.rutgers.edu</a>>
wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<meta http-equiv="Content-Type" content="text/html;
charset=UTF-8" class="">
<div text="#000000" bgcolor="#ecca99" class="">
<p class=""><font class="" size="+1">Gary,</font></p>
<p class=""><font class="" size="+1">This is one of the
first posts of yours, that I can categorically agree
with!</font></p>
<p class=""><font class="" size="+1">I think cognitive
models could be built through *some* training regime, or
focused sampling, or architecture, or something of that
sort, but not explicitly, for example. <br class="">
</font></p>
<p class=""><font class="" size="+1">The other
fundamental cognitive/perceptual capability in this
context is the ability of Neural Networks to do what
Shepard (1970; Garner, 1970s) had modeled as
perceptually separable processing (finding parts) and
perceptually integral processing (finding covariance and
structure).</font></p>
<p class=""><font class="" size="+1">Shepard argued
these fundamental perceptual processes were
dependent on development and learning. <br
class="">
</font></p>
<p class=""><font class="" size="+1">A task was created
with a double dissociation of a categorization
problem. In one case, separable stimuli (in effect,
uncorrelated features in the stimulus) were presented
in a categorization task that required you to pay
attention to at least two features at the same time to
categorize correctly ("condensation"). In the other
case, integral stimuli (in effect, correlated features
in the stimulus) were presented in a categorization
task that required you to ignore the correlation and
categorize on one feature at a time ("filtration").
The result was that separable stimuli were learned
more quickly in filtration tasks than integral stimuli
in condensation tasks. Non-intuitively, separable
stimuli are learned more slowly in condensation tasks
than integral stimuli are, and more slowly than in
filtration tasks. In other words, attention to feature
structure could cause either improvement in learning
or interference. Not that surprising.. however--<br
class="">
</font></p>
<p class=""><font class="" size="+1">In the 1980s,
neural networks with single layers (backprop) *could
not* replicate this simple result, indicating that the
cognitive model was somehow inadequate. Backprop
simply learned ALL task/stimulus pairings at the same
rate, ignoring the subtle but critical difference. It
failed.<br class="">
</font></p>
<p class=""><font class="" size="+1">Recently we
(<a class="moz-txt-link-freetext"
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.frontiersin.org_articles_10.3389_fpsyg.2018.00374_full-3F-26utm-5Fsource-3DEmail-5Fto-5Fauthors-5F-26utm-5Fmedium-3DEmail-26utm-5Fcontent-3DT1-5F11.5e1-5Fauthor-26utm-5Fcampaign-3DEmail-5Fpublication-26field-3D-26journalName-3DFrontiers-5Fin-5FPsychology-26id-3D284733&d=DwMDaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=UoNnjRqVBL_CqkrYum3TEgMe4-81VubghRckfDUNtQ8tcpm40eCYjC7e9nor929C&s=U5bZxiHTjY26KZ_tX3LaoOk3ok6HI2wTaC5QOUqGbqc&e="
moz-do-not-send="true">https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00374/full?&utm_source=Email_to_authors_&utm_medium=Email&utm_content=T1_11.5e1_author&utm_campaign=Email_publication&field=&journalName=Frontiers_in_Psychology&id=284733</a>)
were able to show that JUST BY ADDING LAYERS the DL
model matches human performance.</font></p>
<p class=""><font class="" size="+1">What are the layers
doing? We offer a possible explanation that needs
testing. Layers appear to create a type of buffer
that allows the network to "curate" feature detectors
that are spatially distant from the input (a conv
layer, for example). This curation comes in various
attentional forms (something that will appear in a new
paper--not enough room here), and it appears to
qualitatively change the network's processing states
and cognitive capabilities. Well, that's the claim. <br class="">
</font></p>
<p class=""><font class="" size="+1">The larger point
is that architectures apparently interact with
learning rules in ways that can cross this
symbolic/neural River Styx without falling into
it.<br class="">
</font></p>
<p class=""><font class="" size="+1">Steve<br class="">
</font></p>
<p class=""><font class="" size="+1"><br class="">
</font></p>
<div class="moz-cite-prefix">On 2/5/22 10:38 AM, Gary
Marcus wrote:<br class="">
</div>
<blockquote type="cite"
cite="mid:537DF004-25CE-45A2-8155-D7E6018F4EE5@nyu.edu"
class="">
<meta http-equiv="content-type" content="text/html;
charset=UTF-8" class="">
<div dir="ltr" class="">There is no magic in
understanding, just computation that has been
realized in the wetware of humans and that
eventually can be realized in machines. But
understanding is not (just) learning.</div>
<div dir="ltr" class=""><br class="">
</div>
<div dir="ltr" class="">
<div style="margin: 0px; font-stretch: normal;
font-size: 17.5px; line-height: normal;
caret-color: rgb(0, 0, 0);" class=""><span
class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;">Understanding </span><span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class="">incorporates (or
works in tandem with) learning - but also,
critically, in tandem with inference, <i
class="">and the development and maintenance
of cognitive models</i>.</span><span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class=""> </span> <span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class="">Part of developing
an understanding of cats in general is to learn
long-term knowledge about their properties, both
directly (e.g., through observation) and
indirectly (e.g., through learning facts</span><span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class=""> </span> <span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class="">about animals in
general that can be extended to cats), often
through inference (if all animals have DNA, and
a cat is an animal, it must also have DNA).</span><span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class=""> </span><span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class=""> </span><span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class="">The understanding
of a particular cat also involves direct
observation, but also inference (eg</span><span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class=""> </span> <span
style="font-family: UICTFontTextStyleBody;
font-size: 17.46px;" class="">one might surmise
that the reason that Fluffy is running about the
room is that Fluffy suspects there is a mouse
stirring somewhere nearby). </span><span
class="s3" style="font-family:
UICTFontTextStyleEmphasizedBody; font-weight:
bold; font-size: 17.46px;">But all of that, I
would say, is subservient to the construction of
cognitive models that can be routinely updated </span><span
class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;">(e.g.,
Fluffy is currently in the living room,
skittering about, perhaps looking for a mouse).</span></div>
<div style="margin: 0px; font-stretch: normal;
font-size: 17.5px; line-height: normal;
caret-color: rgb(0, 0, 0);" class=""><span
class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;"><br
class="">
</span></div>
<div style="margin: 0px; font-stretch: normal;
font-size: 17.5px; line-height: normal;
caret-color: rgb(0, 0, 0);" class=""><span
class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;"> In
humans, those dynamic, relational models, which
form part of an understanding, can support
inference (if Fluffy is in the living room, we
can infer that Fluffy is not outside, not lost,
etc). Without such models - which I think
represent a core part of understanding - AGI is
an unlikely prospect.</span></div>
<div style="margin: 0px; font-stretch: normal;
font-size: 17.5px; line-height: normal;
min-height: 22.9px; caret-color: rgb(0, 0, 0);"
class=""><span class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;"><br
class="">
</span></div>
<div style="margin: 0px; font-stretch: normal;
font-size: 17.5px; line-height: normal;
caret-color: rgb(0, 0, 0);" class=""><span
class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;">Current
neural networks, as it happens, are better at
acquiring long-term knowledge (cats have
whiskers) than they are at dynamically updating
cognitive models in real-time. LLMs like GPT-3
etc lack the kind of dynamic model that I am
describing. To a modest degree they can
approximate it on the basis of large samples of
texts, but their ultimate incoherence stems from
the fact that they do not have robust internal
cognitive models that they can update on the
fly. </span></div>
<div style="margin: 0px; font-stretch: normal;
font-size: 17.5px; line-height: normal;
min-height: 22.9px; caret-color: rgb(0, 0, 0);"
class=""><span class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;"></span><br
class="">
</div>
<div style="margin: 0px; font-stretch: normal;
font-size: 17.5px; line-height: normal;
caret-color: rgb(0, 0, 0);" class=""><span
class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;">Without
such cognitive models you can still capture some
aspects of understanding (eg predicting that
cats are likely to be furry), but things fall
apart quickly; inference is never reliable, and
coherence is fleeting.</span></div>
<div style="margin: 0px; font-stretch: normal;
font-size: 17.5px; line-height: normal;
min-height: 22.9px; caret-color: rgb(0, 0, 0);"
class=""><span class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;"></span><br
class="">
</div>
<div style="margin: 0px; font-stretch: normal;
font-size: 17.5px; line-height: normal;
caret-color: rgb(0, 0, 0);" class=""><span
class="s1" style="font-family:
UICTFontTextStyleBody; font-size: 17.46px;">As a
final note, one of the most foundational
challenges in constructing adequate cognitive
models of the world is to have a clear
distinction between individuals and kinds; as I
emphasized 20 years ago (in The Algebraic Mind),
this has always been a weakness in neural
networks, and I don’t think that the type-token
problem has yet been solved. </span></div>
</div>
<div dir="ltr" class=""><br class="">
</div>
<div dir="ltr" class="">Gary</div>
<div dir="ltr" class=""><br class="">
</div>
<div dir="ltr" class=""><br class="">
<blockquote type="cite" class="">On Feb 5, 2022, at
01:31, Asim Roy <a class="moz-txt-link-rfc2396E"
href="mailto:ASIM.ROY@asu.edu"
moz-do-not-send="true"><ASIM.ROY@asu.edu></a>
wrote:<br class="">
<br class="">
</blockquote>
</div>
<blockquote type="cite" class="">
<div dir="ltr" class="">
<meta http-equiv="Content-Type"
content="text/html; charset=UTF-8" class="">
<meta name="Generator" content="Microsoft Word 15
(filtered medium)" class="">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]-->
<style class="">@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}span.EmailStyle18
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri",sans-serif;}div.WordSection1
{page:WordSection1;}</style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
<div class="WordSection1">
<p class="MsoNormal">All,<o:p class=""></o:p></p>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<p class="MsoNormal">I think the broader
question was “understanding.” Here are two
Youtube videos showing simple robots
“learning” to walk. They are purely physical
systems. Do they “understand” anything – such
as the need to go around an obstacle, jumping
over an obstacle, walking up and down stairs
and so on? By the way, they “learn” to do
these things on their own, literally
unsupervised, very much like babies. The basic
question is: what is “understanding” if not
“learning?” Is there some other mechanism
(magic) at play in our brain that helps us
“understand?” <o:p class=""></o:p></p>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<p class="MsoNormal"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.youtube.com_watch-3Fv-3Dgn4nRCC9TwQ&d=DwMGaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=Knv_0zpl6J7FTpxevgUOS8qJpyvPOjpOXdLYhyOr6PnKQiWgHaftEAfPvwWb_IAB&s=zdQA6enDajD46kwz-nti6FBklz-72dzlA9NLEzRW1TY&e="
moz-do-not-send="true" class="">https://www.youtube.com/watch?v=gn4nRCC9TwQ</a><o:p
class=""></o:p></p>
<p class="MsoNormal"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.youtube.com_watch-3Fv-3D8sO7VS3q8d0&d=DwMGaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=Knv_0zpl6J7FTpxevgUOS8qJpyvPOjpOXdLYhyOr6PnKQiWgHaftEAfPvwWb_IAB&s=PRhn1hhcfzNtbKXIZpOAM4lyyMp39202wE7Uu4MWg5M&e="
moz-do-not-send="true" class="">https://www.youtube.com/watch?v=8sO7VS3q8d0</a><o:p
class=""></o:p></p>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<p class="MsoNormal">Asim Roy<o:p class=""></o:p></p>
<p class="MsoNormal">Professor, Information
Systems<o:p class=""></o:p></p>
<p class="MsoNormal">Arizona State University<o:p
class=""></o:p></p>
<p class="MsoNormal"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e="
target="_blank" moz-do-not-send="true"
class="">Lifeboat Foundation Bios: Professor
Asim Roy</a><o:p class=""></o:p></p>
<p class="MsoNormal"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e="
target="_blank" moz-do-not-send="true"
class="">Asim Roy | iSearch (asu.edu)</a><o:p
class=""></o:p></p>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<div style="border:none;border-top:solid #E1E1E1
1.0pt;padding:3.0pt 0in 0in 0in" class="">
<p class="MsoNormal"><b class="">From:</b> Ali
Minai <a class="moz-txt-link-rfc2396E"
href="mailto:minaiaa@gmail.com"
moz-do-not-send="true"><minaiaa@gmail.com></a>
<br class="">
<b class="">Sent:</b> Friday, February 4,
2022 11:38 PM<br class="">
<b class="">To:</b> Asim Roy <a
class="moz-txt-link-rfc2396E"
href="mailto:ASIM.ROY@asu.edu"
moz-do-not-send="true"><ASIM.ROY@asu.edu></a><br
class="">
<b class="">Cc:</b> Gary Marcus <a
class="moz-txt-link-rfc2396E"
href="mailto:gary.marcus@nyu.edu"
moz-do-not-send="true"><gary.marcus@nyu.edu></a>;
Danko Nikolic <a
class="moz-txt-link-rfc2396E"
href="mailto:danko.nikolic@gmail.com"
moz-do-not-send="true"><danko.nikolic@gmail.com></a>;
Brad Wyble <a class="moz-txt-link-rfc2396E"
href="mailto:bwyble@gmail.com"
moz-do-not-send="true"><bwyble@gmail.com></a>;
<a class="moz-txt-link-abbreviated"
href="mailto:connectionists@mailman.srv.cs.cmu.edu"
moz-do-not-send="true">connectionists@mailman.srv.cs.cmu.edu</a>;
AIhub <a class="moz-txt-link-rfc2396E"
href="mailto:aihuborg@gmail.com"
moz-do-not-send="true"><aihuborg@gmail.com></a><br
class="">
<b class="">Subject:</b> Re: Connectionists:
Stephen Hanson in conversation with Geoff
Hinton<o:p class=""></o:p></p>
</div>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<div class="">
<div class="">
<p class="MsoNormal">Asim<o:p class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"><o:p class=""> </o:p></p>
</div>
<div class="">
<p class="MsoNormal">Of course there's
nothing magical about understanding, and
the mind has to emerge from the physical
system, but our AI models at this point
are not even close to realizing how that
happens. We are, at best, simulating a
superficial approximation of a few parts
of the real thing. A single, integrated
system where all the aspects of
intelligence emerge from the same deep,
well-differentiated physical substrate is
far beyond our capacity. Paying more
attention to neurobiology will be
essential to get there, but so will paying
attention to development - both physical
and cognitive - and evolution. The
configuration of priors by evolution is
key to understanding how real intelligence
learns so quickly and from so little. This
is not an argument for using genetic
algorithms to design our systems, just for
understanding the tricks evolution has
used and replicating them by design.
Development is more feasible to do
computationally, but hardly any models
have looked at it except in a superficial
sense. Nature creates basic intelligence
not so much by configuring functions by
explicit training as by tweaking,
modulating, ramifying, and combining
existing ones in a multi-scale
self-organization process. We then learn
much more complicated things (like playing
chess) by exploiting that substrate, and
using explicit instruction or learning by
practice. The fundamental lesson of
complex systems is that complexity is
built in stages - each level exploiting
the organization of the level below it. We
see it in evolution, development, societal
evolution, the evolution of technology,
etc. Our approach in AI, in contrast, is
to initialize a giant, naive system and
train it to do something really
complicated - but really specific - by
training the hell out of it. Sure, now we
do build many systems on top of
pre-trained models like GPT-3 and BERT,
which is better, but those models were
again trained by the same none-to-all
process I decried above. Contrast that
with how humans acquire language, and how
they integrate it into their *entire*
perceptual, cognitive, and behavioral
repertoire, not focusing just on this or
that task. The age of symbolic AI may have
passed, but the reductionistic mindset has
not. We cannot build minds by chopping them
into separate verticals.<o:p class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"><o:p class=""> </o:p></p>
</div>
<div class="">
<p class="MsoNormal">FTR, I'd say that the
emergence of models such as GLOM and
Hawkins and Ahmed's "thousand brains" is a
hopeful sign. They may not be "right", but
they are, I think, looking in the right
direction. With a million miles to go!<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"><o:p class=""> </o:p></p>
</div>
<div class="">
<p class="MsoNormal">Ali<o:p class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"><o:p class=""> </o:p></p>
</div>
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<div class="">
<p
class="MsoNormal"><b
class="">Ali
A. Minai,
Ph.D.</b><br
class="">
Professor and
Graduate
Program
Director<br
class="">
Complex
Adaptive
Systems Lab<br
class="">
Department of
Electrical
Engineering
& Computer
Science<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal">828
Rhodes Hall<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal">University
of Cincinnati<br
class="">
Cincinnati, OH
45221-0030<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"><br
class="">
Phone: (513)
556-4783<br
class="">
Fax: (513)
556-7326<br
class="">
Email: <a
href="mailto:Ali.Minai@uc.edu"
target="_blank" moz-do-not-send="true" class="">Ali.Minai@uc.edu</a><br
class="">
<a
href="mailto:minaiaa@gmail.com" target="_blank" moz-do-not-send="true"
class="">minaiaa@gmail.com</a><br
class="">
<br class="">
WWW: <a
href="https://urldefense.com/v3/__http:/www.ece.uc.edu/*7Eaminai/__;JQ!!IKRxdwAv5BmarQ!Jd2XhTzWg6HDp9IPjlyNv847sUdhGDNfsnqZQ0gy1_mu-CfyUdpBMswhfqdbZTo$"
target="_blank" moz-do-not-send="true" class="">
https://eecs.ceas.uc.edu/~aminai/</a><o:p class=""></o:p></p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<p class="MsoNormal"><o:p class=""> </o:p></p>
</div>
</div>
<p class="MsoNormal"><o:p class=""> </o:p></p>
<div class="">
<div class="">
<p class="MsoNormal">On Fri, Feb 4, 2022 at
2:42 AM Asim Roy <<a
href="mailto:ASIM.ROY@asu.edu"
moz-do-not-send="true" class="">ASIM.ROY@asu.edu</a>>
wrote:<o:p class=""></o:p></p>
</div>
<blockquote
style="border:none;border-left:solid #CCCCCC
1.0pt;padding:0in 0in 0in
6.0pt;margin-left:4.8pt;margin-right:0in"
class="">
<div class="">
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">First
of all, the brain is a physical
system. There is no “magic” inside the
brain that does the “understanding”
part. Take for example learning to
play tennis. You hit a few balls -
some the right way and some wrong –
but you fairly quickly learn to hit
them right most of the time. So there
is obviously some simulation going on
in the brain about hitting the ball in
different ways and “learning” its
consequences. What you are calling
“understanding” is really these
simulations about different scenarios.
It’s also very similar to augmentation
used to train image recognition
systems where you rotate images,
obscure parts and so on, so that you
still can say it’s a cat even though
you see only the cat’s face or
whiskers or a cat flipped on its back.
So, if the following questions relate
to “understanding,” you can easily
resolve this by simulating such
scenarios when “teaching” the system.
There’s nothing “magical” about
“understanding.” As I said, bear in
mind that the brain, after all, is a
physical system and “teaching” and
“understanding” is embodied in that
physical system, not outside it. So
“understanding” is just part of
“learning,” nothing more.<o:p class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="background-color: yellow;"
class="">DANKO:</span><o:p class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="background-color: yellow;"
class="">What would happen to the
hat if the hamster rolls on its
back? (Would the hat fall off?)</span><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="background-color: yellow;"
class="">What would happen to the
red hat when the hamster enters its
lair? (Would the hat fall off?)</span><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="background-color: yellow;"
class="">What would happen to that
hamster when it goes foraging?
(Would the red hat have an influence
on finding food?)</span><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="background-color: yellow;"
class="">What would happen in a
situation of being chased by a
predator? (Would it be easier for
predators to spot the hamster?)</span><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Asim
Roy<o:p class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Professor,
Information Systems<o:p class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Arizona
State University<o:p class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e="
target="_blank"
moz-do-not-send="true" class="">Lifeboat
Foundation Bios: Professor Asim Roy</a><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e="
target="_blank"
moz-do-not-send="true" class="">Asim
Roy | iSearch (asu.edu)</a><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<div class="">
<div
style="border:none;border-top:solid
windowtext 1.0pt;padding:3.0pt 0in
0in 0in;border-color:currentcolor
currentcolor" class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><b
class="">From:</b> Gary Marcus
<<a
href="mailto:gary.marcus@nyu.edu"
target="_blank"
moz-do-not-send="true" class="">gary.marcus@nyu.edu</a>>
<br class="">
<b class="">Sent:</b> Thursday,
February 3, 2022 9:26 AM<br
class="">
<b class="">To:</b> Danko Nikolic
<<a
href="mailto:danko.nikolic@gmail.com"
target="_blank"
moz-do-not-send="true" class="">danko.nikolic@gmail.com</a>><br
class="">
<b class="">Cc:</b> Asim Roy <<a
href="mailto:ASIM.ROY@asu.edu"
target="_blank"
moz-do-not-send="true" class="">ASIM.ROY@asu.edu</a>>;
Geoffrey Hinton <<a
href="mailto:geoffrey.hinton@gmail.com"
target="_blank"
moz-do-not-send="true" class="">geoffrey.hinton@gmail.com</a>>;
AIhub <<a
href="mailto:aihuborg@gmail.com"
target="_blank"
moz-do-not-send="true" class="">aihuborg@gmail.com</a>>;
<a
href="mailto:connectionists@mailman.srv.cs.cmu.edu"
target="_blank"
moz-do-not-send="true" class="">connectionists@mailman.srv.cs.cmu.edu</a><br
class="">
<b class="">Subject:</b> Re:
Connectionists: Stephen Hanson in
conversation with Geoff Hinton<o:p
class=""></o:p></p>
</div>
</div>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Dear
Danko,<o:p class=""></o:p></p>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Well
said. I had a somewhat similar
response to Jeff Dean’s 2021 TED
talk, in which he said (paraphrasing
from memory, because I don’t
remember the precise words) that the
famous 2012 Quoc Le unsupervised
model [<a
href="https://urldefense.com/v3/__https:/static.googleusercontent.com/media/research.google.com/en/*archive/unsupervised_icml2012.pdf__;Lw!!IKRxdwAv5BmarQ!PFl2URDWVshfy1BPSwAMXKYyn1wszxpN4EPzShAm3sX83AOt05MQX07oVyVLEqo$"
target="_blank"
moz-do-not-send="true" class="">https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf</a>]
had learned the concept of a cat. In
reality the model had clustered
together some catlike images based
on the image statistics that it had
extracted, but it was a long way
from a full,
counterfactual-supporting concept of
a cat, much as you describe below. <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">I
fully agree with you that the reason
for even having a semantics is as
you put it, “to 1) learn with a few
examples and 2) apply the knowledge
to a broad set of situations.” GPT-3
sometimes gives the appearance of
having done so, but it falls apart
under close inspection, so the
problem remains unsolved.<o:p
class=""></o:p></p>
</div>
<div class="">
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Gary<o:p
class=""></o:p></p>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;margin-bottom:12.0pt"><o:p
class=""> </o:p></p>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt"
class="">
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">On
Feb 3, 2022, at 3:19 AM,
Danko Nikolic <<a
href="mailto:danko.nikolic@gmail.com"
target="_blank"
moz-do-not-send="true"
class="">danko.nikolic@gmail.com</a>>
wrote:<o:p class=""></o:p></p>
</div>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<div class="">
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">G.
Hinton wrote: "I believe
that any reasonable person
would admit that if you
ask a neural net to draw a
picture of a hamster
wearing a red hat and it
draws such a picture, it
understood the request."<o:p
class=""></o:p></p>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">I
would like to suggest
why drawing a hamster
with a red hat does not
necessarily imply
understanding of the
statement "hamster
wearing a red hat".<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">To
understand "hamster
wearing a red hat" would
mean inferring, in
newly emerging
situations of this
hamster, all the
real-life
implications that the
red hat brings to the
little animal.<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">What
would happen to the
hat if the hamster
rolls on its back?
(Would the hat fall
off?)<o:p class=""></o:p></p>
</div>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">What
would happen to the red
hat when the hamster
enters its lair? (Would
the hat fall off?)<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">What
would happen to that
hamster when it goes
foraging? (Would the red
hat have an influence on
finding food?)<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">What
would happen in a
situation of being
chased by a predator?
(Would it be easier for
predators to spot the
hamster?)<o:p class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">...and
so on.<o:p class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Countless
questions can be
asked. One has
understood "hamster
wearing a red hat" only
if one can answer
many such real-life
questions reasonably
well. Similarly, a
student has understood
materials in a class only
if they can apply the
materials in real-life
situations (e.g.,
applying Pythagoras's
theorem). If a student
gives a correct answer
to a multiple-choice
question, we don't know
whether the student
understood the material
or whether this was just
rote learning (often, it
is rote learning). <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">I
also suggest that
understanding comes
together with effective
learning: We store new
information in such a
way that we can recall
it later and use it
effectively i.e., make
good inferences in newly
emerging situations
based on this knowledge.<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">In
short: Understanding
makes us humans able to
1) learn with a few
examples and 2) apply
the knowledge to a broad
set of situations. <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">No
neural network today has
such capabilities, and we
don't know how to give
them such capabilities.
Neural networks need
large numbers of
training examples that
cover a wide variety of
situations, and even then
the networks can only
deal with what the
training examples have
already covered. Neural
networks cannot
extrapolate in that
'understanding' sense.<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">I
suggest that
understanding truly
extrapolates from a
piece of knowledge. It
is not about satisfying
a task such as
translation between
languages or drawing
hamsters with hats. It
is about how you acquired
the capability to
complete the task: Did
you have only a few
examples covering
something different but
related, and then
extrapolate from that
knowledge? If yes, this
is going in the
direction of
understanding. Have you
seen countless examples
and then interpolated
among them? Then perhaps
it is not understanding.<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">So,
for the case of drawing
a hamster wearing a red
hat, understanding
perhaps would have taken
place if the following
happened before that:<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">1)
first, the network
learned about hamsters
(not many examples)<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">2)
after that the network
learned about red hats
(outside the context of
hamsters and without
many examples) <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">3)
finally the network
learned about drawing
(outside of the context
of hats and hamsters,
not many examples)<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">After
that, the network is
asked to draw a hamster
with a red hat. If it
does it successfully,
maybe we have started
cracking the problem of
understanding.<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Note
also that this requires
the network to learn
sequentially without
exhibiting catastrophic
forgetting of the
previous knowledge,
which is possibly also a
consequence of human
learning by
understanding.<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Danko<o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<div class="">
<div class="">
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Dr. Danko
Nikolić<br
class="">
<a
href="https://urldefense.proofpoint.com/v2/url?u=http-3A__www.danko-2Dnikolic.com&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=HwOLDw6UCRzU5-FPSceKjtpNm7C6sZQU5kuGAMVbPaI&e="
target="_blank"
moz-do-not-send="true" class="">www.danko-nikolic.com</a><br class="">
<a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.linkedin.com_in_danko-2Dnikolic_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=b70c8lokmxM3Kz66OfMIM4pROgAhTJOAlp205vOmCQ8&e="
target="_blank"
moz-do-not-send="true" class="">https://www.linkedin.com/in/danko-nikolic/</a><o:p
class=""></o:p></p>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">---
Progress usually
starts with an
insight ---<o:p
class=""></o:p></p>
</div>
</div>
</div>
</div>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
</div>
<div
id="gmail-m_3776411903040420401DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2"
class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<div class="">
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">On
Thu, Feb 3, 2022 at 9:55
AM Asim Roy <<a
href="mailto:ASIM.ROY@asu.edu"
target="_blank"
moz-do-not-send="true"
class="">ASIM.ROY@asu.edu</a>>
wrote:<o:p class=""></o:p></p>
</div>
<blockquote
style="border:none;border-left:solid
windowtext
1.0pt;padding:0in 0in 0in
6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0in;margin-bottom:5.0pt;border-color:currentcolor
currentcolor currentcolor
rgb(204,204,204)" class="">
<div class="">
<div class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Without
getting into the
specific dispute
between Gary and
Geoff, I think with
approaches similar
to GLOM, we are
finally headed in
the right direction.
There’s plenty of
neurophysiological
evidence for
single-cell
abstractions and
multisensory neurons
in the brain, which
one might claim
correspond to
symbols. And I think
we can finally
reconcile the
decades-old dispute
between Symbolic AI
and Connectionism.<o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="background-color: yellow;" class="">GARY: (Your GLOM, which as
you know I praised
publicly, is in
many ways an
effort to wind up
with encodings
that effectively
serve as symbols
in exactly that
way, guaranteed to
serve as
consistent
representations of
specific
concepts.)</span><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="background-color: yellow;" class="">GARY: I have <i class="">never</i>
called for
dismissal of
neural networks,
but rather for
some hybrid
between the two
(as you yourself
contemplated in
1991); the point
of the 2001 book
was to
characterize
exactly where
multilayer
perceptrons
succeeded and
broke down, and
where symbols
could complement
them.</span><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Asim
Roy<o:p class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Professor,
Information Systems<o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Arizona
State University<o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e="
target="_blank"
moz-do-not-send="true"
class="">Lifeboat
Foundation Bios:
Professor Asim Roy</a><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e="
target="_blank"
moz-do-not-send="true"
class="">Asim Roy
| iSearch
(asu.edu)</a><o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<div class="">
<div
style="border:none;border-top:solid
windowtext
1.0pt;padding:3.0pt
0in 0in
0in;border-color:currentcolor
currentcolor"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><b class="">From:</b>
Connectionists
<<a
href="mailto:connectionists-bounces@mailman.srv.cs.cmu.edu"
target="_blank" moz-do-not-send="true" class="">connectionists-bounces@mailman.srv.cs.cmu.edu</a>>
<b class="">On
Behalf Of </b>Gary
Marcus<br
class="">
<b class="">Sent:</b>
Wednesday,
February 2, 2022
1:26 PM<br
class="">
<b class="">To:</b>
Geoffrey Hinton
<<a
href="mailto:geoffrey.hinton@gmail.com"
target="_blank" moz-do-not-send="true" class="">geoffrey.hinton@gmail.com</a>><br
class="">
<b class="">Cc:</b>
AIhub <<a
href="mailto:aihuborg@gmail.com"
target="_blank" moz-do-not-send="true" class="">aihuborg@gmail.com</a>>;
<a
href="mailto:connectionists@mailman.srv.cs.cmu.edu"
target="_blank" moz-do-not-send="true" class="">connectionists@mailman.srv.cs.cmu.edu</a><br
class="">
<b class="">Subject:</b>
Re:
Connectionists:
Stephen Hanson
in conversation
with Geoff
Hinton<o:p
class=""></o:p></p>
</div>
</div>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<div class="">
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Dear Geoff,
and interested
others,<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">What, for
example, would
you make of a
system that
often drew the
red-hatted
hamster you
requested, and
perhaps a fifth
of the time gave
you utter
nonsense? Or
say one that you
trained to
create birds but
sometimes output
stuff like this:<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">[image001.png]<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">One could <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">a. avert
one’s eyes and
deem the
anomalous
outputs
irrelevant<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">or<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">b. wonder if
it might be
possible that
sometimes the
system gets the
right answer for
the wrong
reasons (eg
partial
historical
contingency),
and wonder
whether another
approach might
be indicated.<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Benchmarks
are harder than
they look; most
of the field has
come to
recognize that.
The Turing Test
has turned out
to be a lousy
measure of
intelligence,
easily gamed. It
has turned out
empirically that
the Winograd
Schema Challenge
did not measure
common sense as
well as Hector
might have
thought. (As it
happens, I am a
minor coauthor
of a very recent
review on this
very topic: <a
href="https://urldefense.com/v3/__https:/arxiv.org/abs/2201.02387__;!!IKRxdwAv5BmarQ!INA0AMmG3iD1B8MDtLfjWCwcBjxO-e-eM2Ci9KEO_XYOiIEgiywK-G_8j6L3bHA$"
target="_blank" moz-do-not-send="true" class="">https://arxiv.org/abs/2201.02387</a>)
But its conquest
in no way means
machines now
have common
sense; many
people from many
different
perspectives
recognize that
(including,
e.g., Yann
LeCun, who
generally tends
to be more
aligned with you
than with me).<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">So: on the
goalpost of the
Winograd schema,
I was wrong, and
you can quote
me; but what you
said about me
and machine
translation
remains your
invention, and
it is
inexcusable that
you simply
ignored my 2019
clarification.
On the essential
goal of trying
to reach meaning
and
understanding, I
remain unmoved;
the problem
remains
unsolved. <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">All of the
problems LLMs
have with
coherence,
reliability,
truthfulness,
misinformation,
etc stand
witness to that
fact. (Their
persistent
inability to
filter out toxic
and insulting
remarks stems
from the same.)
I am hardly the
only person in
the field to see
that progress on
any given
benchmark does
not inherently
mean that the
deep underlying
problems have been
solved. You,
yourself, in
fact, have
occasionally
made that
point. <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">With respect
to embeddings:
Embeddings are
very good for
natural language
<i class="">processing</i>;
but NLP is not
the same as NL<i
class="">U</i>
– when it comes
to <i class="">understanding</i>,
their worth is
still an open
question.
Perhaps they
will turn out to
be necessary;
they clearly
aren’t
sufficient. In
their extreme,
they might even
collapse into
being symbols,
in the sense of
uniquely
identifiable
encodings, akin
to the ASCII
code, in which a
specific set of
numbers stands
for a specific
word or concept.
(Wouldn’t that
be ironic?)<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">(Your GLOM,
which as you
know I praised
publicly, is in
many ways an
effort to wind
up with
encodings that
effectively
serve as symbols
in exactly that
way, guaranteed
to serve as
consistent
representations
of specific
concepts.)<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Notably
absent from your
email is any
kind of apology
for
misrepresenting
my position.
It’s fine to say
that “many
people thirty
years ago once
thought X” and
another to say
“Gary Marcus
said X in 2015”,
when I didn’t. I
have
consistently
felt throughout
our interactions
that you have
mistaken me for
Zenon Pylyshyn;
indeed, you once
(at NeurIPS
2014) apologized
to me for having
made that error.
I am still not
he. <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Which maybe
connects to the
last point; if
you read my
work, you would
see thirty years
of arguments <i
class="">for</i> neural
networks, just
not in the way
that you want
them to exist. I
have ALWAYS
argued that
there is a role
for them;
characterizing
me as a person
“strongly opposed
to neural
networks” misses
the whole point
of my 2001 book,
which was
subtitled
“Integrating
Connectionism
and Cognitive
Science.”<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">In the last
two decades or
so you have
insisted (for
reasons you have
never fully
clarified, so
far as I know)
on abandoning
symbol-manipulation,
but the reverse
is not the case:
I have <i
class="">never</i>
called for
dismissal of
neural networks,
but rather for
some hybrid
between the two
(as you yourself
contemplated in
1991); the point
of the 2001 book
was to
characterize
exactly where
multilayer
perceptrons
succeeded and
broke down, and
where symbols
could complement
them. It’s a
rhetorical trick
(which is what
the previous
thread was
about) to
pretend
otherwise.<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Gary<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;margin-bottom:12.0pt">On Feb 2, 2022, at
11:22,
Geoffrey
Hinton <<a
href="mailto:geoffrey.hinton@gmail.com" target="_blank"
moz-do-not-send="true"
class="">geoffrey.hinton@gmail.com</a>>
wrote:<o:p
class=""></o:p></p>
</blockquote>
</div>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt"
class="">
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><o:p
class=""></o:p></p>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Embeddings
are just
vectors of
soft feature
detectors and
they are very
good for NLP.
The quote on
my webpage
from Gary's
2015 chapter
implies the
opposite.<o:p
class=""></o:p></p>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">A few decades
ago, everyone
I knew then
would have
agreed that
the ability to
translate a
sentence into
many different
languages was
strong
evidence that
you understood
it.<o:p
class=""></o:p></p>
</div>
</div>
</div>
</blockquote>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;margin-bottom:12.0pt"> <o:p class=""></o:p></p>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt"
class="">
<div class="">
<div class="">
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">But once
neural
networks could
do that, their
critics moved
the goalposts.
An exception
is Hector
Levesque who
defined the
goalposts more
sharply by
saying that
the ability to
get pronoun
references
correct in
Winograd
sentences is a
crucial test.
Neural nets
are improving
at that but
still have
some way to
go. Will Gary
agree that
when they can
get pronoun
references correct
in Winograd
sentences they
really do
understand? Or
does he want
to reserve the
right to
weasel out of
that too?<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Some people,
like Gary,
appear to be
strongly opposed
to neural
networks
because they
do not fit
their
preconceived
notions of how
the mind
should work.<o:p
class=""></o:p></p>
</div>
<div class="">
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">I believe
that any
reasonable
person would
admit that if
you ask a
neural net to
draw a picture
of a hamster
wearing a red
hat and it
draws such a
picture, it
understood the
request.<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Geoff<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
</div>
</div>
</div>
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
<div class="">
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">On Wed, Feb
2, 2022 at
1:38 PM Gary
Marcus <<a
href="mailto:gary.marcus@nyu.edu" target="_blank" moz-do-not-send="true"
class="">gary.marcus@nyu.edu</a>>
wrote:<o:p
class=""></o:p></p>
</div>
<blockquote
style="border:none;border-left:solid
windowtext
1.0pt;padding:0in
0in 0in
6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0in;margin-bottom:5.0pt;border-color:currentcolor
currentcolor
currentcolor
rgb(204,204,204)"
class="">
<div class="">
<div class="">
<div class="">
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Dear
AI Hub, cc:
Steven Hanson
and Geoffrey
Hinton, and
the larger
neural network
community,</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">There
has been a lot
of recent
discussion on
this list
about framing
and scientific
integrity.
Often the
first step in
restructuring
narratives is
to bully and
dehumanize
critics. The
second is to
misrepresent
their
position.
People in
positions of
power are
sometimes
tempted to do
this.</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">The
Hinton-Hanson
interview that
you just
published is a
real-time
example of
just that. It
opens with a
needless and
largely
content-free
personal
attack on a
single scholar
(me), with the
explicit
intention of
discrediting
that person.
Worse, the
only
substantive
thing it says
is false.</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Hinton
says “In 2015
he [Marcus]
made a
prediction
that computers
wouldn’t be
able to do
machine
translation.”</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">I
never said any
such thing. </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">What
I predicted,
rather, was
that
multilayer
perceptrons,
as they
existed then,
would not (on
their own,
absent other
mechanisms) <i
class="">understand</i> language.
Seven years
later, they
still haven’t,
except in the
most
superficial
way. </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">I
made no
comment
whatsoever
about machine
translation,
which I view
as a separate
problem,
solvable to a
certain degree
by
correspondence
without
semantics. </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">I
specifically
tried to
clarify
Hinton’s
confusion in
2019, but,
disappointingly,
he has
continued to
purvey
misinformation
despite that
clarification.
Here is what I
wrote
privately to
him then,
which should
have put the
matter to
rest:</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div
style="margin-left:27.0pt;font-stretch:normal"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">You
have taken a
single
out-of-context quote
[from 2015]
and
misrepresented
it. The quote,
which you have
prominently
displayed at
the bottom of
your own web
page, says:</span><o:p
class=""></o:p></p>
</div>
<div
style="margin-left:27.0pt;font-stretch:normal;min-height:22.9px"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div
style="margin-left:.75in;font-stretch:normal"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Hierarchies
of features
are less
suited to
challenges
such as
language,
inference, and
high-level
planning. For
example, as
Noam Chomsky
famously
pointed out,
language is
filled with
sentences you
haven't seen
before. Pure
classifier
systems don't
know what to
do with such
sentences. The
talent of
feature
detectors --
in identifying
which member
of some
category
something
belongs to --
doesn't
translate into
understanding
novel sentences, in which each sentence has its own unique meaning. </span><o:p
class=""></o:p></p>
</div>
<div
style="margin-left:27.0pt;font-stretch:normal;min-height:22.9px"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div
style="margin-left:27.0pt;font-stretch:normal"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">It
does <i
class="">not</i> say
"neural nets
would not be
able to deal
with novel
sentences"; it
says that
hierarchies of
feature
detectors (on
their own, if
you read the
context of the
essay) would
have trouble <i
class="">understanding </i>novel sentences.
</span><o:p
class=""></o:p></p>
</div>
<div
style="margin-left:27.0pt;font-stretch:normal;min-height:22.9px"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div
style="margin-left:27.0pt;font-stretch:normal"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Google
Translate does
not yet <i
class="">understand</i> the
content of the
sentences it
translates. It
cannot
reliably
answer
questions
about who did
what to whom,
or why, it
cannot infer
the order of
the events in
paragraphs, it
can't
determine the
internal
consistency of
those events,
and so forth.</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Since
then, a number
of scholars,
such as the
computational
linguist Emily
Bender, have
made similar
points, and
indeed current
LLM
difficulties
with
misinformation,
incoherence
and
fabrication
all follow
from these
concerns.
Quoting from
Bender’s
prizewinning
2020 ACL
article on the
matter with
Alexander
Koller, <a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aclanthology.org_2020.acl-2Dmain.463.pdf&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=xnFSVUARkfmiXtiTP_uXfFKv4uNEGgEeTluRFR7dnUpay2BM5EiLz-XYCkBNJLlL&s=K-Vl6vSvzuYtRMi-s4j7mzPkNRTb-I6Zmf7rbuKEBpk&e="
target="_blank" moz-do-not-send="true" class="">https://aclanthology.org/2020.acl-main.463.pdf</a>,
also
emphasizing
issues of
understanding
and meaning:</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div
style="margin-left:27.0pt;font-stretch:normal"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><i class=""><span
style="font-size:13.0pt;font-family:"Times New Roman",serif"
class="">The
success of the
large neural
language
models on many
NLP tasks is
exciting.
However, we
find that
these
successes
sometimes lead
to hype in
which these
models are
being
described as
“understanding”
language or
capturing
“meaning”. In
this position
paper, we
argue that a
system trained
only on form
has a priori
no way to
learn meaning.
… a clear
understanding
of the
distinction
between form
and meaning
will help
guide the
field towards
better science
around natural
language
understanding. </span></i><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Her
later article
with Gebru on
language
models as
“stochastic
parrots” is in
some ways an
extension of
this point;
machine
translation
requires
mimicry; true
understanding
(which is what
I was
discussing in
2015) requires
something
deeper than
that. </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Hinton’s
intellectual
error here is
in equating
machine
translation
with the
deeper
comprehension
that robust
natural
language
understanding
will require;
as Bender and
Koller
observed, the
two appear not
to be the
same. (There
is a longer
discussion of
the relation
between
language
understanding
and machine
translation,
and why the
latter has
turned out to
be more
approachable
than the
former, in my
2019 book with
Ernest Davis).</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">More
broadly,
Hinton’s
ongoing
dismissiveness
of research
from
perspectives
other than his
own (e.g.
linguistics)
has done the
field a
disservice. </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">As
Herb Simon
once observed,
science does
not have to be
zero-sum.</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt"
class=""> </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Sincerely,</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Gary
Marcus</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">Professor
Emeritus</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-size:13.0pt;font-family:"Times
New
Roman",serif"
class="">New
York
University</span><o:p
class=""></o:p></p>
</div>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;margin-bottom:12.0pt"> <o:p class=""></o:p></p>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt"
class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;margin-bottom:12.0pt">On Feb 2, 2022, at
06:12, AIhub
<<a
href="mailto:aihuborg@gmail.com"
target="_blank" moz-do-not-send="true" class="">aihuborg@gmail.com</a>>
wrote:<o:p
class=""></o:p></p>
</blockquote>
</div>
<blockquote
style="margin-top:5.0pt;margin-bottom:5.0pt"
class="">
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><o:p
class=""></o:p></p>
<div class="">
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">Stephen
Hanson in
conversation
with Geoff
Hinton<o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">In the latest
episode of
this video
series for <a
href="https://urldefense.proofpoint.com/v2/url?u=http-3A__AIhub.org&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=xnFSVUARkfmiXtiTP_uXfFKv4uNEGgEeTluRFR7dnUpay2BM5EiLz-XYCkBNJLlL&s=eOtzMh8ILIH5EF7K20Ks4Fr27XfNV_F24bkj-SPk-2A&e="
target="_blank" moz-do-not-send="true" class=""> AIhub.org</a>, Stephen
Hanson talks
to Geoff
Hinton about
neural
networks,
backpropagation,
overparameterization, digit recognition, voxel cells, syntax and
semantics,
Winograd
sentences, and
more.<o:p
class=""></o:p></p>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto">You can watch
the
discussion,
and read the
transcript,
here:<br
class=""
clear="all">
<o:p class=""></o:p></p>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aihub.org_2022_02_02_what-2Dis-2Dai-2Dstephen-2Dhanson-2Din-2Dconversation-2Dwith-2Dgeoff-2Dhinton_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=OY_RYGrfxOqV7XeNJDHuzE--aEtmNRaEyQ0VJkqFCWw&e="
target="_blank" moz-do-not-send="true" class="">https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/</a><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-family:"Arial",sans-serif"
class="">About
AIhub: </span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-family:"Arial",sans-serif"
class="">AIhub
is a
non-profit
dedicated to
connecting the
AI community
to the public
by providing
free,
high-quality
information
through <a
href="https://urldefense.proofpoint.com/v2/url?u=http-3A__AIhub.org&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=xnFSVUARkfmiXtiTP_uXfFKv4uNEGgEeTluRFR7dnUpay2BM5EiLz-XYCkBNJLlL&s=eOtzMh8ILIH5EF7K20Ks4Fr27XfNV_F24bkj-SPk-2A&e="
target="_blank" moz-do-not-send="true" class=""> AIhub.org</a> (<a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aihub.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=IKFanqeMi73gOiS7yD-X_vRx_OqDAwv1Il5psrxnhIA&e="
target="_blank" moz-do-not-send="true" class="">https://aihub.org/</a>).
We help
researchers
publish the
latest AI
news,
summaries of
their work,
opinion
pieces,
tutorials and
more. We are
supported by
many leading
scientific
organizations
in AI, namely
<a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__aaai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=wBvjOWTzEkbfFAGNj9wOaiJlXMODmHNcoWO5JYHugS0&e="
target="_blank" moz-do-not-send="true" class=""> AAAI</a>, <a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__neurips.cc_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=3-lOHXyu8171pT_UE9hYWwK6ft4I-cvYkuX7shC00w0&e="
target="_blank" moz-do-not-send="true" class=""> NeurIPS</a>, <a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__icml.cc_imls_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=JJyjwIpPy9gtKrZzBMbW3sRMh3P3Kcw-SvtxG35EiP0&e="
target="_blank" moz-do-not-send="true" class=""> ICML</a>, <a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e="
target="_blank" moz-do-not-send="true" class=""> AIJ</a>/<a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e="
target="_blank" moz-do-not-send="true" class="">IJCAI</a>, <a
href="https://urldefense.proofpoint.com/v2/url?u=http-3A__sigai.acm.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=7rC6MJFaMqOms10EYDQwfnmX-zuVNhu9fz8cwUwiLGQ&e="
target="_blank" moz-do-not-send="true" class=""> ACM SIGAI</a>,
EurAI/AICOMM,
<a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__claire-2Dai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=66ZofDIhuDba6Fb0LhlMGD3XbBhU7ez7dc3HD5-pXec&e="
target="_blank" moz-do-not-send="true" class=""> CLAIRE</a> and <a
href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.robocup.org__&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=bBI6GRq--MHLpIIahwoVN8iyXXc7JAeH3kegNKcFJc0&e="
target="_blank" moz-do-not-send="true" class=""> RoboCup</a>.</span><o:p
class=""></o:p></p>
</div>
<div class="">
<p
class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span
style="font-family:"Arial",sans-serif"
class="">Twitter:
@aihuborg</span><o:p
class=""></o:p></p>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
<div
id="gmail-m_3776411903040420401DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2"
class="">
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
</div>
</blockquote>
</div>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"> <o:p
class=""></o:p></p>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</blockquote>
</div>
</div>
</blockquote>
</div>
<br class="">
</div>
</blockquote>
</body>
</html>