<div dir="ltr"><font face="Sans Serif" style="color:rgba(0,0,0,0.87);font-size:14px">Dear all, </font><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="color:rgba(0,0,0,0.87);font-size:14px"><font face="Sans Serif">We have some important updates related to </font><span style="font-size:small;color:rgb(34,34,34)">NeurIPS 2022 Gaze Meets ML Workshop:</span></div><div style="color:rgba(0,0,0,0.87);font-size:14px"><span style="font-size:small;color:rgb(34,34,34)">1) The submission deadline is extended by 1 week (i.e Thursday, September 29th - </span><span style="font-size:small;color:rgb(34,34,34)"><a href="https://time.is/Anywhere_on_Earth">Anywhere on Earth</a>)</span></div><div style="color:rgba(0,0,0,0.87);font-size:14px"><span style="font-size:small;color:rgb(34,34,34)">2) We ar</span><font face="Sans Serif">e are excited and honored to have </font><span style="font-size:small;color:rgb(34,34,34)"><a href="https://people.idsia.ch/~juergen/">Prof. Jürgen Schmidhuber</a> give the opening remarks at the workshop! </span></div><div style="color:rgba(0,0,0,0.87);font-size:14px"><span style="font-size:small;color:rgb(34,34,34)"><br></span></div><div style="color:rgba(0,0,0,0.87);font-size:14px"><span style="font-size:small;color:rgb(34,34,34)">Please see the updated announcement below. Looking forward to meeting everyone in New Orleans!</span></div><div style="color:rgba(0,0,0,0.87);font-size:14px"><span style="font-size:small;color:rgb(34,34,34)"><br></span></div><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><font face="Sans Serif">Sincerely,</font></div><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><font face="Sans Serif">Alexandros Karargyris on behalf of the organizing committee</font></div><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><font face="Sans Serif"><br></font></div><p dir="ltr" style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"></p><p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><u><span lang="EN">********************************************************************************</span></u></b></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">The 2022 Gaze Meets ML workshop in conjunction
with NeurIPS 2022</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><u><span lang="EN">*****************************************************************<b>***************</b></span></u></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">Webpage:
</span></b><span lang="EN"><a href="https://gaze-meets-ml.github.io/gaze-meets-ml/" style="color:blue"><b><span style="color:rgb(17,85,204);text-decoration-line:none">https://gaze-meets-ml.github.io/</span></b></a><b><span style="color:rgb(14,16,26)"></span></b></span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN" style="color:rgb(14,16,26)">Twitter Handle: </span></b><span lang="EN"><a href="https://twitter.com/Gaze_Meets_ML" style="color:blue"><b><span style="color:rgb(17,85,204)">https://twitter.com/Gaze_Meets_ML</span></b></a><b><span style="color:rgb(14,16,26)"> </span></b></span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN" style="color:rgb(14,16,26)">Submission site:</span></b><span lang="EN"><a href="https://cmt3.research.microsoft.com/OpenEDS2019" style="color:blue"><b><span style="color:rgb(14,16,26);text-decoration-line:none"> </span></b></a><a href="https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/GMML" style="color:blue"><b><span style="color:rgb(17,85,204);text-decoration-line:none">https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/GMML</span></b></a><b><span style="color:rgb(14,16,26)"> </span></b></span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">Submission
deadline: </span></b><b><s><span lang="EN" style="font-size:10.5pt;line-height:115%">September 22nd 2022</span></s></b><b><span lang="EN" style="font-size:10.5pt;line-height:115%"> </span><span lang="EN" style="color:red">September 29<sup>th</sup>,
2022</span></b></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">Date:
December 3rd, 2022</span></b></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">Location:
New Orleans Convention Center, New Orleans, LA</span></b></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN" style="font-size:14pt;line-height:115%"> </span></b></p>

<p class="MsoNormal" align="center" style="text-align:center;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN" style="font-size:14pt;line-height:115%;color:red">!!!NEWS!!!</span></b></p>

<p class="MsoNormal" align="center" style="text-align:center;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN" style="font-size:14pt;line-height:115%;color:red">-Submission
deadline extended to September 29<sup>th</sup>, 2022</span></b></p>

<p class="MsoNormal" align="center" style="text-align:center;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN" style="font-size:14pt;line-height:115%;color:red">-Prof. Jürgen Schmidhuber will give the opening remarks at the
workshop</span></b></p><div style="text-align:center"><img src="cid:ii_l871o7sh0" alt="jurgen.jpeg" width="216" height="122" style="margin-right: 0px;"></div>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">** <b>Overview</b> **</span></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">Eye gaze has proven
to be a cost-efficient way to collect large-scale physiological data that can
reveal the underlying human attentional patterns in real life workflows, and
thus has long been explored as a signal to directly measure human-related cognition
in various domains  Physiological data
(including but not limited to eye gaze) offer new perception capabilities,
which could be used in several ML domains, e.g., egocentric perception,
embodiedAI, NLP, etc. They can help infer human perception, intentions,
beliefs, goals and other cognition properties that are much needed for human-AI
interactions and agent coordination. In addition, large collections of
eye-tracking data have enabled data-driven modeling of human visual attention
mechanisms, both for saliency or scanpath prediction, with twofold advantages:
from the neuroscientific perspective to understand biological mechanisms
better, from the AI perspective to equip agents with the ability to mimic or
predict human behavior and improve interpretability and interactions.</span></p>
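To make the kind of gaze-assisted supervision mentioned above concrete, here is a minimal, hypothetical sketch in Python/PyTorch: recorded fixation points are rendered into a normalized gaze heatmap, which then serves as a soft target for a saliency model via a KL-divergence loss. The function names, the specific loss, and the dummy data are illustrative assumptions for this announcement, not a method prescribed by the workshop.

import torch
import torch.nn.functional as F

def fixations_to_heatmap(fixations, height, width, sigma=10.0):
    # Render (x, y) fixation points into a normalized gaze heatmap (hypothetical helper).
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)
    heatmap = torch.zeros(height, width)
    for x, y in fixations:
        heatmap += torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heatmap / heatmap.sum()  # probability map over pixels

def gaze_supervision_loss(predicted_logits, gaze_heatmap):
    # KL divergence between the model's saliency distribution and the gaze-derived target.
    log_pred = F.log_softmax(predicted_logits.flatten(), dim=0)
    return F.kl_div(log_pred, gaze_heatmap.flatten(), reduction="sum")

# Dummy usage: in a real setup, fixations would come from an eye tracker and
# predicted_logits from a saliency head (CNN/transformer) over the image.
fixations = [(32.0, 24.0), (40.0, 30.0), (100.0, 80.0)]
target = fixations_to_heatmap(fixations, height=128, width=160)
predicted_logits = torch.randn(128, 160)
loss = gaze_supervision_loss(predicted_logits, target)
print(float(loss))

Many of the topics listed in the call for papers below explore richer variants of exactly this coupling between recorded gaze and model attention.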

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">With the emergence
of immersive technologies, now more than any time there is a need for experts
of various backgrounds (e.g., machine learning, vision, and neuroscience
communities) to share expertise and contribute to a deeper understanding of the
intricacies of cost-efficient human supervision signals (e.g., eye-gaze) and
their utilization towards bridging human cognition and AI in machine learning
research and development. The goal of this workshop is to bring together an
active research community to collectively drive progress in defining and
addressing core problems in gaze-assisted machine learning.</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN"> </span></b></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN"> </span></b></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">** Call
for Papers </span></b><span lang="EN">**</span></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">We welcome
submissions that present aspects of eye-gaze in regards to cognitive science,
psychophysiology and computer science, propose methods on integrating eye gaze
into machine learning, and application domains from radiology, AR/VR,
autonomous driving, etc. that introduce methods and models utilizing eye gaze
technology in their respective domains. </span></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">Topics of interest
include but are not limited to the following: </span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Understanding
the neuroscience of eye-gaze and perception. </span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">State-of-the-art
in incorporating machine learning and eye-tracking.</span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Data
annotation and ML supervision with eye-gaze.</span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Attention
mechanisms and their correlation with eye-gaze.</span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Methods for
gaze estimation and prediction using machine learning. </span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Unsupervised
ML using eye gaze information for feature importance/selection.</span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Understanding
human intention and goal inference. </span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Using
saccadic vision for ML applications. </span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Use of gaze
for human-AI interaction and agent coordination in multi-agent environments.</span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Eye gaze used
for AI, e.g., NLP, Computer Vision, RL, Explainable AI, Embodied AI,
Trustworthy AI.</span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;text-align:justify;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Gaze
applications in cognitive psychology, radiology, neuroscience, AR/VR,
autonomous cars, privacy, etc.</span></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">**
Submission Guidelines </span></b><span lang="EN">**</span></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">Submissions must be
written in English and must be sent in PDF format. Each submitted paper must be
no longer than<span style="color:rgb(14,16,26)"> nine (9) pages, e</span>xcluding
appendices and references. Please refer to the<a href="https://nips.cc/Conferences/2022/CallForPapers" style="color:blue"><span style="color:rgb(17,85,204)">
NeurIPS2022 formatting instructions</span></a> for instructions regarding
formatting, templates, and policies. The submissions will be peer-reviewed by
the program committee<span style="color:rgb(14,16,26)"> and accepted papers will be
presented as lightning talks during the workshop. </span></span></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">Submit your paper
at <a href="https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/GMML" style="color:blue"><span style="color:rgb(17,85,204)">https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/GMML</span></a>
<b><span style="color:red"> </span></b></span></p>

<p class="MsoNormal" style="line-height:137%;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0in;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN"> </span></b></p>

<p class="MsoNormal" style="line-height:137%;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;margin:0in;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN" style="color:black">** Awards and Funding **</span><span lang="EN"></span></b></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">Award prizes for best papers and travel awards
for selected student authors, with a focus on increasing diversity and
inclusion.</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN"> </span></b></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">**
Important dates for Workshop paper submission</span></b><span lang="EN"> **</span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><s><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span></s><span dir="LTR"></span><span lang="EN">Paper
submission deadline: <s>September 22, 2022</s> <span style="color:red">September
29<sup>th</sup>, 2022<s></s></span></span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN" style="color:rgb(36,41,46)">Reviewing starts: <s>September 26, 2022</s>, </span><span lang="EN" style="color:red">September 30, 2022</span><span lang="EN"></span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN" style="color:rgb(36,41,46)">Reviewing ends: October 10, 2022</span><span lang="EN"></span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Notification
of acceptance: October 14, 2022</span></p>

<p class="MsoNormal" style="margin:0in 0in 0in 0.5in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">●<span style="font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:7pt;line-height:normal;font-family:"Times New Roman"">     
</span></span><span dir="LTR"></span><span lang="EN">Workshop:
December 3, 2022 (in person)</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">**
Organizing Committee </span></b><span lang="EN">**</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"><a href="https://isminoula.github.io/" style="color:blue"><span style="color:rgb(17,85,204)">Ismini Lourentzou</span></a> (Virginia Tech)</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"><a href="https://scholar.google.com/citations?user=03O8mIMAAAAJ&hl=en" style="color:blue"><span style="color:rgb(17,85,204)">Joy Tzung-yu Wu</span></a> (Stanford, IBM Research)</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"><a href="https://researcher.watson.ibm.com/researcher/view.php?person=ibm-Satyananda.Kashyap" style="color:blue"><span style="color:rgb(17,85,204)">Satyananda Kashyap</span></a> (IBM Research)</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"><a href="https://www.linkedin.com/in/alexandroskarargyris/" style="color:blue"><span style="color:rgb(17,85,204)">Alexandros Karargyris</span></a> (IHU Strasbourg)</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"><a href="https://imes.mit.edu/research-staff-prof/leo-anthony-celi/" style="color:blue"><span style="color:rgb(17,85,204)">Leo Antony Celi</span></a> (MIT)</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"><a href="https://www.re-work.co/events/trusted-ai-summit-2022/speakers/ban-kawas" style="color:blue"><span style="color:rgb(17,85,204)">Ban Kawas</span></a> (Meta, Reality Labs Research)</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"><a href="https://www.linkedin.com/in/sachin-t-0b8b608/" style="color:blue"><span style="color:rgb(17,85,204)">Sachin
Talathi</span></a> (Meta, Reality Labs Research)</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">** Keynote Speaker**</span></b></p>

<p class="MsoNormal" style="text-align:justify;margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN" style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Since age 15 or so, the main goal of professor </span><span lang="EN"><a href="https://people.idsia.ch/~juergen/" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Jürgen Schmidhuber</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> has been to build a self-improving
Artificial Intelligence (AI) smarter than himself, then retire. His lab's </span><a href="https://people.idsia.ch/~juergen/deeplearning.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Deep Learning Neural Networks</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> (NNs) based on ideas published in
the </span><a href="https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">"Annus
Mirabilis" 1990-1991</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> have
revolutionised machine learning and AI. In 2009, the </span><a href="https://sferics.idsia.ch/pub/juergen/icml2006.pdf" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">CTC-trained</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> </span><a href="https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%204" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Long Short-Term
Memory (LSTM)</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> of
his team was </span><a href="https://people.idsia.ch/~juergen/handwriting.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">the first recurrent NN to win
international pattern recognition competitions</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">. In 2010, his lab's </span><a href="https://people.idsia.ch/~juergen/2010-breakthrough-supervised-deep-learning.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">fast and deep
feedforward NNs on GPUs greatly outperformed previous methods</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">, without using any </span><a href="https://people.idsia.ch/~juergen/very-deep-learning-1991.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">unsupervised
pre-training, a popular deep learning strategy that he pioneered in 1991</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">. In 2011, </span><a href="https://people.idsia.ch/~juergen/DanNet-triggers-deep-CNN-revolution-2011.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">the DanNet of his
team</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> was the first
feedforward NN to win </span><a href="https://people.idsia.ch/~juergen/computer-vision-contests-won-by-gpu-cnns.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">computer vision
contests</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">,
achieving </span><a href="https://people.idsia.ch/~juergen/superhumanpatternrecognition.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">superhuman
performance</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">. In 2012,
they had the </span><a href="https://people.idsia.ch/~juergen/first-time-deep-learning-won-medical-imaging-contest-september-2012.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">first deep NN to
win a medical imaging contest</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> (on cancer detection). This deep learning revolution quickly
spread from Europe to North America and Asia, and attracted enormous interest
from industry. </span><a href="https://people.idsia.ch/~juergen/2010s-our-decade-of-deep-learning.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">By the mid 2010s</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">, his lab's NNs were </span><a href="https://people.idsia.ch/~juergen/impact-on-most-valuable-companies.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">on 3 billion
devices, and used billions of times per day through users of the world's most
valuable public companies</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">,
e.g., for greatly improved speech recognition on all Android smartphones,
greatly improved machine translation through Google Translate and Facebook
(over 4 billion LSTM-based translations per day), Apple's Siri and Quicktype on
all iPhones, the answers of Amazon's Alexa, and numerous other applications. In
May 2015, his team published the </span><a href="https://people.idsia.ch/~juergen/highway-networks.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Highway Net</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">, the first working really deep
feedforward NN with hundreds of layers—its open-gated version called ResNet
(Dec 2015) has become </span><a href="https://people.idsia.ch/~juergen/most-cited-neural-nets.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">the most cited NN
of the 21st century, LSTM the most cited NN of the 20th</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> (Bloomberg called LSTM the <em>arguably most commercial AI achievement</em>).
His lab's NNs are now </span><a href="https://people.idsia.ch/~juergen/2010s-our-decade-of-deep-learning.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">heavily used in
healthcare and medicine</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">,
helping to make human lives longer and healthier. His research group also
established the fields of </span><a href="https://people.idsia.ch/~juergen/unilearn.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">mathematically rigorous universal AI</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> and </span><a href="https://people.idsia.ch/~juergen/goedelmachine.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">recursive self-improvement</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> in </span><a href="https://people.idsia.ch/~juergen/metalearning.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">metalearning machines that learn to
learn</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> (since
1987). In 1990, he introduced </span><a href="https://people.idsia.ch/~juergen/artificial-curiosity-since-1990.html#sec1" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">unsupervised
generative adversarial neural networks that fight each other in a minimax game</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> to implement </span><a href="https://people.idsia.ch/~juergen/interest.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">artificial curiosity</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> (the famous GANs are instances
thereof). In 1991, he introduced </span><a href="https://people.idsia.ch/~juergen/fast-weight-programmer-1991-transformer.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">neural fast weight
programmers</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> formally
equivalent to what's now called Transformers with linearized
self-attention. His </span><a href="https://people.idsia.ch/~juergen/creativity.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">formal theory of creativity &
curiosity & fun</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> explains
art, science, music, and humor. He also </span><a href="https://people.idsia.ch/~juergen/kolmogorov.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">generalized algorithmic information
theory</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"> and
the </span><a href="https://people.idsia.ch/~juergen/computeruniverse.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">many-worlds theory
of physics</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">, and
introduced the concept of </span><a href="https://people.idsia.ch/~juergen/creativity.html" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Low-Complexity Art</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">, the information age's extreme form of
minimal art. He is recipient of numerous awards, author of about 400
peer-reviewed papers, and Chief Scientist of the company </span><a href="https://nnaisense.com/" target="_blank" style="color:blue"><span style="color:rgb(17,85,204);background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">NNAISENSE</span></a><span style="background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">, which aims at building the first practical general purpose AI. He is a
frequent keynote speaker, and advising various governments on AI strategies.</span></span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><b><span lang="EN">**
Contact</span></b><span lang="EN"> **</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN">Organizing Committee <a href="mailto:gaze.neurips@gmail.com" style="color:blue"><span style="color:rgb(17,85,204)">gaze.neurips@gmail.com</span></a>
</span></p>

<p class="MsoNormal" style="margin:0in;line-height:115%;font-size:11pt;font-family:Arial,sans-serif"><span lang="EN"> </span></p></div>