Connectionists: **Submission deadline extended to Sept. 29** NeurIPS 2022 Gaze Meets ML Workshop

Alex Karargyris akarargyris at gmail.com
Sun Sep 18 04:05:23 EDT 2022


Dear all,

We have some important updates related to the NeurIPS 2022 Gaze Meets ML
Workshop:
1) The submission deadline is extended by one week (i.e., Thursday, September
29th, Anywhere on Earth <https://time.is/Anywhere_on_Earth>).
2) We are excited and honored to have Prof. Jürgen Schmidhuber
<https://people.idsia.ch/~juergen/> give the opening remarks at the
workshop!

Please see the updated announcement below. Looking forward to meeting
everyone in New Orleans!

Sincerely,
Alexandros Karargyris on behalf of the organizing committee

**********************************************************************************

The 2022 Gaze Meets ML workshop in conjunction with NeurIPS 2022

**********************************************************************************

*Webpage: https://gaze-meets-ml.github.io/*

*Twitter Handle: https://twitter.com/Gaze_Meets_ML*

*Submission site: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/GMML*

*Submission deadline: September 29th, 2022 (extended from September 22nd, 2022)*

*Date: December 3rd, 2022*

*Location: New Orleans Convention Center, New Orleans, LA*



*!!!NEWS!!!*

*- Submission deadline extended to September 29th, 2022*

*- Prof. Jürgen Schmidhuber will give the opening remarks at the workshop*



*** Overview ***

Eye gaze has proven to be a cost-efficient way to collect large-scale
physiological data that can reveal underlying human attentional
patterns in real-life workflows, and it has thus long been explored as a
signal to directly measure human-related cognition in various domains.
Physiological data (including but not limited to eye gaze) offer new
perception capabilities that could be used in several ML domains, e.g.,
egocentric perception, embodied AI, and NLP. They can help infer human
perception, intentions, beliefs, goals, and other cognitive properties that
are much needed for human-AI interaction and agent coordination. In
addition, large collections of eye-tracking data have enabled data-driven
modeling of human visual attention mechanisms, for both saliency and
scanpath prediction, with twofold advantages: from the neuroscientific
perspective, a better understanding of biological mechanisms; from the AI
perspective, agents equipped with the ability to mimic or predict human
behavior, improving interpretability and interaction.



With the emergence of immersive technologies, there is now more than ever a
need for experts from various backgrounds (e.g., the machine learning,
vision, and neuroscience communities) to share expertise and contribute to
a deeper understanding of the intricacies of cost-efficient human
supervision signals (e.g., eye gaze) and their utilization in bridging
human cognition and AI in machine learning research and development. The
goal of this workshop is to bring together an active research community to
collectively drive progress in defining and addressing core problems in
gaze-assisted machine learning.







*** Call for Papers ***

We welcome submissions that present aspects of eye gaze in relation to
cognitive science, psychophysiology, and computer science; that propose
methods for integrating eye gaze into machine learning; and that introduce
methods and models utilizing eye-gaze technology in application domains
such as radiology, AR/VR, and autonomous driving.

Topics of interest include but are not limited to the following:

●      Understanding the neuroscience of eye-gaze and perception.

●      State-of-the-art in incorporating machine learning and eye-tracking.

●      Data annotation and ML supervision with eye-gaze.

●      Attention mechanisms and their correlation with eye-gaze.

●      Methods for gaze estimation and prediction using machine learning.

●      Unsupervised ML using eye gaze information for feature
importance/selection.

●      Understanding human intention and goal inference.

●      Using saccadic vision for ML applications.

●      Use of gaze for human-AI interaction and agent coordination in
multi-agent environments.

●      Eye gaze used for AI, e.g., NLP, Computer Vision, RL, Explainable
AI, Embodied AI, Trustworthy AI.

●      Gaze applications in cognitive psychology, radiology, neuroscience,
AR/VR, autonomous cars, privacy, etc.



*** Submission Guidelines ***

Submissions must be written in English and submitted in PDF format. Each
submitted paper must be no longer than nine (9) pages, excluding appendices
and references. Please refer to the NeurIPS 2022 formatting instructions
<https://nips.cc/Conferences/2022/CallForPapers> for details regarding
formatting, templates, and policies. Submissions will be peer-reviewed
by the program committee, and accepted papers will be presented as lightning
talks during the workshop.



Submit your paper at
https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/GMML



*** Awards and Funding ***

Award prizes will be offered for best papers, along with travel awards for
selected student authors, with a focus on increasing diversity and inclusion.



*** Important Dates for Workshop Paper Submission ***

●      Paper submission deadline: September 29, 2022 (extended from September 22, 2022)

●      Reviewing starts: September 30, 2022 (previously September 26, 2022)

●      Reviewing ends: October 10, 2022

●      Notification of acceptance: October 14, 2022

●      Workshop: December 3, 2022 (in person)



*** Organizing Committee ***

Ismini Lourentzou <https://isminoula.github.io/> (Virginia Tech)

Joy Tzung-yu Wu
<https://scholar.google.com/citations?user=03O8mIMAAAAJ&hl=en> (Stanford,
IBM Research)

Satyananda Kashyap
<https://researcher.watson.ibm.com/researcher/view.php?person=ibm-Satyananda.Kashyap>
(IBM Research)

Alexandros Karargyris <https://www.linkedin.com/in/alexandroskarargyris/>
(IHU Strasbourg)

Leo Anthony Celi <https://imes.mit.edu/research-staff-prof/leo-anthony-celi/>
(MIT)

Ban Kawas
<https://www.re-work.co/events/trusted-ai-summit-2022/speakers/ban-kawas>
(Meta, Reality Labs Research)

Sachin Talathi <https://www.linkedin.com/in/sachin-t-0b8b608/> (Meta,
Reality Labs Research)



*** Keynote Speaker ***

Since age 15 or so, the main goal of Professor Jürgen Schmidhuber
<https://people.idsia.ch/~juergen/> has been to build a self-improving
Artificial Intelligence (AI) smarter than himself, then retire. His lab's Deep
Learning Neural Networks
<https://people.idsia.ch/~juergen/deeplearning.html> (NNs) based on ideas
published in the "Annus Mirabilis" 1990-1991
<https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html>
have
revolutionised machine learning and AI. In 2009, the CTC-trained
<https://sferics.idsia.ch/pub/juergen/icml2006.pdf> Long Short-Term Memory
(LSTM)
<https://people.idsia.ch/~juergen/deep-learning-miraculous-year-1990-1991.html#Sec.%204>
of
his team was the first recurrent NN to win international pattern
recognition competitions <https://people.idsia.ch/~juergen/handwriting.html>.
In 2010, his lab's fast and deep feedforward NNs on GPUs greatly
outperformed previous methods
<https://people.idsia.ch/~juergen/2010-breakthrough-supervised-deep-learning.html>,
without using any unsupervised pre-training, a popular deep learning
strategy that he pioneered in 1991
<https://people.idsia.ch/~juergen/very-deep-learning-1991.html>. In 2011, the
DanNet of his team
<https://people.idsia.ch/~juergen/DanNet-triggers-deep-CNN-revolution-2011.html>
was
the first feedforward NN to win computer vision contests
<https://people.idsia.ch/~juergen/computer-vision-contests-won-by-gpu-cnns.html>,
achieving superhuman performance
<https://people.idsia.ch/~juergen/superhumanpatternrecognition.html>. In
2012, they had the first deep NN to win a medical imaging contest
<https://people.idsia.ch/~juergen/first-time-deep-learning-won-medical-imaging-contest-september-2012.html>
(on
cancer detection). This deep learning revolution quickly spread from Europe
to North America and Asia, and attracted enormous interest from industry. By
the mid-2010s
<https://people.idsia.ch/~juergen/2010s-our-decade-of-deep-learning.html>,
his lab's NNs were on 3 billion devices, and used billions of times per day
through users of the world's most valuable public companies
<https://people.idsia.ch/~juergen/impact-on-most-valuable-companies.html>,
e.g., for greatly improved speech recognition on all Android smartphones,
greatly improved machine translation through Google Translate and Facebook
(over 4 billion LSTM-based translations per day), Apple's Siri and
QuickType on all iPhones, the answers of Amazon's Alexa, and numerous other
applications. In May 2015, his team published the Highway Net
<https://people.idsia.ch/~juergen/highway-networks.html>, the first working
really deep feedforward NN with hundreds of layers—its open-gated version
called ResNet (Dec 2015) has become the most cited NN of the 21st century,
LSTM the most cited NN of the 20th
<https://people.idsia.ch/~juergen/most-cited-neural-nets.html> (Bloomberg
called LSTM the *arguably most commercial AI achievement*). His lab's NNs
are now heavily used in healthcare and medicine
<https://people.idsia.ch/~juergen/2010s-our-decade-of-deep-learning.html>,
helping to make human lives longer and healthier. His research group also
established the fields of mathematically rigorous universal AI
<https://people.idsia.ch/~juergen/unilearn.html> and recursive
self-improvement <https://people.idsia.ch/~juergen/goedelmachine.html>
in metalearning
machines that learn to learn
<https://people.idsia.ch/~juergen/metalearning.html> (since 1987). In 1990,
he introduced unsupervised generative adversarial neural networks that
fight each other in a minimax game
<https://people.idsia.ch/~juergen/artificial-curiosity-since-1990.html#sec1> to
implement artificial curiosity
<https://people.idsia.ch/~juergen/interest.html> (the famous GANs are
instances thereof). In 1991, he introduced neural fast weight programmers
<https://people.idsia.ch/~juergen/fast-weight-programmer-1991-transformer.html>
formally
equivalent to what's now called Transformers with linearized
self-attention. His formal theory of creativity & curiosity & fun
<https://people.idsia.ch/~juergen/creativity.html> explains art, science,
music, and humor. He also generalized algorithmic information theory
<https://people.idsia.ch/~juergen/kolmogorov.html> and the many-worlds
theory of physics <https://people.idsia.ch/~juergen/computeruniverse.html>,
and introduced the concept of Low-Complexity Art
<https://people.idsia.ch/~juergen/creativity.html>, the information age's
extreme form of minimal art. He is the recipient of numerous awards, the author
of about 400 peer-reviewed papers, and Chief Scientist of the company NNAISENSE
<https://nnaisense.com/>, which aims at building the first practical
general-purpose AI. He is a frequent keynote speaker and advises various
governments on AI strategies.





*** Contact ***

Organizing Committee: gaze.neurips at gmail.com

