<div dir="ltr"><div>The deadline to submit your attention-related papers has been extended to **Oct 3**!<br></div><div><br></div><div class="gmail_quote"><div dir="ltr"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">On behalf of the co-organizers, we would like to invite you to submit your work to our NeurIPS workshop on “</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">All things Attention:</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Bridging Different Perspectives on Attention”</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">. The details of the workshop and submission instructions are as follows:</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><br></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The Thirty Sixth Conference on Neural Information Processing Systems (NeurIPS)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Dec 2, 2022</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">NeurIPS 2022 is a hybrid Conference</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><a href="https://attention-learning-workshop.github.io/" style="text-decoration-line:none" target="_blank"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://attention-learning-workshop.github.io/</span></a></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The All Things Attention workshop aims to foster connections across disparate academic communities that conceptualize "Attention" such as Neuroscience, Psychology, Machine Learning, and Human-Computer Interaction. 
Workshop topics of interest include (but are not limited to):

I. Relationships between biological and artificial attention
   A. What are the connections between different forms of attention in the human brain and present-day deep neural network architectures?
   B. Can the anatomy of human attention provide useful insights to researchers designing architectures for artificial systems?
   C. Given the same task and learning objective, do machines learn attention mechanisms that differ from those of humans?

II. Attention for reinforcement learning and decision making
   A. How have reinforcement learning agents leveraged attention in decision making?
   B. Do today's decision-making agents have implicit or explicit formalisms of attention?
   C. How can AI agents develop notions of attention without having them explicitly baked in?
   D. Can attention significantly help AI agents scale, e.g., through gains in sample efficiency and generalization?

III. Benefits and formulation of attention mechanisms for continual / lifelong learning
   A. How can continual learning agents optimize for retention of knowledge for tasks they have already learned?
   B. How can the amount of interference between different inputs be controlled via attention?
   C. How does the executive control of attention evolve with learning in humans?
   D. How can we study the development of attentional systems in infancy and childhood to better understand how attention can be learned?

IV. Attention as a tool for interpretation and explanation
   A. How have researchers leveraged attention as a visualization tool?
   B. What are the common approaches when using attention as a tool for interpretability in AI?
   C. What are the major bottlenecks and common pitfalls in leveraging attention as a key tool for explaining the decisions of AI agents?
   D. How can we do better?

V. The role of attention in human-computer interaction and human-robot interaction
   A. How do we detect aspects of human attention during interactions, from sensing to processing to representations?
   B. What systems benefit from human attention modeling, and how do they use these models?
   C. How can systems influence a user's attention, and what systems benefit from this capability?
style="margin-left:15px;list-style-type:upper-alpha;font-size:11pt;font-family:Arial;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">How can a system communicate or simulate its own attention (humanlike or algorithmic) in an interaction, and to what benefit?</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:upper-alpha;font-size:11pt;font-family:Arial;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">How do attention models affect different applications, like collaboration or assistance, in different domains, like autonomous vehicles and driver assistance systems, learning from demonstration, joint attention in collaborative tasks, social interaction, etc.?</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:upper-alpha;font-size:11pt;font-family:Arial;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">How should researchers thinking about attention in different biological and computational fields organize the collection of human gaze data sets, modeling gaze behaviors, and utilizing gaze information in various applications for knowledge transfer and cross-pollination of ideas?</span></p></li></ol><li dir="ltr" style="margin-left:15px;list-style-type:upper-roman;font-size:11pt;font-family:Arial;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">Attention mechanisms in Deep Neural Network (DNN) architectures</span></p></li><ol style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="margin-left:15px;list-style-type:upper-alpha;font-size:11pt;font-family:Arial;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">How does attention in DNN such as transformers relate to existing formalisms of attention in cogsci/psychology? 
   B. Do we have a concrete understanding of whether and how self-attention contributes to the vast success of recent models such as GPT-2, GPT-3, and DALL-E?
   C. Can our understanding of attention from other fields inform the progress we have achieved in recent breakthroughs?
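
Since the workshop deliberately spans communities, here is a minimal, self-contained sketch of the scaled dot-product self-attention that transformer papers refer to, written purely for illustration (the toy token count n, embedding size d, and random weights are our assumptions, not workshop material). The attention matrix A it returns is also the object that the interpretability questions in topic IV typically inspect:

```python
# Illustrative sketch only: single-head scaled dot-product self-attention.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Attend over token embeddings X (n x d) with projection matrices Wq, Wk, Wv."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n x n) pairwise similarities
    A = softmax(scores, axis=-1)              # row i: where token i attends
    return A @ V, A                           # attended outputs and the weights

rng = np.random.default_rng(0)
n, d = 4, 8                                   # hypothetical toy sizes
X = rng.normal(size=(n, d))                   # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)

# A[i, j] is the weight token i places on token j; each row sums to 1,
# which is why attention maps are often visualized as heatmaps.
print(np.round(A, 2))
```

Real transformer layers add multiple heads, masking, and projections trained end to end; the sketch above only fixes the shared terminology.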

SUBMISSION INSTRUCTIONS
We invite you to submit papers (up to 9 pages for long papers and up to 5 pages for short papers, excluding references and appendix) in the NeurIPS 2022 format (https://neurips.cc/Conferences/2022/PaperInformation/StyleFiles). All submissions will be managed through OpenReview (submission website: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/Attention). The final submission, including the main paper, references, and appendix, should not exceed 12 pages. Supplementary materials are optional, to be used only for extra videos/code/data/figures, and should be uploaded separately on the submission website.

The review process is double-blind, so submissions should be anonymized. Accepted work will be presented as posters during the workshop, and select contributions will be invited to give spotlight talks. Each accepted work entering the poster sessions will have an accompanying pre-recorded 5-minute video. Please note that at least one coauthor of each accepted paper is expected to have a NeurIPS conference registration and to participate in one of the poster sessions.

Submissions will be evaluated based on novelty, rigor, and relevance to the theme of the workshop. Both empirical and theoretical contributions are welcome. Submissions should not have previously appeared in a journal or conference (including papers accepted to NeurIPS 2022) and should not be submitted to another NeurIPS workshop. Submissions must adhere to the NeurIPS Code of Conduct. The focus of the work should relate to the topics listed above. There will be no proceedings for this workshop; however, authors can opt to have their abstracts/papers posted on the workshop website.
style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">Benefits and formulation of attention mechanisms for continual / lifelong learning</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">Attention as a tool for interpretation and explanation</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">The role of attention in human-computer interaction and human-robot interaction</span></p></li><li dir="ltr" style="margin-left:15px;list-style-type:disc;font-size:11pt;font-family:Arial;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" role="presentation" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline">Attention mechanisms in Deep Neural Network (DNN) architectures</span></p></li></ul><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(33,33,33);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Please submit your papers via the following link: </span><a href="https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/Attention" style="text-decoration-line:none" target="_blank"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">submission website</span></a></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(26,115,232);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">IMPORTANT DATES</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">*</span><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> Submission deadline: Oct 3 <strike>Sep 15, 2022</strike> at 11:59PM (AOE) </span><a 
href="https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/Attention" style="text-decoration-line:none" target="_blank"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">submission website</span></a><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">* Camera-ready (final) paper deadline: Nov 25, 2022 at 11:59PM (Anywhere on earth)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">* Workshop: Dec 2, 2022</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(26,115,232);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">CONFIRMED SPEAKERS & PANELISTS</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Speakers:</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Pieter Roelfsema (Netherlands Institute for Neuroscience)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">James Whittington (University of Oxford)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Ida Momennejad (Microsoft Research)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Erin Grant (UC Berkeley)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Henny Admoni (Carnegie Mellon University)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Tobias Gerstenberg (Stanford University)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Vidhya Navalpakkam (Google Research)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Shalini De Mello (NVIDIA)</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Panelists:</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">David Ha (Google Brain)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Pieter Roelfsema (Netherlands Institute for Neuroscience)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">James Whittington (University of Oxford)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Ida Momennejad (Microsoft Research)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Henny Admoni (Carnegie Mellon University)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Tobias Gerstenberg (Stanford University)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Shalini De Mello (NVIDIA)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Ramakrishna Vedantam (Meta AI Research)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Megan deBettencourt (University of Chicago)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Cyril Zhang (Microsoft Research)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><br></span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(26,115,232);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">ORGANIZERS</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Akanksha Saran (Microsoft Research, NYC)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Khimya Khetarpal (McGill University, Mila Montreal)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Reuben Aronson (Carnegie Mellon University)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Abhijat Biswas (Carnegie Mellon University)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Ruohan Zhang (Stanford University)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Grace Lindsay (University College London, New York University)</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Scott Neikum (University of Texas at Austin, University of Massachusetts)</span></p><br><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(26,115,232);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">REGISTRATION</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Participants should refer to the NeurIPS 2022 website (</span><a href="https://neurips.cc/Conferences/2022/Dates" style="text-decoration-line:none" target="_blank"><span 
style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://neurips.cc/Conferences/2022/Dates</span></a><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">) for information on how to register.</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(26,115,232);font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">CONTACT</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Please reach out to us at </span><a href="mailto:attention-workshop@googlegroups.com" style="text-decoration-line:none" target="_blank"><span style="font-size:11pt;font-family:Arial;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">attention-workshop@googlegroups.com</span></a><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;font-family:Arial;color:rgb(38,50,56);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">if you have any questions. We look forward to receiving your submissions!</span></p><br><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Kind Regards,</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Workshop Organizers</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Arial;color:rgb(0,0,0);font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">All things Attention- Bridging different perspectives on attention</span></p></div>
</div><br clear="all"><br><br></div>