<div dir="ltr"><i style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif"><font size="1">*apologies if you have received multiple copies of this email</font></i><br style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">****************************************************************<br><font size="5">ICMI 2022</font><br>24th ACM International Conference on Multimodal Interaction<br></div><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><a href="https://icmi.acm.org/2022/" rel="nofollow" target="_blank" style="color:rgb(26,115,232);text-decoration-line:none">https://icmi.acm.org/2022/</a><br>7-11 Nov 2022, Bengaluru, India<br>*****************************************************************</div><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><font size="5" style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif">GENEA Challenge</font><br style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><span style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px">The GENEA Challenge 2022: Full-body speech-driven gesture generation</span><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><a href="https://icmi.acm.org/2022/grand-challenges/" rel="nofollow" target="_blank" style="color:rgb(26,115,232);text-decoration-line:none">https://icmi.acm.org/2022/grand-challenges/</a><br></div><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><br></div><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><div>The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Grand Challenge 2022 aims at bringing together researchers that use different methods for non-verbal-behaviour generation and evaluation, and hopes to stimulate the discussions on how to improve both the generation methods and the evaluation of the results.<br></div><div><br></div></div><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><div>The challenge topic is speech-driven gesture generation. Participants are provided a large, common dataset of speech (audio+aligned text) and 3D motion to develop their systems, and then use these systems to generate motion on given test inputs. The generated motion clips are rendered onto a common virtual agent and evaluated for aspects such as motion quality and appropriateness in a crowdsourced user study. More details will be available when the dataset is released. 
You can refer to the previous challenge here: https://genea-workshop.github.io/2020/#gesture-generation-challenge (note that this year's challenge dataset is different from the previous one).

More details about the challenge can be found here: https://genea-workshop.github.io/2022/challenge/

----------------------------------------------------------------------------------------------------------------------------------------------

Call for Workshop Papers
https://icmi.acm.org/2022/workshops/

The 24th International Conference on Multimodal Interaction (ICMI 2022) invites papers for its focused workshops. Visit the individual workshop webpages below to learn more.

- Workshop on Multimodal Affect and Aesthetic Experience
  https://icmi.acm.org/2022/workshops/#1
- Voice Assistant Systems in Team Interactions – Implications, Best Practice, Applications, and Future Perspectives (VASTI'22)
  https://icmi.acm.org/2022/workshops/#2
- The GENEA Workshop 2022: The 3rd Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
  https://icmi.acm.org/2022/workshops/#3
- 2nd International Workshop on Deep Video Understanding
  https://icmi.acm.org/2022/workshops/#4
- MSECP-Wild: The 4th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data In-the-Wild
  https://icmi.acm.org/2022/workshops/#5
- 3rd Workshop on Social Affective Multimodal Interaction for Health (SAMIH)
  https://icmi.acm.org/2022/workshops/#6

----------------------------------------------------------------------------------------------------------------------------------------------