<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>ICCV 2021 Understanding Social Behavior in Dyadic and Small Group Interactions Workshop &amp; Challenge</title>
</head>
<body>
<div name="messageBodySection">
<div style="text-align:justify;font-size: 10pt"><span style="background-color:transparent;font-family:Arial;font-size: 10pt">We cordially invite you to participate in our </span><strong style="background-color:transparent;font-family:Arial;font-size: 10pt"><em>ICCV’2021 Understanding Social Behavior in Dyadic and Small Group Interactions Workshop &amp; Challenge</em></strong></div>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<div style="text-align:center;font-size: 14pt"><strong style="background-color:transparent;font-family:Arial;font-size: 14pt"><em>Workshop </em></strong><span style="color:#000000;font-family:Arial;font-size: 14pt">description</span></div>
<div style="text-align:center;font-size: 10pt"><a style="background-color:transparent;font-family:Arial;font-size: 10pt" href="http://chalearnlap.cvc.uab.es/workshop/44/description/" target="_blank"><em>http://chalearnlap.cvc.uab.es/workshop/44/description/</em></a><em style="background-color:transparent;font-family:Arial;font-size: 10pt"> </em></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Human interaction has been a central topic in psychology and social sciences, aiming at explaining the complex underlying mechanisms of communication with respect to cognitive, affective and behavioral perspectives. From a computational point of view, research in dyadic and small group interactions enables the development of automatic approaches for detection, understanding, modeling and synthesis of individual and interpersonal social signals and dynamics. Many human-centered applications for good (e.g., early diagnosis and intervention, augmented telepresence and personalized agents) depend on devising solutions for such tasks. </span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Verbal and nonverbal communication channels are used in dyadic and small group interactions to convey our goals and intentions while building a common ground. During interactions, people influence each other based on the cues they perceive. However, the way we perceive, interpret, react, and adapt to them depends on a myriad of factors (e.g., our personal characteristics, either stable or transient; the relationship and shared history between individuals; the characteristics of the situation and task at hand; societal norms; and environmental factors). To analyze individual behaviors during a conversation, the joint modeling of participants is required due to the existing dyadic or group interdependencies. While these aspects are usually contemplated in non-computational dyadic research, context- and interlocutor-aware computational approaches are still scarce, largely due to the lack of datasets providing contextual metadata in different situations and populations. </span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Topics and Motivation: In line with these goals, we would like to bring together researchers in the field and from related disciplines to discuss advances and new challenges in dyadic and small group interactions. We want to put a spotlight on the strengths and limitations of existing approaches and define future directions for the field. In this context, we invite papers addressing topics related, but not limited, to:</span></div>
<ul>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Detection, understanding, modeling and synthesis of individual and interpersonal social signals and dynamics;</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Verbal / nonverbal communication analysis in dyadic and small groups;</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Contextual analysis in dyadic and small groups;</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Datasets, annotation protocols and bias discovering/mitigation methods in dyadic and small groups;</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Interpretability / Explainability in dyadic and small groups;</span></li>
</ul>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Workshop papers will be published in two different venues, detailed next. </span></div>
<ol type="1">
<li style="text-align:justify;font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Papers submitted following our “ICCV Workshop schedule” will use the ICCV format and will be published in the proceedings of ICCV’2021. </span></li>
</ol>
<ul>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Paper submission (ICCV): July 25th, 2021</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Author notification (ICCV): September 10th, 2021</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Camera-ready (ICCV): September 16th, 2021</span></li>
</ul>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<ol type="1" start="2">
<li style="text-align:justify;font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Papers submitted following our “PMLR Workshop schedule” will use the PMLR format and will be published in Proceedings of Machine Learning Research (PMLR).</span></li>
</ol>
<ul>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Paper submission (PMLR): October 31st, 2021</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Author notification (PMLR): November 30th, 2021</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Camera-ready (PMLR): December 20th, 2021</span></li>
</ul>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">INVITED SPEAKERS:</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Louis-Philippe Morency, Carnegie Mellon University, USA</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Alexander Todorov, Princeton University, USA</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Hatice Gunes, University of Cambridge, UK</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Daniel Gatica-Perez, IDIAP, Switzerland</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Qiang Ji, Rensselaer Polytechnic Institute, USA</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Yaser Sheikh, Carnegie Mellon University, USA</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Norah Dunbar, UC Santa Barbara, USA</span></div>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<div style="text-align:center;font-size: 14pt"><strong style="background-color:transparent;font-family:Arial;font-size: 14pt"><em>Challenge </em></strong><span style="color:#000000;font-family:Arial;font-size: 14pt">description</span></div>
<div style="text-align:center;font-size: 10pt"><a style="background-color:transparent;font-family:Arial;font-size: 10pt" href="http://chalearnlap.cvc.uab.es/challenge/45/description/" target="_blank">http://chalearnlap.cvc.uab.es/challenge/45/description/</a><span style="background-color:transparent;font-family:Arial;font-size: 10pt"> </span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">To advance and motivate research on visual human behavior analysis in dyadic and small group interactions, the challenge will use the large-scale, multimodal, and multiview (</span><a style="font-family:Arial;font-size: 10pt" href="https://openaccess.thecvf.com/content/WACV2021W/HBU/papers/Palmero_Context-Aware_Personality_Inference_in_Dyadic_Scenarios_Introducing_the_UDIVA_Dataset_WACVW_2021_paper.pdf" target="_blank">UDIVA</a><span style="color:#000000;font-family:Arial;font-size: 10pt">) dataset recently collected by our group, which poses many related challenges. It will address two different problems, divided into two competition tracks:</span></div>
<ol type="1">
<li style="text-align:justify;font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Automatic self-reported personality recognition of a single individual (i.e., a target person) during a dyadic interaction, from two individual views. </span></li>
<li style="text-align:justify;font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Behavior forecasting: this track will focus on estimating the future (up to N frames) 2D facial landmarks and hand and upper-body pose of a target individual in a dyadic interaction.</span></li>
</ol>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">In both tasks, participants are expected to exploit multiview and multimodal information (audio-visual data, transcriptions, context, and metadata) to solve the problem.</span></div>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Important Dates:</span></div>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<ul>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Dataset access request period opens: May 18th, 2021</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Start of the Challenge (development phase): June 1st, 2021</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Start of test phase: September 1st, 2021</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">End of the Challenge: September 17th, 2021</span></li>
<li style="font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Release of final results: September 30th, 2021</span></li>
</ul>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Top winning solutions will be invited to give a talk to present their work at the associated ICCV 2021 ChaLearn workshop (</span><a style="font-family:Arial;font-size: 10pt" href="http://chalearnlap.cvc.uab.es/workshop/44/description/" target="_blank">http://chalearnlap.cvc.uab.es/workshop/44/description/</a><span style="color:#000000;font-family:Arial;font-size: 10pt">).</span></div>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">ORGANIZATION and CONTACT*</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Sergio Escalera*, Computer Vision Center (CVC) and University of Barcelona, Spain <</span><a style="font-family:Arial;font-size: 10pt" href="mailto:sergio.escalera.guerrero@gmail.com" target="_blank">sergio.escalera.guerrero@gmail.com</a><span style="color:#000000;font-family:Arial;font-size: 10pt">> </span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Cristina Palmero*, Computer Vision Center (CVC) and University of Barcelona, Spain <</span><a style="font-family:Arial;font-size: 10pt" href="mailto:c.palmero.cantarino@gmail.com" target="_blank">c.palmero.cantarino@gmail.com</a><span style="color:#000000;font-family:Arial;font-size: 10pt">> </span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Wei-Wei Tu, 4Paradigm Inc., China</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Albert Clapés, Computer Vision Center (CVC), Spain</span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Julio C. S. Jacques Junior, Computer Vision Center (CVC/UAB), Spain</span></div>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Sponsors: This event is sponsored by ChaLearn, 4Paradigm Inc., and Facebook Reality Labs. The University of Barcelona, the Computer Vision Center at the Autonomous University of Barcelona, and the Human Pose Recovery and Behavior Analysis (HuPBA) group co-sponsor the Challenge.</span></div>
<div dir="auto"><span style="font-size: 10pt"><br /></span></div>
<div style="text-align:justify;font-size: 10pt"><span style="color:#000000;font-family:Arial;font-size: 10pt">Prizes: Top winning solutions will be invited to give a talk presenting their work at the associated ICCV 2021 ChaLearn workshop, and will receive a winning certificate and free ICCV registration. Our sponsors are also offering the following prizes:</span></div>
<ul>
<li style="text-align:justify;font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Track 1: Top-1 solution: $1000 / Top-2 solution: $500 / Top-3 solution: $300</span></li>
<li style="text-align:justify;font-size: 10pt"><span style="color:#000000;background-color:transparent;font-family:Arial;font-size: 10pt">Track 2: Top-1 solution: $1000 / Top-2 solution: $500 / Top-3 solution: $300</span></li>
</ul>
<div dir="auto"><span style="font-size: 10pt"><br /></span><span style="color:#000000;font-family:Arial;font-size: 10pt">Honorable mentions: based on the significance of the results for particular traits (Track 1) or body parts (Track 2) and on the novelty/originality of the solution, we may announce additional </span><em style="color:#000000;font-family:Arial;font-size: 10pt">honorable mentions</em><span style="color:#000000;font-family:Arial;font-size: 10pt"> beyond the top-3 solutions, which will also receive a winning certificate and free ICCV registration.</span></div>
</div>
</body>
</html>