<p class="MsoNormal"><span lang="EN-US">AVHRC 2020 - Active Vision and perception in Human(-Robot) Collaboration Workshop<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">@RO-MAN 2020 - THE 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">NAPLES, ITALY, FROM AUGUST 31 TO SEPTEMBER 4, 2020.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Key Dates<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">=========<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Submission opening: May 1, 2020<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Submission deadline: June 25, 2020<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Notification: July 15, 2020<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Camera ready July 30, 2020<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Workshop: August 31, 2020<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">**Workshop website: ***<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Under construction.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">**Submission website: ***<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Not available yet <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Publication<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">============<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">All accepted papers will be published on the workshop website.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Selected papers will be published in a dedicated special issue of a high quality open access journal, e.g. Frontiers in Neurorobotics.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">A best paper award will be announced, offering a full publication fee waiver.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Submission Guidelines<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">=====================<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Two types of submissions are invited to the workshop: long papers (6 to 8 pages + n references pages) and short papers (2-4 pages + n references pages). In both cases there is no page limit for the bibliography/references
(n pages) section. <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">All submissions should be formatted according to the standard IEEE RAS Formatting Instructions and Templates available at
<a href="http://ras.papercept.net/conferences/support/tex.php">http://ras.papercept.net/conferences/support/tex.php</a>. Authors are required to submit their papers electronically in PDF format.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">At least one author of each accepted paper must register for the workshop.
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">For any questions regarding paper submission, please email us:
<a href="mailto:dimitri.ognibene@gmail.com">dimitri.ognibene@gmail.com</a><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Presentation<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">==============<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Papers will be presented in short talks and/or poster spotlights.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">The organisers would like to reassure authors that, independently of any potential restriction due to the COVID-19 situation, it will be possible to present all accepted papers and to attend the keynotes, either in person
or remotely, following the same rules and the same procedure of the main conference. At what is a difficult time for many people, we look forward to sharing our work with the community despite any restrictions and we invite interested colleagues to join us.
More information can be found here: <a href="http://ro-man2020.unina.it/announcements.php">
http://ro-man2020.unina.it/announcements.php</a><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Topics<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">========<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Active perception for intention and action prediction<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Activity and action recognition in the wild<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Active perception for social interaction<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Active perception for (collaborative) navigation<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Human-robot collaboration in unstructured environments<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Human-robot collaboration in presence of sensory limits<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Joint human-robot search and exploration<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Testing setup for social perception in real or virtual environments<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Setup for transferring active perception skills from humans to robots<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Machine learning methods for active social perception<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Benchmarking and quantitative evaluation with human subject experiments<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Gaze-based factors for intuitive human-robot collaboration<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Active perception modelling for social interaction and collaboration<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Head-mounted eye tracking and gaze estimation during social interaction<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Estimation and guidance of partner situation awareness and attentional state in human-robot collaboration<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Multimodal social perception<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Adaptive social perception<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Egocentric vision in social interaction;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Explicit and implicit sensorimotor communication;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Social attention;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Natural human-robot (machine) interaction;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Collaborative exploration;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Joint attention;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Multimodal social attention;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Attentive activity recognition;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Belief and mental state attribution in robots;<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Cognitive Robotics<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Background<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">=============<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Humans naturally interact and collaborate in unstructured social environments, which produce an overwhelming amount of information and may yet hide behaviorally relevant variables. Finding the underlying design principles
that allow humans to adaptively find and select relevant information is important for Robotics but also other fields, such as Computational Neuroscience, Interaction Design, and Computer Vision.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Current solutions address specific domains, e.g. autonomous cars, and usually employ over-redundant, expensive, and computationally demanding sensory systems that attempt to cover the wide set of environmental conditions
which the systems may have to deal with. Adaptive control of the sensors and of the perception process is a key solution found by nature to cope with such problems, as shown by the foveal anatomy of the eye and its high mobility.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Alongside this interest in “active” vision, collaborative robotics has recently progressed to human-robot interaction in real manufacturing processes. Measuring and modelling task-specific gaze behaviours seems to be
essential for smooth human-robot interaction. Indeed, anticipatory control for human-in-the-loop architectures, which can enable robots to proactively collaborate with humans, relies heavily on observing the gaze and actions patterns of the human partner.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">We would like to solicit manuscripts that present novel computational and robotic models, theories and experimental results as well as reviews relevant to these topics. Submissions will further our understanding of how
humans actively control their perception during social interaction and in which conditions they fail, and how these insights may enable natural interaction between humans and artificial systems in non-trivial conditions.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Organizers<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">==================<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Main organizer:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Dimitri Ognibene, University of Essex, UK & University of Milano-Bicocca, Italy<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal">Communication Organisers:<o:p></o:p></p>
<p class="MsoNormal">Francesco Rea, Instituto Italiano di Tecnologia, Italy<o:p></o:p></p>
<p class="MsoNormal">Francesca Bianco,University of Essex, UK<o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-US">Vito Trianni, ISTC-CNR, Italy<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Ayse Kucukyilmaz, University of Nottingham, UK<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Review Organisers:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Angela Faragasso, The University of Tokyo, Japan<o:p></o:p></span></p>
<p class="MsoNormal">Manuela Chessa, University of Genova<o:p></o:p></p>
<p class="MsoNormal">Fabio Solari, University of Genova<o:p></o:p></p>
<p class="MsoNormal"><span lang="EN-US">David Rudrauf, University of Geneve, Switzerland<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Yan Wu, Robotics Department, Institute for Infocomm Research, A*STAR, Singapore<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Publication Organisers:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Fiora Pirri, Sapienza - University of Rome, Italy<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Letizia Marchegiani, Aalborg University, Denmark<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Tom Foulsham, University of Essex, UK<o:p></o:p></span></p>
<p class="MsoNormal">Giovanni Maria Farinella, University of Catania, Italy<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>