<div dir="ltr"><span id="gmail-docs-internal-guid-54a2ca55-7fff-79f7-bf18-497f4d11d918"><p dir="ltr" style="line-height:1.656;margin-top:10pt;margin-bottom:0pt"><span style="font-size:15pt;font-family:Georgia;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">IROS 2021 Organized Session Call for Papers (Deadline: Mar. 05, 2021)</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="line-height:1.656;text-align:center;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(255,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Accepting both RA-L with IROS papers and IROS-only papers</span></p><p dir="ltr" style="line-height:1.656;text-align:center;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Autonomous Vehicle Vision (AVVision)</span></p><p dir="ltr" style="line-height:1.656;text-align:center;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Organized Session Website (</span><a href="https://avvision.xyz/iros21/" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://avvision.xyz/iros21/</span></a><span 
style="font-size:11pt;font-family:Georgia;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">)</span></p><p dir="ltr" style="line-height:1.656;text-align:justify;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="line-height:1.656;text-align:justify;margin-top:0pt;margin-bottom:0pt;padding:0pt 0pt 10pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">With a number of breakthroughs in autonomous system technology over the past decade, the race to commercialize self-driving cars has become fiercer than ever. The integration of advanced sensing, computer vision, signal/image processing, and machine/deep learning into autonomous vehicles enables them to perceive the environment intelligently and navigate safely. Autonomous driving systems must ensure safe, reliable, and efficient automated mobility in complex, uncontrolled real-world environments. Applications range from automated transportation and farming to public safety and environmental exploration. Visual perception is a critical component of autonomous driving. Enabling technologies include: a) affordable sensors that can acquire useful data under varying environmental conditions, b) reliable simultaneous localization and mapping, c) machine learning that can effectively handle varying real-world conditions and unforeseen events, as well as “machine-learning friendly” signal processing to enable more effective classification and decision making, and d) resilient and robust platforms that can withstand adversarial attacks and failures. 
The</span><a href="https://avvision.xyz/" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">Autonomous Vehicle Vision (AVVision) Special Session</span></a><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> at</span><a href="https://www.iros2021.org/" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)</span></a><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> will cover all these topics. 
Research papers are solicited in, but not limited to, the following topics:</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• 3D road/environment reconstruction and understanding;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Mapping and localization for autonomous cars;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Semantic/instance driving scene segmentation;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Self-supervised/unsupervised visual environment perception;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Car/pedestrian/obstacle detection/tracking and 3D localization;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span 
style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Car/license plate/road sign detection and recognition;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Driver status monitoring and human-car interfaces;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Deep/machine learning and image analysis for car perception;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Adversarial domain adaptation for autonomous driving;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• On-board embedded visual perception systems;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Bio-inspired 
vision sensing for car perception;</span></p><p dir="ltr" style="line-height:1.44;text-align:justify;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">• Real-time deep learning inference.</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt;padding:10pt 0pt"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Organizers </span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Dr. Rui Ranger Fan, UC San Diego</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Dr. Nemanja Djuric, Aurora Technologies</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Dr. 
Wenshuo Wang, McGill University </span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"> </p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Important Dates</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Full Paper Submission: 03/05/2021</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt;padding:0pt 0pt 10pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Submission ID forwarding: 03/07/2021</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:10pt"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Submission [**Important**]</span></p><ol style="margin-top:0px;margin-bottom:0px"><li dir="ltr" style="list-style-type:decimal;font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre;margin-left:11pt"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">If you are submitting an RA-L with IROS paper, please simply forward your submission ID to 
us (avvision@mias.group) before 03/07/2021. </span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre;margin-left:11pt"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">If you are submitting an IROS-only paper, please select “</span><span style="font-size:11pt;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">organized session paper</span><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">” instead of “contributed paper”, then input </span><span style="font-size:11pt;color:rgb(255,0,0);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">tg5r4</span><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> into the </span><span style="font-size:11pt;background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">code</span><span style="font-size:11pt;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> section. 
After submitting your paper, please simply forward your submission ID to us (avvision@mias.group) before 03/07/2021.</span></p></li></ol><p dir="ltr" style="line-height:1.656;margin-top:10pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:Georgia;color:rgb(80,80,80);background-color:transparent;font-weight:700;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Contact</span></p><p dir="ltr" style="line-height:1.656;margin-top:0pt;margin-bottom:0pt"><a href="mailto:avvision@mias.group" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">avvision@mias.group</span></a></p><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">More information can be found at</span><a href="https://avvision.xyz/iros21/" style="text-decoration-line:none"><span style="font-size:11pt;font-family:Georgia;color:rgb(34,34,34);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;font-family:Georgia;background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration-line:underline;vertical-align:baseline;white-space:pre-wrap">https://avvision.xyz/iros21/</span></a></span><br></div>