<div dir="ltr"><div class="gmail_default" style="font-family:trebuchet ms,sans-serif;font-size:small"><span style="font-family:Arial,Helvetica,sans-serif">Dear colleagues,</span></div><div class="gmail_default" style="font-family:trebuchet ms,sans-serif;font-size:small"><span style="font-family:Arial,Helvetica,sans-serif"><br></span></div><div class="gmail_default" style="font-family:trebuchet ms,sans-serif;font-size:small"><span style="font-family:Arial,Helvetica,sans-serif">the </span><b style="font-family:Arial,Helvetica,sans-serif">Computer Vision</b><span style="font-family:Arial,Helvetica,sans-serif"> section of </span><b style="font-family:Arial,Helvetica,sans-serif">Frontiers in Computer Science</b><span style="font-family:Arial,Helvetica,sans-serif"> welcomes original contributions in all relevant areas of computer vision, from both academia and industry:</span><br></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div><div><span style="color:rgb(0,0,0)"><br></span></div><div>     <a href="https://www.frontiersin.org/journals/computer-science/sections/computer-vision" target="_blank">https://www.frontiersin.org/journals/computer-science/sections/computer-vision</a><span style="color:rgb(0,0,0)"><br></span></div><div><span style="color:rgb(0,0,0)"><br></span></div><div><span style="color:rgb(0,0,0)">Among the distinguishing features of </span><span style="color:rgb(0,0,0)">Frontiers</span><span style="color:rgb(0,0,0)">' open-access journals are fast publication time and an innovative collaborative peer-review process (see </span><a href="https://www.frontiersin.org/about/review-system?utm_source=fweb&utm_medium=fmain&utm_campaign=ba-cco-hpfea-review" target="_blank">here</a><span style="color:rgb(0,0,0)"> for details).</span></div><div><div style="color:rgb(0,0,0)"></div></div><div><br></div><div>We also particularly welcome Research Topics proposals on cutting-edge themes:<br></div><div><br></div><div>     <a href="https://www.frontiersin.org/about/research-topics" target="_blank">https://www.frontiersin.org/about/research-topics</a><br></div><div><br></div><div>Here's a list of the most recent ones:</div><div><ul><li><span class="gmail_default" style="font-family:"trebuchet ms",sans-serif"><a href="https://www.frontiersin.org/research-topics/36776/perceptual-organization-in-computer-and-biological-vision">Perceptual Organization in Computer and Biological Vision</a></span></li><li><a href="https://www.frontiersin.org/research-topics/46789/body-talks-advances-in-passive-visual-automated-body-analysis-for-biomedical-purposes">Body Talks: Advances in Passive Visual Automated Body Analysis for Biomedical Purposes</a></li><li><a href="https://www.frontiersin.org/research-topics/41756/optimization-methods-and-machine-learning-techniques-in-image-sciences">Optimization Methods and Machine Learning Techniques in Image Sciences</a></li><li><a href="https://www.frontiersin.org/research-topics/41202/trustworthy-visual-intelligence-from-theory-to-practice">Trustworthy Visual Intelligence: From Theory to Practice</a></li><li><a href="https://www.frontiersin.org/research-topics/40828/learning-deep-visual-understanding-models-with-limited-annotations">Learning Deep Visual Understanding Models with Limited Annotations</a></li><li><a 
href="https://www.frontiersin.org/research-topics/35813/advances-in-long-tail-learning">Advances in Long-Tail Learning</a></li><li><a href="https://www.frontiersin.org/research-topics/35632/making-visual-slam-useful-for-real-world-applications">Making Visual SLAM Useful for Real-World Applications</a></li><li><a href="https://www.frontiersin.org/research-topics/32305/novel-methods-for-human-face-and-body-perception-and-generation" target="_blank">Novel Methods for Human Face and Body Perception and Generation</a></li><li><a href="https://www.frontiersin.org/research-topics/31911/segmentation-and-classification-theories-algorithms-and-applications" target="_blank">Segmentation and Classification: Theories, Algorithms, and Applications</a></li><li><a href="https://www.frontiersin.org/research-topics/31801/deep-learning-with-few-labeled-training-data-in-computer-vision-and-image-analysis" target="_blank">Deep Learning with Few Labeled Training Data in Computer Vision and Image Analysis</a></li></ul></div></div><div><div><br></div><div>If you have any questions about the journal, feel free to contact me or the editorial office.<br></div><div><br></div><div>Best regards</div><div>-mp</div></div><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div></div></div></div></div></div></div></div></div></div></div></div></div></div></div><div><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div><div dir="ltr"><span style="font-family:"trebuchet ms",sans-serif"><span style="color:rgb(153,153,153)">--</span></span></div><div dir="ltr"><span style="font-family:"trebuchet ms",sans-serif"><span style="color:rgb(153,153,153)">Marcello Pelillo, <i>FIEEE, FIAPR, FAAIA</i><br>Professor of Computer Science</span></span></div><span style="font-family:"trebuchet ms",sans-serif"><span style="color:rgb(153,153,153)">Ca' Foscari University of Venice, Italy</span></span><i><br></i></div><div><span style="font-family:"trebuchet ms",sans-serif"><span style="color:rgb(153,153,153)">IEEE SMC Distinguished Lecturer</span></span></div><div><span style="font-family:"trebuchet ms",sans-serif"><span style="color:rgb(153,153,153)">Specialty Chief Editor, <i>Computer Vision - Frontiers in Computer Science<br></i></span></span></div></div></div></div></div></div></div></div></div></div></div></div></div></div>