Call For Papers
1st MUltimodal Learning and Applications Workshop (MULA 2018)
Satellite Workshop of ECCV 2018

The exploitation of big data in recent years has led to a major step forward in many Computer Vision applications. However, most of the tasks tackled so far involve mainly the visual modality, owing to the unbalanced number of labelled samples available across modalities (e.g., there are many large labelled datasets for images, but far fewer for audio- or IMU-based classification), resulting in a wide performance gap when algorithms are trained separately on each modality.

This workshop aims to bring together the machine learning and multimodal data fusion communities. We expect contributions involving video, audio, depth, IR, IMU, laser, text, drawings, synthetic data, etc. Position papers presenting feasibility studies and cross-modality issues with a strongly applied flavour are also encouraged; we therefore expect a positive response from both the academic and industrial communities.

This is an open call for papers soliciting original contributions on recent findings in the theory, methodologies, and applications of multimodal machine learning. Potential topics include, but are not limited to:

* Multimodal learning
* Cross-modal learning
* Self-supervised learning for multimodal data
* Multimodal data generation and sensors
* Unsupervised learning on multimodal data
* Cross-modal adaptation
* Multimodal data fusion
* Multimodal transfer learning
* Multimodal applications (e.g. drone vision, autonomous driving, industrial inspection, etc.)

Organizers:
Paolo Rota, Italian Institute of Technology (IIT), Italy
Vittorio Murino, Italian Institute of Technology (IIT), Italy
Michael Ying Yang, University of Twente, Netherlands
Bodo Rosenhahn, Institut für Informationsverarbeitung, Leibniz Universität Hannover, Germany

Invited Speakers:
Raquel Urtasun, Head of Uber ATG Toronto, University of Toronto, Canada
Daniel Cremers, Head of TU Munich Department of Informatics, Germany

Submission:
Papers are limited to 14 pages in the ECCV format (cf. the main conference author guidelines). All papers will be reviewed by at least two reviewers under a double-blind policy. Accepted papers will be published in the ECCV 2018 proceedings.

Important Dates:
Deadline for submission: May 15, 2018
Notification of acceptance: July 1, 2018
Workshop Date: September 8 (afternoon), 2018

More info at the MULA website: https://mula2018.github.io