All,

Please consider submitting your research to our ICML 2017 workshop. See the CFP below.

Bert

--
Bert Huang, Ph.D.
Assistant Professor
Dept. of Computer Science, Virginia Tech
http://berthuang.com


=== Call for Papers: Deep Structured Prediction ===
Workshop of the International Conference on Machine Learning (ICML) 2017
Sydney, Australia
11 August 2017
Website: https://deepstruct.github.io/ICML17
TL;DR: 4 pages, in ICML format, submit by June 2nd PT.

Deep learning has revolutionized machine learning for many domains and problems. Today, most successful applications of deep learning involve predicting single variables (as in univariate regression or multi-class classification). However, many real problems involve highly dependent, structured variables. In such scenarios, it is desirable or even necessary to model the correlations and dependencies among the multiple input and output variables. Such problems arise in a wide range of domains, including natural language processing, computer vision, and computational biology.

Some approaches to these problems use deep learning concepts directly, such as those that generate sequences with recurrent neural networks or produce image segmentations with convolutions. Others adapt concepts from structured output learning. Structured output prediction problems were traditionally handled with linear models and hand-crafted features, combined with a structured optimization step such as inference. It has recently been proposed to combine the representational power of deep neural networks with the modeling of variable dependencies in a structured prediction framework. Numerous interesting research questions related to modeling and optimization arise in this problem space.

The workshop will bring together experts in machine learning and application domains whose research focuses on combining deep learning and structured models. Specifically, it will provide an overview of existing approaches from various domains and distill from their successes principles that are more generally applicable. We will also discuss the main challenges arising in this setting and outline potential directions for future progress. The target audience consists of researchers and practitioners in machine learning and application areas.
We invite the submission of short papers, no longer than four pages including references, addressing machine learning research at the intersection of structured prediction and deep learning, including any of the following topics:
Deep learning approaches for structured-output problems
Integration of deep learning with structured-output learning
End-to-end learning of probabilistic models with non-linear potentials
Deep learning applications with dependent inputs or outputs

Papers should be formatted according to the ICML template
(http://media.nips.cc/Conferences/ICML2017/icml2017.tgz).
Only papers using the above template will be considered. Word templates will not be provided.

Papers should be submitted through EasyChair at the following address:
https://easychair.org/conferences/?conf=1stdeepstructws

Papers will be reviewed for relevance and quality. Accepted papers will be posted online. Authors of high-quality papers will be offered oral presentations at the workshop, and we will award a best-paper prize and a runner-up prize.

=== Important Dates ===
***Submission deadline: June 2, 2017
***Notification of acceptance: June 18, 2017
***Camera-ready deadline: August 1, 2017

=== Program Committee ===
David Belanger, University of Massachusetts Amherst
Matthew Blaschko, KU Leuven
Ryan Cotterell, Johns Hopkins University
Ming-Wei Chang, Microsoft Research
Raia Hadsell, Google DeepMind
Hal Daumé III, University of Maryland
Justin Domke, University of Massachusetts Amherst
Andrew McCallum, University of Massachusetts Amherst
Eliyahu Kiperwasser, Bar-Ilan University
Jason Naradowsky, University of Cambridge
Sebastian Nowozin, Microsoft Research, Cambridge, UK
Nanyun Peng, Johns Hopkins University
Amirmohammad Rooshenas, University of Oregon
Dan Roth, University of Illinois at Urbana-Champaign
Alexander Rush, Harvard University
Sameer Singh, University of California Irvine
Uri Shalit, New York University
Andreas Vlachos, University of Sheffield
Yi Yang, Georgia Institute of Technology
Scott Yih, Microsoft Research
Yangfeng Ji, University of Washington
Yisong Yue, California Institute of Technology
Shuai Zheng, eBay

=== Workshop Organizers ===
Isabelle Augenstein, University College London
Kai-Wei Chang, University of California Los Angeles
Gal Chechik, Bar-Ilan University / Google
Bert Huang, Virginia Tech
André Martins, Unbabel and Instituto de Telecomunicacoes
Ofer Meshi, Google
Alexander Schwing, University of Illinois Urbana-Champaign