The ICGI Steering Committee is calling for proposals on the broad topic of "Transparency and Interpretability in Sequential Models".

====================
Submission Deadline

====================

May 15, 2018


====================

Call for Proposals

====================

 
We are requesting position papers on how sequential models should be evaluated and/or designed for transparency. Proposals should address the questions of how to produce an explanation for an individual prediction and how to evaluate the quality of such an explanation. Proposals must clearly describe the context for the proposed approach, including a description of the type of models to which the proposal applies. We welcome both proposals that address the interpretability of black-box models and proposals tailored to a particular family of models. We also welcome proposals addressing interpretability in the context of specific applications involving sequential data, including natural language processing, biology and software engineering.
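To make the request concrete, the following is a minimal, purely illustrative sketch (in Python) of one simple form an "explanation for an individual prediction" on sequential data can take: per-position relevance scores obtained by occluding one symbol at a time and measuring the change in a black-box model's score. The predict_proba interface, the mask symbol, and the toy model are assumptions made for illustration, not requirements of this call.

# Illustrative sketch only: per-position relevance scores for a
# black-box sequence classifier, computed by occluding one symbol at
# a time. `predict_proba` and the mask symbol are assumed interfaces.
from typing import Callable, List, Sequence

def occlusion_relevance(
    predict_proba: Callable[[Sequence[str]], float],
    sequence: Sequence[str],
    mask: str = "<unk>",
) -> List[float]:
    """Relevance of position i = drop in the model's score when the
    symbol at position i is replaced by the mask symbol."""
    base = predict_proba(sequence)
    relevance = []
    for i in range(len(sequence)):
        occluded = list(sequence)
        occluded[i] = mask
        relevance.append(base - predict_proba(occluded))
    return relevance

# Toy "model": score = fraction of 'a' symbols in the sequence.
toy_model = lambda s: list(s).count("a") / max(len(s), 1)
print(occlusion_relevance(toy_model, list("abca")))
# -> [0.25, 0.0, 0.0, 0.25]: only the 'a' positions carry relevance.

Note that such per-position scores ignore interactions between positions, which is precisely the difficulty raised in the Context section below: memory and recurrence make the contribution of a symbol depend on the rest of the sequence.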

====================
Context

====================

The widespread adoption of ML and AI technologies raises ethical, technical and regulatory issues around fairness, transparency and accountability. Tackling these issues will require a community-wide effort, ranging from the development of new mathematical and algorithmic tools to the understanding of the regulatory and ethical aspects of each of these concerns by academic and industry researchers.

A particular topic of growing interest is the capacity to hold data-driven algorithms accountable for their decisions. For example, the EU's upcoming GDPR requires companies to be fair and transparent about their use of personal data [4]. This has spurred the research community [3] not only to exhibit examples of unfair treatment by existing algorithms [1], but also to develop sound measures for evaluating whether an algorithm is fair [2,5] and techniques for embedding fairness as a constraint in machine learning algorithms.

Recently proposed methods for producing explanations of decisions made by machine learning models focus on models for fixed-size data and are in general not applicable to models of sequential data. Interpreting sequential models is inherently harder because of the non-locality introduced by the memory and recurrence properties of such models.

====================
Practical Details
====================

Proposals (max. 6 pages plus references, in JMLR format) should be submitted to the "Transparency and Interpretability" track of ICGI (https://easychair.org/conferences/?conf=icgi2018) before May 15, 2018. Accepted proposals will be presented at a special session during ICGI 2018 (http://icgi2018.pwr.edu.pl/, Wroclaw, Poland, September 5-7).

The ICGI Steering Committee intends the special session to spur the development of a future competition around interpretable sequence models. The committee will invite selected authors of papers presented during the special session to organize such a competition, for which sponsorship is currently being discussed.

====================
Programme Committee
====================

Borja Balle Pigem - Amazon Research
Leonor Becerra-Bonache - Jean Monnet University
François Coste - INRIA Rennes
Rémi Eyraud - LIF Marseille
Matthias Gallé - Naver Labs Europe
Jeffrey Heinz - Stony Brook University
Olgierd Unold - Wroclaw University of Technology
Menno van Zaanen - Tilburg University
Sicco Verwer - Delft University of Technology
Ryo Yoshinaka - Kyoto University

====================
References
====================

[1] For examples, see https://fairmlclass.github.io/

[2] Friedler, Sorelle A., Carlos Scheidegger, and Suresh Venkatasubramanian. "On the (Im)possibility of Fairness." arXiv preprint arXiv:1609.07236 (2016).

[3] Workshops (https://www.fatml.org, http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/), as well as a long list of smaller events and discussions at ML conferences (https://www.oii.ox.ac.uk/blog/workshops-on-artificial-intelligence-ethics-and-the-law-what-challenges-what-opportunities/, https://nips.cc/Conferences/2017/Schedule?showEvent=8734, https://nips.cc/Conferences/2017/Schedule?showEvent=8744).

[4] Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation." International Data Privacy Law 7, no. 2 (2017): 76-99.

[5] Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. "Inherent Trade-Offs in the Fair Determination of Risk Scores." arXiv preprint arXiv:1609.05807 (2016).