<div dir="ltr"><span id="docs-internal-guid-49d59615-7643-8c1b-ca31-eb6c4805c692"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">CALL FOR PAPERS</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">=========================================================================</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Co-located with the Data Transparency Lab 2016</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">November 18, New York, NY</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent"><a href="http://fatml.org/">http://fatml.org/</a></span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Submission Deadline: September 9, 2016</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">=========================================================================</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">OVERVIEW</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">--------</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">This workshop aims to bring together a growing community of researchers and practitioners concerned with fairness, accountability, and transparency in machine learning. The past few years have seen growing recognition that machine learning raises novel challenges for ensuring non-discrimination, due process, and understandability in decision-making. 
In particular, policymakers, regulators, and advocates have expressed fears about the potentially discriminatory impact of machine learning, with many calling for further technical research into the dangers of inadvertently encoding bias into automated decisions. At the same time, there is increasing alarm that the complexity of machine learning may reduce the justification for consequential decisions to “the algorithm made me do it.” The goal of this workshop is to provide researchers with a venue to explore how to characterize and address these issues with computationally rigorous methods.  </span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">This year, the workshop is co-located with two other highly related events: the Data Transparency Lab (DTL) Conference and the Workshop on Data and Algorithmic Transparency (DAT). We anticipate that our workshop will consist of a mix of invited talks, invited panels, and contributed talks. We welcome paper submissions that address any issue of fairness, accountability, and transparency related to machine learning, especially those that provide a bridge to empirical studies of the behavior of data-driven systems in the wild, the focus of the DTL and DAT events.</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">TOPICS OF INTEREST</span></p><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">------------------</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Fairness:</span></p><ol style="margin-top:0pt;margin-bottom:0pt"><li dir="ltr" style="list-style-type:decimal;font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;background-color:transparent"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Can we develop new computational techniques for discrimination-aware data mining? How should we handle, for example, bias in training data sets?</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;background-color:transparent"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent">How should we formalize fairness? What does it mean for an algorithm to be fair?</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;background-color:transparent"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Should we look only to the law for definitions of fairness? Are legal definitions sufficient? 
Can legal definitions even be translated to practical algorithmic contexts?</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;background-color:transparent"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Can we develop definitions of discrimination and disparate impact that move beyond the </span><span style="font-size:14px;color:rgb(37,37,37);vertical-align:baseline;white-space:pre-wrap">Equal Employment Opportunity Commission’s</span><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent"> 80% rule?</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;background-color:transparent"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Who decides what counts as fair when fairness becomes a machine learning objective?</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;background-color:transparent"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Are there any dangers in turning questions of fairness into computational problems?</span></p></li></ol><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Accountability:</span></p><ol style="margin-top:0pt;margin-bottom:0pt" start="7"><li dir="ltr" style="list-style-type:decimal;font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;background-color:transparent"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent">What would human review entail if models were available for direct inspection?</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;background-color:transparent"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Are there practical methods to test existing algorithms for compliance with a policy?</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;background-color:transparent"><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:14.6667px;vertical-align:baseline;white-space:pre-wrap;background-color:transparent">Can we prove that an algorithm behaves in some way without having to reveal the algorithm? 
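For concreteness, the following is a minimal, hypothetical Python sketch of the four-fifths (80%) rule referenced in question 4; the function names, variable names, and example numbers are illustrative assumptions, not part of this call.

    # Hypothetical illustration of the EEOC "four-fifths" (80%) rule.
    # All names and numbers below are invented for this sketch.

    def selection_rate(selected: int, total: int) -> float:
        """Fraction of a group's applicants who received the favorable outcome."""
        return selected / total

    def passes_four_fifths_rule(protected_rate: float, reference_rate: float) -> bool:
        """Rule of thumb: the protected group's selection rate should be at
        least 80% of the most-favored (reference) group's selection rate."""
        return protected_rate / reference_rate >= 0.8

    # Toy data: 30 of 100 protected-group applicants selected,
    # versus 50 of 100 reference-group applicants.
    protected = selection_rate(30, 100)   # 0.30
    reference = selection_rate(50, 100)   # 0.50

    print(f"selection-rate ratio: {protected / reference:.2f}")               # 0.60
    print("passes 80% rule:", passes_four_fifths_rule(protected, reference))  # False

Actual disparate-impact analysis involves more than this single ratio (e.g., statistical significance and business-necessity defenses), which is part of why question 4 asks for definitions that move beyond it.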
Accountability:

7. What would human review entail if models were available for direct inspection?
8. Are there practical methods to test existing algorithms for compliance with a policy?
9. Can we prove that an algorithm behaves in some way without having to reveal the algorithm? Can we achieve accountability without transparency?
10. How can we conduct reliable empirical black-box testing and/or reverse engineer algorithms to test for ethically salient differential treatment?
11. What are the societal implications of autonomous experimentation? How can we manage the risks that such experimentation might pose to users?

Transparency:

12. How can we develop interpretable machine learning methods that provide ways to manage the complexity of a model and/or generate meaningful explanations?
13. Can we use adversarial conditions to learn about the inner workings of inscrutable algorithms? Can we learn from the ways they fail on edge cases?
14. How can we use game theory and machine learning to build fully transparent but robust models, using signals that people would face severe costs in trying to manipulate?
PAPER SUBMISSION
----------------

Papers are limited to 4 content pages, including figures and tables, and should use a standard 2-column 11pt format; however, an additional fifth page containing only cited references is permitted. Papers must be anonymized for double-blind reviewing. Accepted papers will be made available on the workshop website and should also be posted by the authors to arXiv; however, the workshop's proceedings are non-archival, meaning that contributors remain free to publish their work in archival journals or conferences. Accepted papers will be presented as either a talk or a poster (to be determined by the workshop organizers).

Papers should be submitted here:
https://easychair.org/conferences/?conf=fatml2016

Complete Paper Submissions Due: September 9, 2016, 11:59:59PM EDT
Notification to Authors: October 7, 2016
Camera-Ready Papers Due: October 28, 2016

RELATED CALL
------------

Authors of especially well-developed papers should also consider submitting to a special issue of Big Data on “Social and Technical Trade-Offs,” which is being guest edited by several of the workshop organizers:
http://www.liebertpub.com/lpages/big-data-cfp-social-and-technical-trade-offs/155/

Complete Manuscript Submissions Due: September 15, 2016

ORGANIZATION
------------

Workshop Organizers:

Solon Barocas, Microsoft Research NYC
Sorelle Friedler, Haverford College
Moritz Hardt, Google
Joshua Kroll, CloudFlare
Suresh Venkatasubramanian, University of Utah
Hanna Wallach, Microsoft Research NYC

Program Committee:

Sorelle Friedler, Haverford College, Co-Chair
Suresh Venkatasubramanian, University of Utah, Co-Chair

Anupam Datta, Carnegie Mellon University
Hal Daumé, University of Maryland, College Park
Fernando Diaz, Microsoft Research NYC
Krishna Gummadi, MPI-SWS
Sara Hajian, Eurecat, Technology Center of Catalonia
Kristian Lum, Human Rights Data Analysis Group
David Robinson, Upturn
Salvatore Ruggieri, Università di Pisa
Julia Stoyanovich, Drexel University
Christo Wilson, Northeastern University

*** more members may be added ***
style="font-size:14.6667px;font-family:Arial;color:rgb(0,0,0);vertical-align:baseline;white-space:pre-wrap;background-color:transparent"><br></span></div></span></div>