<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>*** apologies for cross-posting ***<br>
<br>
<b>FINAL CALL FOR PAPERS</b><br>
======================<br>
<br>
<b>NIPS 2017 Workshop on Transparent and Interpretable Machine
Learning in Safety Critical Environments</b> <br>
<a class="moz-txt-link-freetext" href="https://sites.google.com/view/timl-nips2017">https://sites.google.com/view/timl-nips2017</a><br>
<br>
Friday, December 8, 8:00am-6:30pm <br>
Long Beach Convention Center, Long Beach, CA, USA<br>
===========================================<br>
<br>
<b>IMPORTANT DATES</b><br>
Submission deadline: 29th of October, 2017 <br>
Acceptance notification: 10th of November, 2017<br>
Camera ready due: 26th of November, 2017<br>
NOTE: registration is limited. The main conference has already
sold out, although workshop registrations are still available.<br>
<br>
<b>SUBMISSION</b><br>
Through CMT system: see workshop site above<br>
=======================================<br>
<br>
<b>OVERVIEW</b><br>
The use of machine learning has become pervasive in our society,
from specialized scientific data analysis to industry intelligence
and practical applications with a direct impact in the public
domain. This impact involves different social issues including
privacy, ethics, liability and accountability.<br>
By way of example, European Union legislation resulted in the
General Data Protection Regulation (a trans-national law), adopted
in April 2016, which will go into effect in May 2018. It includes
an article on "Automated individual decision-making, including
profiling" that, in effect, establishes a policy on the right of
citizens to receive an explanation for algorithmic decisions that
may affect them. This could jeopardize the use of any machine
learning method that is not comprehensible and interpretable, at
least in applications that affect the individual.<br>
This situation may affect safety critical environments in
particular, and it puts model interpretability at the forefront as
a key concern for the machine learning community. In this context,
the workshop aims to discuss the use of machine learning in safety
critical environments, with special emphasis on three main
application domains:<br>
- Healthcare<br>
Decision making (diagnosis, prognosis) in life-threatening
conditions<br>
Integration of medical experts' knowledge in machine
learning-based medical decision support systems<br>
Critical care and intensive care units<br>
- Autonomous systems<br>
Mobile robots, including autonomous vehicles, in human-crowded
environments<br>
Human safety when collaborating with industrial robots<br>
Ethics in robotics and responsible robotics<br>
- Compliance and liability in data-driven industries<br>
Preventing unintended and harmful behaviour in machine learning
systems<br>
Machine learning and the right to an explanation in algorithmic
decisions<br>
Privacy and anonymity vs. interpretability in automated individual
decision making<br>
<br>
We encourage submissions of papers on machine learning
applications in safety critical domains, with a focus on
healthcare and biomedicine. Research topics of interest include,
but are not restricted to the following list:<br>
- Feature extraction/selection for more interpretable models<br>
- Reinforcement learning and safety in AI<br>
- Interpretability of neural network architectures<br>
- Learning from adversarial examples<br>
- Transparency and its impact<br>
- Trust in decision making<br>
- Integration of medical experts' knowledge in machine
learning-based medical decision support systems<br>
- Decision making in critical care and intensive care units<br>
- Human safety in machine learning systems<br>
- Ethics in robotics<br>
- Privacy and anonymity vs. interpretability in automated
individual decision making<br>
- Interactive visualisation and model interpretability<br>
<br>
<b>ORGANIZERS</b><br>
Alessandra Tosi, Mind Foundry (UK)<br>
Alfredo Vellido, Universitat Politècnica de Catalunya, UPC
BarcelonaTech (Spain)<br>
Mauricio Álvarez, University of Sheffield (UK)<br>
<br>
<b>SPEAKERS AND PANELLISTS</b><br>
DARIO AMODEI - Research Scientist, OpenAI<br>
FINALE DOSHI-VELEZ - Assistant Professor of Computer Science,
Harvard<br>
ANCA DRAGAN - Assistant Professor, UC Berkeley<br>
BARBARA HAMMER - Professor at CITEC Centre of Excellence,
Bielefeld University <br>
SUCHI SARIA - Assistant Professor, Johns Hopkins University<br>
ADRIAN WELLER - Computational and Biological Learning Lab,
University of Cambridge and Alan Turing Institute.<br>
</p>
</body>
</html>