<div dir="ltr"><b>Call for Papers<br>Reliable Machine Learning in the Wild<br></b>Date: December 9th, 2016<br>Location: Barcelona, Spain (part of the NIPS 2016 workshops)<br>Submission Deadline: November 1st, 2016<br>Website: <a target="_blank" href="https://sites.google.com/site/wildml2016nips/">https://sites.google.com/site/<wbr>wildml2016nips/</a><br><br>How
can we build systems that will perform well in the presence of novel,
even adversarial, inputs? What techniques will let us safely build and
deploy autonomous systems on a scale where human monitoring becomes
difficult or infeasible? Answering these questions is critical to
guaranteeing the safety of emerging high-stakes applications of AI, such
as self-driving cars and automated surgical assistants.<br><br>This
workshop will bring together researchers in areas such as human-robot
interaction, security, causal inference, and multi-agent systems in
order to strengthen the field of reliability engineering for machine
learning systems. We are interested in approaches that have the
potential to provide assurances of reliability, especially as systems
scale in autonomy and complexity.<br><br>We will focus on five aspects (robustness, awareness, adaptation, value learning, and monitoring)
that can aid us in designing and deploying reliable machine learning
systems. Some possible questions touching on each of these categories
are given below, though we also welcome submissions that do not directly
fit into these categories.<br><br><b>Robustness:</b> How can we make a
system robust to novel or potentially adversarial inputs? What are ways
of handling model mis-specification or corrupted training data? What
can be done if the training data is potentially a function of system
behavior or of other agents in the environment (e.g. when collecting
data on users who respond to changes in the system and might also
behave strategically)?<br><br><b>Awareness:</b> How do we make a system
aware of its environment and of its own limitations, so that it can
recognize and signal when it is no longer able to make reliable
predictions or decisions? Can it successfully identify “strange” inputs
or situations and take appropriately conservative actions? How can it
detect when changes in the environment have occurred that require
re-training? How can it detect that its model might be mis-specified or
poorly-calibrated?<br><br><b>Adaptation:</b> How can machine learning
systems detect and adapt to changes in their environment, especially
large changes (e.g. low overlap between train and test distributions,
poor initial model assumptions, or shifts in the underlying prediction
function)? How should an autonomous agent act when confronting radically
new contexts?<br><br><b>Value Learning:</b> For systems with complex
desiderata, how can we learn a value function that captures and balances
all relevant considerations? How should a system act given uncertainty
about its value function? Can we make sure that a system reflects the
values of the humans who use it?<br><br><b>Monitoring:</b> How can we monitor large-scale systems in order to judge whether they are performing well? If things go wrong, what tools can help us diagnose and address the problem?<br><br>Organizers:
Jacob Steinhardt (Stanford), Dylan Hadfield-Menell (Berkeley), Adrian
Weller (Cambridge), David Duvenaud (Toronto), Percy Liang (Stanford)<br><br>Sponsors:
We gratefully acknowledge support from the Open Philanthropy Project,
the Centre for the Study of Existential Risk, and the Leverhulme Centre
for the Future of Intelligence.</div>