Connectionists: CfP: NIPS workshop on Reliable Machine Learning

Jacob Steinhardt jacob.steinhardt at gmail.com
Tue Sep 13 23:22:56 EDT 2016


*Call for Papers*
*Reliable Machine Learning in the Wild*
Date: December 9th, 2016
Location: Barcelona, Spain (part of the NIPS 2016 workshops)
Submission Deadline: November 1st, 2016
Website: https://sites.google.com/site/wildml2016nips/

How can we build systems that will perform well in the presence of novel,
even adversarial, inputs? What techniques will let us safely build and
deploy autonomous systems on a scale where human monitoring becomes
difficult or infeasible? Answering these questions is critical to
guaranteeing the safety of emerging high-stakes applications of AI, such as
self-driving cars and automated surgical assistants.

This workshop will bring together researchers in areas such as human-robot
interaction, security, causal inference, and multi-agent systems in order
to strengthen the field of reliability engineering for machine learning
systems. We are interested in approaches that have the potential to provide
assurances of reliability, especially as systems scale in autonomy and
complexity.

We will focus on five aspects -- robustness, awareness, adaptation, value
learning, and monitoring -- that can aid us in designing and deploying
reliable machine learning systems. Some possible questions touching on each
of these categories are given below, though we also welcome submissions
that do not directly fit into these categories.

*Robustness:*  How can we make a system robust to novel or potentially
adversarial inputs? What are ways of handling model mis-specification or
corrupted training data? What can be done if the training data is
potentially a function of system behavior or of other agents in the
environment (e.g. when collecting data on users that respond to changes in
the system and might also behave strategically)?

*Awareness:* How do we make a system aware of its environment and of its
own limitations, so that it can recognize and signal when it is no longer
able to make reliable predictions or decisions? Can it successfully
identify “strange” inputs or situations and take appropriately conservative
actions? How can it detect when changes in the environment have occurred
that require re-training? How can it detect that its model might be
mis-specified or poorly-calibrated?

*Adaptation:* How can machine learning systems detect and adapt to changes
in their environment, especially large changes (e.g. low overlap between
train and test distributions, poor initial model assumptions, or shifts in
the underlying prediction function)? How should an autonomous agent act
when confronting radically new contexts?

*Value Learning:* For systems with complex desiderata, how can we learn a
value function that captures and balances all relevant considerations? How
should a system act given uncertainty about its value function? Can we make
sure that a system reflects the values of the humans who use it?

*Monitoring:* How can we monitor large-scale systems in order to judge if
they are performing well? If things go wrong, what tools can help?

Organizers: Jacob Steinhardt (Stanford), Dylan Hadfield-Menell (Berkeley),
Adrian Weller (Cambridge), David Duvenaud (Toronto), Percy Liang (Stanford)

Sponsors: We gratefully acknowledge support from the Open Philanthropy
Project, the Centre for the Study of Existential Risk, and the Leverhulme
Centre for the Future of Intelligence.