Connectionists: CfP: Reliable Machine Learning in the Wild at ICML 2017, deadline 16 June or 17 July

Adrian Weller aw665 at cam.ac.uk
Thu Jun 8 17:12:43 EDT 2017


Call for Papers for the ICML 2017 Workshop on Reliable Machine Learning in
the Wild. Please forward to others who may be interested.

Workshop website: https://sites.google.com/site/wildml2017icml/

When can we trust that a system that has performed well in the past will
continue to do so in the future? Designing systems that are reliable in the
wild is essential for high-stakes applications such as self-driving cars and
automated surgical assistants. This workshop aims to bring together
researchers in diverse areas such as reinforcement learning, human-robot
interaction, game theory, cognitive science, and security to further the
field of reliability in machine learning. We will focus on three aspects:
robustness (to adversaries, distributional shift, model misspecification,
and corrupted data); awareness (of when a change has occurred, when the
model might be miscalibrated, etc.); and adaptation (to new situations or
objectives). We aim to consider each of these in the context of the complex
human factors that impact the successful application or meaningful
monitoring of any artificial intelligence technology. Together, these will
aid us in designing and deploying reliable machine learning systems.

We are seeking submissions that deal with the challenges of applying
machine learning techniques reliably in the real world. Some possible
questions touching on each of these categories are given below, though we
also welcome submissions that do not directly fit into these categories.

*	Robustness: How can we make a system robust to novel or potentially
adversarial inputs? What are ways of handling model misspecification or
corrupted training data? What can be done if the training data is
potentially a function of system behavior or of other agents in the
environment (e.g., when collecting data on users who respond to changes in
the system and might also behave strategically)?
*	Awareness: How do we make a system aware of its environment and of
its own limitations, so that it can recognize and signal when it is no
longer able to make reliable predictions or decisions? Can it successfully
identify "strange" inputs or situations and take appropriately conservative
actions? How can it detect when changes in the environment have occurred
that require re-training (see the toy sketch after this list)? How can it
detect that its model might be misspecified or poorly calibrated?
*	Adaptation: How can machine learning systems detect and adapt to
changes in their environment, especially large changes (e.g. low overlap
between train and test distributions, poor initial model assumptions, or
shifts in the underlying prediction function)? How should an autonomous
agent act when confronting radically new contexts?
*	Monitoring: How can we monitor large-scale systems in order to judge
if they are performing well? If things go wrong, what tools can help?
*	Value Alignment: For systems with complex desiderata, how can we
learn a value function that captures and balances all relevant
considerations? How should a system act given uncertainty about its value
function? Can we make sure that a system reflects the values of the humans
who use it?
*	Reward Hacking: How can we ensure that the objective of a system is
immune to reward hacking, in which the system attains high reward in a way
the system designer did not intend? For an example, see
https://blog.openai.com/faulty-reward-functions/
*	Human Factors: Actual humans will be interacting with, and adapting
to, these systems when they are deployed. How do properties of humans affect
the performance guarantees of the system? What if the humans are suboptimal
or even adversarial?
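
As a toy illustration of the awareness questions above (a minimal sketch of
our own, not part of the call; the choice of test, the threshold, and the
function names are illustrative assumptions), the following Python snippet
flags input features whose live distribution appears to have drifted from
the training distribution, using a per-feature two-sample
Kolmogorov-Smirnov test:

    import numpy as np
    from scipy.stats import ks_2samp

    def drifted_features(train_X, live_X, alpha=0.01):
        # Run a two-sample Kolmogorov-Smirnov test on each feature and
        # return the indices whose p-value falls below alpha.
        # (Toy version: no multiple-testing correction.)
        flagged = []
        for j in range(train_X.shape[1]):
            _, p_value = ks_2samp(train_X[:, j], live_X[:, j])
            if p_value < alpha:
                flagged.append(j)
        return flagged

    # Toy usage: feature 1 of the "live" data is shifted by 0.5.
    rng = np.random.default_rng(0)
    train_X = rng.normal(size=(1000, 3))
    live_X = rng.normal(size=(500, 3))
    live_X[:, 1] += 0.5
    print(drifted_features(train_X, live_X))  # typically prints [1]

A system that detects such drift could abstain, act conservatively, or
signal that re-training is needed.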

How to submit

Papers submitted to the workshop should be up to four pages long, excluding
references, and in ICML 2017 format
(https://2017.icml.cc/Conferences/2017/StyleAuthorInstructions). They should
be submitted via EasyChair at https://easychair.org/conferences/?conf=rmlw17.
As the review process is not blind, authors may reveal their identities in
their submissions. Accepted submissions will be presented as posters or
talks.

We will accept submissions at two deadlines: an earlier deadline with an
earlier acceptance notification, and a later one. Our goal is to allow
submissions as late as we can, while still giving some authors early
confirmation of acceptance, which they might need in order to arrange
travel in time.

Important Dates:

Submission deadline 1: 16 June 2017

Acceptance notification 1: 1 July 2017

Submission deadline 2: 17 July 2017

Acceptance notification 2: 31 July 2017

Final camera-ready versions of accepted papers: 5 August 2017

Workshop: 11 August 2017

Thank you,

Dylan, Jacob, Smitha and Adrian

----------------------------------------------

Adrian Weller