Connectionists: Call for Papers: NIPS 2017 Symposium on Interpretable Machine Learning

Jason Yosinski yosinski at cs.cornell.edu
Tue Sep 19 12:32:08 EDT 2017


=============================

Call for Papers: NIPS 2017 Symposium
Interpretable Machine Learning
Website: http://interpretable.ml
Location: Long Beach, California, USA
Date: December 7, 2017

=============================

Call for Papers:

We invite researchers to submit their recent work on interpretable machine
learning from a wide range of approaches, including (1) methods that are
designed to be more interpretable from the start, such as rule-based
methods, (2) methods that produce insight into existing ML models, and (3)
perspectives either for or against interpretability in general.  Topics of
interest include:

   - Deep learning
   - Kernel, tensor, graph, or probabilistic methods
   - Automatic scientific discovery
   - Safe AI and AI Ethics
   - Causality
   - Social Science
   - Human-computer interaction
   - Quantifying or visualizing interpretability
   - Symbolic regression

Authors are welcome to submit 2-4 page extended abstracts, in the NIPS
style.  Author names do not need to be anonymized. Accepted papers will
have the option of inclusion in the proceedings.  Some papers will also be
selected for spotlight talks. Email submissions to
interpretML2017 at gmail.com.

Key Dates:

Submission Deadline: 20 Oct 2017
Acceptance Notification: 27 Oct 2017
Symposium: 7 Dec 2017

Speakers and Panelists:


   - Kilian Weinberger (Cornell)
   - Jerry Zhu (UW-Madison)
   - Viktoria Krakovna (DeepMind)
   - Bernhard Schölkopf (MPI)
   - Kiri Wagstaff (JPL)
   - Suchi Saria (JHU)
   - Jenn Vaughan (Microsoft)
   - Yann LeCun (NYU)
   - Hanna Wallach (Microsoft)

Organizers:


   - Andrew Gordon Wilson (Cornell)
   - Jason Yosinski  (Uber AI Labs)
   - Patrice Simard (Microsoft)
   - Rich Caruana (Microsoft)
   - William Herlands (CMU)

Workshop Overview:

Complex machine learning models, such as deep neural networks, have
recently achieved outstanding predictive performance in a wide range of
applications, including visual object recognition, speech perception,
language modeling, and information retrieval.  There has since been an
explosion of interest in interpreting the representations learned and
decisions made by these models, with profound implications for research
into explainable ML, causality, safe AI, social science, automatic
scientific discovery, human-computer interaction (HCI), crowdsourcing,
machine teaching, and AI ethics. This symposium is designed to broadly
engage the machine learning community on the intersection of these topics
-- tying together many threads which are deeply related but often
considered in isolation.

For example, we may build a complex model to predict levels of crime.
Predictions on their own produce insights, but by interpreting the learned
structure of the model, we can gain important new insights into the
processes driving crime, enabling us to develop more effective public
policy. Moreover, if we learn that the model is making good predictions by
discovering how the geometry of clusters of crime events affects future
activity, we can use this knowledge to design even more successful
predictive models. Similarly, if we wish to make AI systems deployed on
self-driving cars safe, straightforward black-box models will not suffice,
as we will need methods of understanding their rare but costly mistakes.

The symposium will feature invited talks and two panel discussions.  One of
the panels will have a moderated debate format where arguments are
presented on each side of key topics chosen prior to the symposium, with
the opportunity to follow-up each argument with questions. This format will
encourage an interactive, lively, and rigorous discussion, working towards
the shared goal of making intellectual progress on foundational questions.
During the symposium, we will also feature the launch of a new
Explainability in Machine Learning Challenge, involving the creation of new
benchmarks for motivating the development of interpretable learning
algorithms.




best,
jason


---------------------------
Jason Yosinski    http://yosinski.com/    +1.719.440.1357