Fwd: ML/Duolingo Seminar - Michael Oberst

Artur Dubrawski awd at cs.cmu.edu
Wed Oct 26 13:17:14 EDT 2022


This talk could be of interest to all of us who work in the ML4HC space
and who are curious about causality in the context of training predictive
models.


---------- Forwarded message ---------
From: Sharon Cavlovich <sharonw at cs.cmu.edu>
Date: Wed, Oct 26, 2022 at 12:37 PM
Subject: ML/Duolingo Seminar - Michael Oberst
To: <ml-seminar at cs.cmu.edu>


Please join us for an ML/Duolingo Seminar!

Tuesday, Nov. 1, 2022
NSH 4305
10:30am

Michael Oberst, PhD Candidate, MIT

Title: What is the role of causality in reliable prediction?

Abstract: How should we incorporate causal knowledge into the development
of predictive models in high-risk domains like healthcare? Rather than
attempting to learn "causal" models, I present an alternative viewpoint:
partial causal knowledge can be used to anticipate how model performance
will change in novel (but plausible) scenarios, and can serve as a guide
for developing reliable models.

First, I will discuss my work on learning linear predictors that are
worst-case optimal under a set of user-specified interventions on
unobserved variables (e.g., moving from a hospital with high-income
patients to one with lower-income patients). This work assumes the
existence of noisy proxies for those background variables at training time,
and an underlying linear causal model over all variables. A key insight is
that the optimal predictor is not necessarily a "causal" predictor, but
depends on the scale (and direction) of plausible interventions.
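
A minimal numerical sketch of this idea (an illustrative toy, not the
estimator from the referenced papers): in a small linear SCM with an
unobserved variable A, grid-search the coefficient of a linear predictor
to minimize its worst-case MSE over mean shifts of A up to a user-chosen
budget. The SCM, coefficients, and grid below are assumptions made purely
for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative toy linear SCM:
    #   A ~ N(mu, 1)                 unobserved background variable (e.g., income)
    #   X = A + N(0, 1)              observed feature
    #   Y = 0.5 * X + A + N(0, 0.5)  outcome; the causal coefficient of X is 0.5
    BETA_CAUSAL = 0.5

    def sample(n, a_shift=0.0):
        """Draw data, optionally shifting the mean of the unobserved variable A."""
        A = rng.normal(a_shift, 1.0, n)
        X = A + rng.normal(0.0, 1.0, n)
        Y = BETA_CAUSAL * X + A + rng.normal(0.0, 0.5, n)
        return X, Y

    # Ordinary least squares on unshifted data (close to 1.0 here, i.e. larger
    # than the causal coefficient, because A confounds X and Y).
    X_tr, Y_tr = sample(200_000)
    beta_ols = np.cov(X_tr, Y_tr)[0, 1] / np.var(X_tr)

    # For each budget, pick the coefficient of Y_hat = beta * X that minimizes
    # the worst-case MSE over mean shifts of A with |shift| <= budget. The MSE
    # is quadratic in the shift, so checking the interval endpoints suffices.
    betas = np.linspace(0.0, 2.0, 201)
    for budget in (0.0, 1.0, 3.0):
        datasets = [sample(200_000, a_shift=s) for s in (-budget, budget)]

        def wc_mse(b, data=datasets):
            return max(np.mean((Y - b * X) ** 2) for X, Y in data)

        best = min(betas, key=wc_mse)
        print(f"budget={budget}: minimax beta={best:.2f} "
              f"(OLS={beta_ols:.2f}, causal={BETA_CAUSAL})")

In this toy, the minimax coefficient equals the OLS value for a zero
budget and moves away from it as the budget grows, without coinciding
with the causal coefficient, which is the point made above.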

Second, I will demonstrate how similar ideas can be extended to more
general settings, including computer vision. Here, I will discuss work on
evaluating the worst-case performance of predictive models under a set of
user-specified, causally interpretable changes in distribution (e.g., a
change in X-ray scanning policies). In contrast to work that considers a
worst-case over subpopulations or distributions in an f-divergence ball, we
consider parametric shifts in the distribution of a subset of variables.
This allows us to further constrain the space of plausible shifts, and in
some cases directly interpret the worst-case shift to build intuition for
model vulnerabilities.
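
A similarly rough sketch of auditing a fixed model under a parametric
shift (again an illustrative toy, not the method from the referenced
papers): shift the mean of a single variable Z by a parameter delta
within a user-specified range, estimate the shifted risk on held-out
data by importance weighting (exponential tilting for a Gaussian mean
shift), and report the worst case along with the delta that attains it.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative toy data: Z is the variable whose distribution may shift
    # (e.g., a scanning-policy-related variable), X is an observed feature.
    n = 200_000
    Z = rng.normal(0.0, 1.0, n)
    X = Z + rng.normal(0.0, 1.0, n)
    Y = (X + 0.5 * Z + rng.normal(0.0, 1.0, n) > 0).astype(float)

    # A fixed model being audited (a simple threshold on X, standing in for
    # any already-trained predictor) and its per-example 0/1 loss.
    pred = (X > 0).astype(float)
    loss = (pred != Y).astype(float)

    def shifted_risk(delta):
        """Estimated risk if the mean of Z shifts by delta, via importance
        weights equal to the density ratio N(delta, 1) / N(0, 1) at Z."""
        w = np.exp(delta * Z - 0.5 * delta ** 2)
        return np.average(loss, weights=w)

    deltas = np.linspace(-2.0, 2.0, 41)   # user-specified range of shifts
    risks = [shifted_risk(d) for d in deltas]
    worst = int(np.argmax(risks))
    print(f"risk at delta=0: {shifted_risk(0.0):.3f}")
    print(f"worst-case risk: {risks[worst]:.3f} at delta = {deltas[worst]:+.1f}")

Because the shift is parametrized by a single interpretable delta, the
worst-case shift itself can be inspected (here, the sign and size of the
mean shift that hurts the model most).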

This talk is based on joint work with Nikolaj Thams, David Sontag, and
Jonas Peters.
(https://arxiv.org/abs/2103.02477, https://arxiv.org/abs/2205.15947)

Bio: Michael Oberst is a PhD Candidate in EECS at MIT, advised by David
Sontag. His research lies at the intersection of causality, machine
learning, and healthcare, with an emphasis on improving the reliability of
both causal inference and prediction models. His work has been published at
a range of machine learning venues (NeurIPS / ICML / AISTATS / KDD),
including work with clinical collaborators from NYU Langone, Beth Israel
Deaconess Medical Center, and Mass General Brigham. He has also worked on
clinical applications of machine learning, including work on learning
effective antibiotic treatment policies (published in Science Translational
Medicine). He earned his undergraduate degree in Statistics at Harvard.

--
Sharon Cavlovich
Senior Department Administrative Assistant | Machine Learning Department

Carnegie Mellon University
5000 Forbes Avenue | Gates Hillman Complex 8215
Pittsburgh, PA 15213
412.268.5196 (office) | 412.268.3431 (fax)

