[CMU AI Seminar] Apr 6 at 12pm (Zoom) -- Elan Rosenfeld (CMU MLD) -- The Risks of Invariant Risk Minimization -- AI Seminar sponsored by Fortive

Shaojie Bai shaojieb at andrew.cmu.edu
Mon Apr 5 13:26:59 EDT 2021


Dear all,

*NOTE/UPDATE*: This seminar is tomorrow at *1pm* (due to a faculty job talk
at 12pm).

Just a reminder that the CMU AI Seminar <http://www.cs.cmu.edu/~aiseminar/> is
tomorrow, *1pm-2pm*:
https://cmu.zoom.us/j/96139997861?pwd=ZlMrUUZaWXY0Sm9mai9ZdjE4QXNDQT09.

Elan Rosenfeld (CMU MLD) will be talking about his latest work on invariant
risk minimization (IRM).

Thanks,
Shaojie

On Tue, Mar 30, 2021 at 4:14 PM Shaojie Bai <shaojieb at andrew.cmu.edu> wrote:

> Dear all,
>
> We look forward to seeing you *next Tuesday (4/6)* from *12:00-1:00 PM
> (U.S. Eastern time)* for the next talk of our *CMU AI seminar*, sponsored
> by Fortive <https://careers.fortive.com/>.
>
> To learn more about the seminar series or see the future schedule, please
> visit the seminar website <http://www.cs.cmu.edu/~aiseminar/>.
>
> On 4/6, *Elan Rosenfeld* (CMU MLD) will be giving a talk on "*The Risks
> of Invariant Risk Minimization*."
>
> *Title*: The Risks of Invariant Risk Minimization
>
> *Talk Abstract*: Invariant feature learning has become a popular
> alternative to Empirical Risk Minimization as practitioners recognize the
> need to ignore features which may be misleading at test time in order to
> improve out-of-distribution generalization. Early results in this area
> leverage variation across environments to provably identify the features
> which are directly causal with respect to the target variable. More recent
> work attempts to use this technique for deep learning, frequently with no
> formal guarantees of an algorithm's ability to uncover the correct
> features. Most notably, the seminal work introducing Invariant Risk
> Minimization gave a loose bound for the linear setting and no results at
> all for non-linearity; despite this, a large number of variations have been
> suggested. In this talk, I'll introduce a formal latent variable model
> which encodes the primary assumptions made by these works. I'll then give
> the first characterization of the optimal solution to the IRM objective,
> deriving the exact number of environments needed for the solution to
> generalize in the linear case. Finally, I'll present the first analysis of
> IRM when the observed data is a non-linear function of the latent
> variables: in particular, we show that IRM can fail catastrophically when
> the test distribution is even moderately different from the training
> distribution - this is exactly the problem that IRM was intended to solve.
> These results easily generalize to all recent variations on IRM,
> demonstrating that these works on invariant feature learning fundamentally
> do not improve over standard ERM. This talk is based on work with Pradeep
> Ravikumar and Andrej Risteski, to appear at ICLR 2021.
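>
> For readers less familiar with the objective analyzed in the talk, the IRM
> formulation from the original Arjovsky et al. paper can be sketched as the
> bilevel program below; the notation here is ours, added only for
> illustration:
>
>   \min_{\Phi,\, w} \sum_{e \in E_{tr}} R^e(w \circ \Phi)
>   \text{ subject to } w \in \arg\min_{\bar{w}} R^e(\bar{w} \circ \Phi)
>   \text{ for all } e \in E_{tr}
>
> where \Phi is a learned representation, w is a classifier on top of it, and
> R^e is the risk in training environment e. The talk's results concern when
> solutions to this program (and its practical relaxations) actually recover
> the invariant features.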
>
> *Speaker Bio*: Elan Rosenfeld is a PhD student in the Machine Learning
> Department at CMU, advised by Andrej Risteski and Pradeep Ravikumar. He is
> interested in theoretical foundations of machine learning, with a
> particular focus on robust learning, representation learning and
> out-of-distribution generalization. Elan completed his undergraduate
> degrees in Computer Science and Statistics & Machine Learning at CMU, where
> his senior thesis on human-usable password schemas was advised by Manuel
> Blum and Santosh Vempala.
>
> *Zoom Link*:
> https://cmu.zoom.us/j/96139997861?pwd=ZlMrUUZaWXY0Sm9mai9ZdjE4QXNDQT09
>
>
> Thanks,
> Shaojie Bai (MLD)
>

