<div dir="ltr"><div style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><div><div dir="ltr"><div><span style="white-space:pre-wrap">Dear all,</span></div><div><span style="white-space:pre-wrap"><br></span></div><div><span style="white-space:pre-wrap">We’re very excited to host <b>Tamara Broderick (MIT)</b> for the first installment of the newly created<b> “I Can’t Believe It’s Not Better!” (ICBINB) virtual seminar series</b>. More details about this series are below.</span></div><div><span style="white-space:pre-wrap"><br></span></div><div><span style="white-space:pre-wrap">The <b>"I Can't Believe It's Not Better!" (ICBINB) monthly online seminar series</b> seeks to shine a light on the "stuck" phase of research. Speakers will tell us about their most beautiful ideas that didn't "work", about when theory didn't match practice, or perhaps just when the going got tough. These talks will let us peek inside the file drawer of unexpected results and peer behind the curtain to see the real story of <i>how real researchers did real research</i>.</span></div><div><span style="white-space:pre-wrap"><b><br></b></span></div><div><span style="white-space:pre-wrap"><b>When: </b>Thursday, February 3rd at 10:00AM (EST).</span></div><div><span style="white-space:pre-wrap">
Where: RSVP for the Zoom link here: https://us02web.zoom.us/meeting/register/tZ0qf-yrqzkqGNTtEu-VQ8l8ECqi2yW8hGu2

Title: An Automatic Finite-Sample Robustness Metric: Can Dropping a Little Data Change Conclusions?

Abstract: Imagine you've got a bold new idea for ending poverty. To check your intervention, you run a gold-standard randomized controlled trial; that is, you randomly assign individuals in the trial to either receive your intervention or to not receive it. You recruit tens of thousands of participants. You run an entirely standard and well-vetted statistical analysis; you conclude that your intervention works with a p-value < 0.01. You publish your paper in a top venue, and your research makes it into the news! Excited to make the world a better place, you apply your intervention to a new set of people and... it fails to reduce poverty. How can this possibly happen? There seems to be some important disconnect between theory and practice, but what is it? And is there any way you could have been tipped off about the issue when running your original data analysis? In the present work, we observe that if a very small percentage of the original data was instrumental in determining the original conclusion, we might worry that the conclusion could be unstable under new conditions. So we propose a method to assess the sensitivity of data analyses to the removal of a very small fraction of the data set. Analyzing all possible data subsets of a certain size is computationally prohibitive, so we provide an approximation. We call our resulting method the Approximate Maximum Influence Perturbation. Empirics demonstrate that while some (real-life) applications are robust, in others the sign of a treatment effect can be changed by dropping less than 0.1% of the data --- even in simple models and even when p-values are small.
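(For the programming-minded: here is a rough, illustrative Python sketch of the kind of check the abstract describes. It is our toy example, not the speaker's implementation; see the paper for the real method. It assumes a plain OLS treatment-effect regression, uses a standard first-order influence approximation to guess which small fraction of points to drop, and then refits to verify. The function name amip_style_check and all parameters are hypothetical.)

    import numpy as np

    def amip_style_check(X, y, coef_idx, alpha=0.001):
        """Toy AMIP-style check for OLS.
        X: (n, p) design matrix whose column coef_idx is the treatment.
        Drops the ~ceil(alpha * n) points whose removal (to first order)
        most pushes the fitted coefficient toward the opposite sign,
        then refits to see whether the conclusion actually flips."""
        n = X.shape[0]
        k = max(1, int(np.ceil(alpha * n)))
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ (X.T @ y)
        resid = y - X @ beta
        # First-order effect of dropping point i on beta[coef_idx]:
        # approximately -((X'X)^{-1} x_i)[coef_idx] * residual_i.
        influence = -(X @ XtX_inv[:, coef_idx]) * resid
        # Pick the k points whose removal moves the estimate most
        # strongly against its current sign.
        drop = np.argsort(-np.sign(beta[coef_idx]) * influence)[-k:]
        keep = np.setdiff1d(np.arange(n), drop)
        beta_refit = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        return beta[coef_idx], beta_refit[coef_idx]

    # Hypothetical usage on simulated RCT-style data with heavy tails:
    rng = np.random.default_rng(0)
    n = 20000
    treat = rng.integers(0, 2, n).astype(float)
    X = np.column_stack([np.ones(n), treat])
    y = 0.1 * treat + rng.standard_t(df=2, size=n)
    orig, refit = amip_style_check(X, y, coef_idx=1, alpha=0.001)

On data like this, dropping only 0.1% of points can flip the sign of the estimate, which is exactly the phenomenon the talk explores.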
Bio: Tamara Broderick is an Associate Professor in the Department of Electrical Engineering and Computer Science at MIT. She is a member of the MIT Laboratory for Information and Decision Systems (LIDS), the MIT Statistics and Data Science Center, and the Institute for Data, Systems, and Society (IDSS). She completed her Ph.D. in Statistics at the University of California, Berkeley in 2014. Previously, she received an AB in Mathematics from Princeton University (2007), a Master of Advanced Study for completion of Part III of the Mathematical Tripos from the University of Cambridge (2008), an MPhil by research in Physics from the University of Cambridge (2009), and an MS in Computer Science from the University of California, Berkeley (2013). Her recent research has focused on developing and analyzing models for scalable Bayesian machine learning. She has been awarded selection to the COPSS Leadership Academy (2021), an Early Career Grant (ECG) from the Office of Naval Research (2020), an AISTATS Notable Paper Award (2019), an NSF CAREER Award (2018), a Sloan Research Fellowship (2018), an Army Research Office Young Investigator Program (YIP) award (2017), Google Faculty Research Awards, an Amazon Research Award, the ISBA Lifetime Members Junior Researcher Award, the Savage Award (for an outstanding doctoral dissertation in Bayesian theory and methods), the Evelyn Fix Memorial Medal and Citation (for the Ph.D. student on the Berkeley campus showing the greatest promise in statistical research), the Berkeley Fellowship, an NSF Graduate Research Fellowship, a Marshall Scholarship, and the Phi Beta Kappa Prize (for the graduating Princeton senior with the highest academic average).

--

More info: This series is organized by the community that grew out of the ICBINB workshops @ NeurIPS. Our goal as a community is to center unexpected results, push back against "leaderboard-ism", and promote "slow science" in machine learning research. This seminar series will be the first of a number of community-building and collaborative initiatives we plan to organize.
For more information and for ways to get involved, please visit us at http://icbinb.cc/, tweet at us @ICBINBWorkshop, or email us at cant.believe.it.is.not.better@gmail.com.

--
Best wishes,
The ICBINB Organizers