Connectionists: ICBINB Monthly Seminar Series Kick Off! Tamara Broderick: Feb 3rd 10am EST

Francisco J. Rodríguez Ruiz franrruiz87 at gmail.com
Fri Jan 28 03:52:51 EST 2022


Dear all,

We’re very excited to host *Tamara Broderick (MIT)* for the first
installment of the newly created *“I Can’t Believe It’s Not Better!”
(ICBINB) virtual seminar series*. More details about this series are below.

The *"I Can't Believe It's Not Better!" (ICBINB) monthly online seminar
series* seeks to shine a light on the "stuck" phase of research. Speakers
will tell us about their most beautiful ideas that didn't "work", about
when theory didn't match practice, or perhaps just when the going got
tough. These talks will let us peek inside the file drawer of unexpected
results and peer behind the curtain to see the real story of *how real
researchers did real research*.

*When: *Thursday, February 3rd at 10:00AM (EST).
*Where: *RSVP for the Zoom link here:
https://us02web.zoom.us/meeting/register/tZ0qf-yrqzkqGNTtEu-VQ8l8ECqi2yW8hGu2

*Title:* *An Automatic Finite-Sample Robustness Metric: Can Dropping a
Little Data Change Conclusions?*

*Abstract:* *Imagine you've got a bold new idea for ending poverty. To
check your intervention, you run a gold-standard randomized controlled
trial; that is, you randomly assign individuals in the trial to either
receive your intervention or to not receive it. You recruit tens of
thousands of participants. You run an entirely standard and well-vetted
statistical analysis; you conclude that your intervention works with a
p-value < 0.01. You publish your paper in a top venue, and your research
makes it into the news! Excited to make the world a better place, you apply
your intervention to a new set of people and... it fails to reduce poverty.
How can this possibly happen? There seems to be some important disconnect
between theory and practice, but what is it? And is there any way you could
have been tipped off about the issue when running your original data
analysis? In the present work, we observe that if a very small percentage
of the original data was instrumental in determining the original
conclusion, we might worry that the conclusion could be unstable under new
conditions. So we propose a method to assess the sensitivity of data
analyses to the removal of a very small fraction of the data set. Analyzing
all possible data subsets of a certain size is computationally prohibitive,
so we provide an approximation. We call our resulting method the
Approximate Maximum Influence Perturbation. Empirics demonstrate that while
some (real-life) applications are robust, in others the sign of a treatment
effect can be changed by dropping less than 0.1% of the data --- even in
simple models and even when p-values are small.*
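
For readers who would like a feel for the idea before the talk, below is a
minimal sketch of a first-order, influence-function-style approximation to
this kind of robustness check for a single OLS coefficient. It is only an
illustration in the spirit of the Approximate Maximum Influence Perturbation,
not the speaker's implementation; the function name and the greedy
sign-flip search are our own illustrative assumptions.

    import numpy as np

    def approx_fraction_to_flip_sign(X, y, coef_index):
        """Illustrative sketch (not the authors' code): first-order estimate
        of the smallest fraction of observations whose removal flips the
        sign of one OLS coefficient."""
        n = X.shape[0]
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ (X.T @ y)            # OLS fit on the full data
        resid = y - X @ beta
        # First-order effect of dropping observation i on beta[coef_index]:
        # approximately -[(X^T X)^{-1} x_i]_j * residual_i.
        drop_effect = -(X @ XtX_inv[:, coef_index]) * resid
        # Greedily drop the points that push the coefficient hardest
        # toward the opposite sign.
        flip_dir = -np.sign(beta[coef_index])
        order = np.argsort(-flip_dir * drop_effect)
        predicted = beta[coef_index] + np.cumsum(drop_effect[order])
        flipped = np.nonzero(np.sign(predicted) == flip_dir)[0]
        if flipped.size == 0:
            return None   # approximation finds no sign-flipping subset
        return (flipped[0] + 1) / n           # fraction of data to drop

Dropping the handful of highest-influence points identified this way and
re-running the exact analysis is a quick check on whether a headline
conclusion survives.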

*Bio:* *Tamara Broderick is an Associate Professor in the Department of
Electrical Engineering and Computer Science at MIT. She is a member of the
MIT Laboratory for Information and Decision Systems (LIDS), the MIT
Statistics and Data Science Center, and the Institute for Data, Systems,
and Society (IDSS). She completed her Ph.D. in Statistics at the University
of California, Berkeley in 2014. Previously, she received an AB in
Mathematics from Princeton University (2007), a Master of Advanced Study
for completion of Part III of the Mathematical Tripos from the University
of Cambridge (2008), an MPhil by research in Physics from the University of
Cambridge (2009), and an MS in Computer Science from the University of
California, Berkeley (2013). Her recent research has focused on developing
and analyzing models for scalable Bayesian machine learning. She has been
awarded selection to the COPSS Leadership Academy (2021), an Early Career
Grant (ECG) from the Office of Naval Research (2020), an AISTATS Notable
Paper Award (2019), an NSF CAREER Award (2018), a Sloan Research Fellowship
(2018), an Army Research Office Young Investigator Program (YIP) award
(2017), Google Faculty Research Awards, an Amazon Research Award, the ISBA
Lifetime Members Junior Researcher Award, the Savage Award (for an
outstanding doctoral dissertation in Bayesian theory and methods), the
Evelyn Fix Memorial Medal and Citation (for the Ph.D. student on the
Berkeley campus showing the greatest promise in statistical research), the
Berkeley Fellowship, an NSF Graduate Research Fellowship, a Marshall
Scholarship, and the Phi Beta Kappa Prize (for the graduating Princeton
senior with the highest academic average).*

*--*

*More info:* This series is organized by the community that grew out of the
ICBINB workshops @ NeurIPS. Our goal as a community is to center unexpected
results, push back against "leaderboard-ism", and promote "slow science" in
machine learning research. This seminar series will be the first of a
number of community-building and collaborative initiatives we plan to
organize. For more information and for ways to get involved, please visit
us at http://icbinb.cc/, tweet at us @ICBINBWorkshop
<https://twitter.com/ICBINBWorkshop>, or email us at
cant.believe.it.is.not.better at gmail.com.

--
Best wishes,
The ICBINB Organizers