[CMU AI Seminar] October 31 at 12pm (GHC 6115 & Zoom) -- Abhin Shah (MIT) -- Group Fairness with Uncertainty in Sensitive Attributes -- AI Seminar sponsored by SambaNova Systems

Asher Trockman ashert at cs.cmu.edu
Sun Oct 29 17:01:09 EDT 2023


Dear all,

We look forward to seeing you *this Tuesday (10/31)* from *12:00-1:00 PM
(U.S. Eastern time)* for the next talk of this semester's *CMU AI Seminar*,
sponsored by SambaNova Systems <https://sambanova.ai/>. The seminar will be
held in GHC 6115 *with pizza provided* and will be streamed on Zoom. (Note:
the speaker will be virtual, but we will stream the talk in the room.)

To learn more about the seminar series or to see the future schedule,
please visit the seminar website <http://www.cs.cmu.edu/~aiseminar/>.

On this Tuesday (10/31), *Abhin Shah* (MIT) will be giving a talk titled
*"Group Fairness with Uncertainty in Sensitive Attributes"*.

*Title*: Group Fairness with Uncertainty in Sensitive Attributes

*Talk Abstract*: Learning a fair predictive model is crucial to mitigate
biased decisions against minority groups in high-stakes applications. A
common approach to learn such a model involves solving an optimization
problem that maximizes the predictive power of the model under an
appropriate group fairness constraint. However, in practice, sensitive
attributes are often missing or noisy, resulting in uncertainty. We
demonstrate that solely enforcing fairness constraints on uncertain
sensitive attributes can fall significantly short of the level of fairness
achieved by models trained without such uncertainty. To overcome this
limitation, we propose a bootstrap-based algorithm that achieves better
levels of fairness despite the uncertainty in sensitive attributes. The
algorithm is guided by a Gaussian analysis for the independence notion of
fairness where we propose a robust quadratically constrained quadratic
problem to ensure a strict fairness guarantee with uncertain sensitive
attributes. Our algorithm is applicable to both discrete and continuous
sensitive attributes and is effective in real-world classification and
regression tasks for various group fairness notions, e.g., independence and
separation.
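
As a rough illustration of the underlying issue (this is a minimal synthetic
sketch, not the speaker's algorithm): when sensitive attributes are observed
with noise, the measured independence (demographic-parity) gap is attenuated
relative to the true gap, so naively constraining the observed gap can leave
real unfairness. The noise rate, data, and bootstrap procedure below are all
assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: true sensitive attribute a_true, predictions biased toward a=1.
n = 5000
a_true = rng.integers(0, 2, n)                  # true group membership
yhat = (rng.random(n) < 0.3 + 0.3 * a_true)     # biased binary predictions

# Observed attributes are noisy: each label flips with probability 0.2.
flip = rng.random(n) < 0.2
a_obs = np.where(flip, 1 - a_true, a_true)

def dp_gap(yhat, a):
    """Demographic-parity (independence) gap: |P(yhat=1|a=1) - P(yhat=1|a=0)|."""
    return abs(yhat[a == 1].mean() - yhat[a == 0].mean())

# Noise attenuates the measured gap relative to the true one.
print(f"true gap:     {dp_gap(yhat, a_true):.3f}")
print(f"observed gap: {dp_gap(yhat, a_obs):.3f}")

# A bootstrap over resampled data and re-drawn attribute noise yields a
# distribution of gap estimates; constraining against its upper tail is one
# way to be robust to the attribute uncertainty.
gaps = []
for _ in range(200):
    idx = rng.integers(0, n, n)                 # resample with replacement
    renoise = rng.random(n) < 0.2               # re-draw the noise model
    a_boot = np.where(renoise, 1 - a_obs[idx], a_obs[idx])
    gaps.append(dp_gap(yhat[idx], a_boot))
print(f"95th-percentile bootstrap gap: {np.percentile(gaps, 95):.3f}")
```

With a 20% flip rate, the observed gap shrinks to roughly 60% of the true
gap, which is why a constraint tuned to the observed attributes can
undershoot the intended fairness level.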

*Speaker Bio:* Abhin Shah is a final-year Ph.D. student in the EECS
department at MIT, advised by Prof. Devavrat Shah and Prof. Greg Wornell. He is a
recipient of MIT’s Jacobs Presidential Fellowship. He interned at Google
Research in 2021 and at IBM Research in 2020. Prior to MIT, he graduated
from IIT Bombay with a Bachelor’s degree in Electrical Engineering. His
research interests include theoretical and applied aspects of trustworthy
machine learning with a focus on causality and fairness.

*In person: *GHC 6115
*Zoom Link*:
https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher Trockman