Fwd: Seminar Detail - Seminar Tracker

Artur Dubrawski awd at cs.cmu.edu
Mon Feb 13 15:36:43 EST 2023


Maria is back!

Artur

---------- Forwarded message ---------
From: Martin Gaynor <mgaynor at andrew.cmu.edu>
Date: Mon, Feb 13, 2023 at 3:35 PM
Subject: Seminar Detail - Seminar Tracker
To: heinz-phd at lists.andrew.cmu.edu, heinz-all-faculty at lists.andrew.cmu.edu


FYI, our terrific PhD alum Maria De-Arteaga, whom some of you may recall, is
giving a talk over at Tepper this Friday. Best, Marty


https://seminartracker.tepper.cmu.edu/

Maria De-Arteaga

BT/IS Seminar - A Case for Humans-in-the-Loop: Decisions in the Presence of
Misestimated Algorithmic Scores

Business Technology
February 17, 2023 at 12:00 PM EST (local) || Duration: 60 minutes
Location: Virtual, Meeting Link: (Virtual)

UT Austin

The increased use of machine learning to assist with decision-making in
high-stakes domains has been met with both enthusiasm and concern. One
source of ongoing debate is the effect and value of decision makers'
discretionary power to override algorithmic recommendations. In this paper,
we study the adoption of an algorithmic tool used to help with decisions in
child maltreatment hotline screenings. By taking advantage of an
implementation glitch, we investigate corrective overrides: whether
decision makers are more likely to override algorithmic recommendations
when the tool misestimates the risk score shown to call workers. We find
that, after the deployment of the tool, decisions became better aligned
with algorithmic assessments, but human adherence to the tool's
recommendation was less likely when the displayed score was misestimated as
a result of the glitch. Then, analyzing the effect of adoption and
overrides on racial and socioeconomic disproportionalities, we find that
the deployment of the tool did not affect disproportionalities with respect
to the pre-deployment period. We also observe that
the disproportionalities resulting from algorithmic-informed decisions
were substantially smaller than those associated with the algorithm in
isolation. Together, these results make a case for the value of humans
in-the-loop, showing that in high-stakes contexts, human discretionary
power can mitigate the risks of algorithmic errors and reduce disparities.
Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4050125

Additional Notes: Meeting ID: 951 4541 3358, Passcode: 294771
If you have any questions, please contact Phil Conley at
pconley at andrew.cmu.edu or (412) 268-6212 (Carnegie Mellon University).



