Fwd: Thesis Proposal - November 5, 2024 - Youngseog Chung - Methods for Calibrated Uncertainty Quantification and Understanding its Utility

Jeff Schneider jeff4 at andrew.cmu.edu
Tue Nov 5 12:12:31 EST 2024


Hi Everyone,

You're invited to see Youngseog's thesis proposal starting at 1pm today!

Jeff.



-------- Forwarded Message --------
Subject: 	Thesis Proposal - November 5, 2024 - Youngseog Chung - Methods 
for Calibrated Uncertainty Quantification and Understanding its Utility
Date: 	Wed, 30 Oct 2024 05:27:41 -0400
From: 	Diane L Stidle <stidle at andrew.cmu.edu>
To: 	ml-seminar at cs.cmu.edu, jsnoek at google.com



*/Thesis Proposal/*

Date: November 5, 2024
Time: 1:00pm (EST)
Place: GHC 4405
Speaker: Youngseog Chung

*Thesis Title:*
Methods for Calibrated Uncertainty Quantification and Understanding its 
Utility
*Abstract:*
As machine learning models have become more capable of dealing with 
complex data, they have been entrusted with an increasing array of 
predictive tasks. However, with growing reliance on model predictions, 
being able to assess whether a given model prediction is reliable has 
become equally important. Uncertainty quantification (UQ) plays a 
critical role in this context by providing a measure of confidence in a 
model's predictions, and the quantified uncertainty is considered 
correct if it is calibrated. In this proposal, I address the problem of 
optimizing for calibration, especially with regression models that 
output a distribution over continuous-valued targets. In my initial
work, I propose a collection of methods and techniques to train a 
quantile model end-to-end with differentiable loss functions that 
optimize directly for the calibration of the predictive quantiles. This 
work falls under the class of pre-hoc methods, which aim to improve 
calibration during model training, and distinguishes itself from the 
relatively richer line of work on post-hoc calibration, which aims to 
calibrate a pre-trained predictive model. Afterwards, I introduce
a method to feasibly extend the notion of calibration to 
multi-dimensional distributions and describe a post-hoc calibration (or 
recalibration) algorithm. I further discuss how distributional 
predictions are utilized in applications such as decision-making tasks 
or model-based reinforcement learning, and point out that each 
application setting demands different qualities from the 
distributional prediction.
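
As a rough illustration of the pre-hoc idea above, here is a minimal 
sketch of a differentiable calibration-style loss for quantile 
predictions. It assumes PyTorch; the function name, smoothing 
temperature, and squared-error penalty are illustrative choices, not 
the proposal's exact objective.

import torch

def soft_calibration_loss(quantile_preds, y, levels, temperature=0.05):
    # quantile_preds: (batch, num_levels) predicted quantiles q_p(x)
    # y: (batch,) observed targets
    # levels: (num_levels,) quantile levels p in (0, 1)
    # Replace the hard indicator 1[y <= q_p] with a sigmoid so the
    # empirical coverage estimate stays differentiable in the predictions.
    soft_indicator = torch.sigmoid((quantile_preds - y.unsqueeze(1)) / temperature)
    coverage = soft_indicator.mean(dim=0)  # observed coverage per level
    # Calibration asks that observed coverage match each nominal level p.
    return ((coverage - levels) ** 2).mean()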
In light of the observation that different applications demand 
different qualities, I propose several research directions that study 
how distributional predictions are used in downstream applications. In 
particular, I propose re-investigating proper scoring rules as a tool 
for eliciting good or useful behavior from distributional predictions 
in a pre-hoc manner.
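
For reference, the pinball (check) loss is the standard proper scoring 
rule for a single quantile: its expectation over the data distribution 
is minimized by the true p-th quantile. A minimal NumPy sketch, with an 
illustrative function name:

import numpy as np

def pinball_loss(q_pred, y, p):
    # Under-predictions (y > q_pred) are weighted by p and
    # over-predictions by (1 - p), so the expected loss is
    # minimized exactly at the true p-th quantile.
    diff = y - q_pred
    return np.mean(np.maximum(p * diff, (p - 1.0) * diff))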
*Thesis Committee:*
Jeff Schneider (Chair)
Aarti Singh
Zico Kolter
Jasper Snoek (Google DeepMind)
*Link to the draft document:*
https://youngseogchung.github.io/docs/thesis_proposal.pdf

*Zoom meeting link:*
https://cmu.zoom.us/my/youngseog.chung


