IT IS TOMORROW: RI Ph.D. Thesis Defense: Jack Good

Artur Dubrawski awd at cs.cmu.edu
Tue Oct 22 10:24:13 EDT 2024


Please make sure to witness the transformation of Jack Good into Doctor Good
tomorrow!

Artur

On Mon, Oct 14, 2024 at 11:24 AM Artur Dubrawski <awd at cs.cmu.edu> wrote:

> Please mark your calendars to attend this transformative event!
>
> Cheers
> Artur
>
> ---------- Forwarded message ---------
> From: Suzanne Muth <lyonsmuth at cmu.edu>
> Date: Mon, Oct 14, 2024 at 11:06 AM
> Subject: RI Ph.D. Thesis Defense: Jack Good
> To: RI People <ri-people at andrew.cmu.edu>
>
>
> *Date:* 23 October 2024
> *Time:* 3:30 p.m. (ET)
> *Location:* GHC 6501
> *Zoom Link:*
> https://cmu.zoom.us/j/98137707124?pwd=g1OvJBlbgfZFtLQm3tIaEWos9eFhcZ.1
> *Type:* Ph.D. Thesis Defense
> *Who:* Jack Good
> *Title:* Trustworthy Learning using Uncertain Interpretation of Data
>
> *Abstract:*
> Motivated by the potential of Artificial Intelligence (AI) in high-cost
> and safety-critical applications, and more recently by the increasing
> presence of AI in our everyday lives, research on Trustworthy AI has grown
> in prominence as a broad area encompassing topics such as interpretability,
> robustness, verifiable safety, fairness, privacy, accountability, and more.
> This growth has highlighted a tension between simple, transparent models
> with inherent trust-related benefits and complex, black-box models with
> unparalleled performance on many tasks. Towards closing this gap, we
> propose and study an uncertain interpretation of numerical data and apply
> it to tree-based models, resulting in a novel kind of fuzzy decision tree,
> the Kernel Density Decision Tree (KDDT), with improved performance,
> enhanced trustworthy qualities, and increased utility, enabling the use of
> these trees in broader applications. We group the contributions of this
> thesis into three pillars.
>
> The first pillar is robustness and verification. The uncertain
> interpretation, by accounting for uncertainty in the data, and more
> generally by acting as a kind of regularization on the function
> represented by a model, can improve the model with respect to various
> notions of robustness.
> We demonstrate its ability to improve robustness to noisy features and
> noisy labels, both of which are common in real-world data. Next, we show
> that efficiently verifiable adversarial robustness is achievable through
> the theory of randomized smoothing (a generic sketch of this standard
> technique follows the forwarded message below). Finally, we discuss the
> related topic of
> verification and propose the first verification algorithm for fuzzy
> decision trees.
>
> The second pillar is interpretability. While decision trees are widely
> considered to be interpretable, good performance from tree-based models is
> often limited to tabular data and demands both feature engineering, which
> increases design effort, and ensemble methods, which severely diminish
> interpretability compared to single-tree models. By leveraging the
> efficient fitting and differentiability of KDDTs, we propose a system for
> learning parameterized feature transformations for decision trees. By
> choosing interpretable feature classes and applying sparsity
> regularization, we can obtain compact single-tree models with competitive
> performance. We demonstrate applications to tabular, time series, and
> simple image data.
>
> The third pillar is pragmatic advancements. Semi-supervised Learning (SSL)
> is motivated by the expense of labeling and learns from a mix of labeled
> and unlabeled data. SSL for trees is generally limited to black-box
> wrapper methods, for which trees are not well-suited. As an alternative,
> we propose a novel intrinsic SSL method based on our uncertain
> interpretation of data.
> Federated Learning (FL) is motivated by data sharing limitations and learns
> from distributed data by communicating models. We introduce a new FL
> algorithm based on function space regularization, which borrows concepts
> and methods from our formalism of uncertain interpretation. Unlike prior FL
> methods, it supports non-parametric models and has convergence guarantees
> under mild assumptions. Finally, we show how our FL algorithm also provides
> a simple utility for ensemble merging.
>
> *Thesis Committee Members:*
> Artur Dubrawski, Chair
> Jeff Schneider
> Tom Mitchell
> Gilles Clermont, University of Pittsburgh
>
> A draft of the thesis defense document is available here:
> <https://drive.google.com/file/d/1q2Iy7KGQ8LvgrII71d1kLUMXnBKU31BT/view?usp=sharing>
>
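For readers unfamiliar with the randomized smoothing mentioned in the
abstract, below is a minimal, generic sketch of the Monte Carlo
certification procedure of Cohen et al. (2019). It illustrates the general
theory only, not the thesis's KDDT-specific method; base_classifier
(assumed to return integer class labels), sigma, n, and alpha are
illustrative assumptions.

    import numpy as np
    from scipy.stats import binomtest, norm

    def certify_smoothed(base_classifier, x, sigma=0.25, n=1000,
                         alpha=0.001):
        # Smoothed classifier: g(x) = argmax_c P[f(x + eps) = c],
        # with eps ~ N(0, sigma^2 I). Estimate the top class by sampling,
        # then certify an L2 radius of sigma * Phi^{-1}(p_lower).
        noise = sigma * np.random.randn(n, x.size)
        preds = np.array([base_classifier(x + e) for e in noise])
        top = int(np.bincount(preds).argmax())    # majority-vote class
        k = int((preds == top).sum())
        # One-sided lower confidence bound on P[f(x + eps) = top].
        p_lower = binomtest(k, n, alternative="greater").proportion_ci(
            confidence_level=1 - alpha).low
        if p_lower <= 0.5:
            return None, 0.0                      # abstain: no certificate
        return top, sigma * norm.ppf(p_lower)     # class, certified radius

With a fixed base classifier f, the returned radius r guarantees that the
smoothed classifier's prediction stays constant within an L2 ball of
radius r around x, with confidence 1 - alpha.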