[CMU AI Seminar] Special! October 25 at *10am* (GHC 6115 & Zoom) -- Sanae Lotfi (NYU) -- Are the Marginal Likelihood and PAC-Bayes Bounds the right proxies for Generalization? -- AI Seminar sponsored by SambaNova Systems

Asher Trockman ashert at cs.cmu.edu
Sun Oct 22 22:51:55 EDT 2023


Dear all,

We look forward to seeing you *this Wednesday (10/25)* from *10:00-11:00
AM (U.S. Eastern time)* for a special installment of this semester's
*CMU AI Seminar*, sponsored by SambaNova Systems <https://sambanova.ai/>.
The seminar will be held in GHC 6115 and will be streamed on Zoom. *(Note
the earlier time! ⏰)*

To learn more about the seminar series or to see the future schedule,
please visit the seminar website <http://www.cs.cmu.edu/~aiseminar/>.

This Wednesday (10/25), *Sanae Lotfi* (NYU) will be giving a talk titled
*"Are the Marginal Likelihood and PAC-Bayes Bounds the right proxies for
Generalization?"*.

*Title*: Are the Marginal Likelihood and PAC-Bayes Bounds the right proxies
for Generalization?

*Talk Abstract*: How do we compare hypotheses that are entirely
consistent with observations? The marginal likelihood, which represents the
probability of generating our observations from a prior, provides a
distinctive approach to this foundational question. We first highlight the
conceptual and practical issues in using the marginal likelihood as a proxy
for generalization. Namely, we show how the marginal likelihood can be
negatively correlated with generalization and can lead to both underfitting
and overfitting in hyperparameter learning. We provide a partial remedy
through a conditional marginal likelihood, which we show to be more aligned
with generalization, and practically valuable for large-scale
hyperparameter learning, such as in deep kernel learning. PAC-Bayes bounds
are another expression of Occam’s razor where simpler descriptions of the
data generalize better. While there has been progress in developing tighter
PAC-Bayes bounds for deep neural networks, these bounds tend to be
uninformative about why deep learning works. In this talk, I will also
present our compression approach based on quantizing neural network
parameters in a linear subspace, which profoundly improves on previous
results to provide state-of-the-art generalization bounds on a variety of
tasks. We use these tight bounds to better understand the role of model
size, equivariance, and the implicit biases of optimization for
generalization in deep learning. Notably, our work shows that large models
can be compressed to a much greater extent than previously known. Finally,
I will discuss the connection between the marginal likelihood and PAC-Bayes
bounds for model selection.
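
For context, here are standard textbook forms of the two proxies mentioned
above (general statements, not the specific bounds from the speaker's papers).
The marginal likelihood of a model M averages the likelihood of the data D
over the prior, and a common McAllester/Maurer-style PAC-Bayes bound states
that, with probability at least 1 - \delta over an i.i.d. sample of size n
(for losses in [0, 1]), the risk of any posterior Q over parameters is bounded
by its empirical risk plus a complexity term involving the KL divergence to a
fixed prior P:

    p(\mathcal{D} \mid \mathcal{M}) = \int p(\mathcal{D} \mid \theta, \mathcal{M}) \, p(\theta \mid \mathcal{M}) \, \mathrm{d}\theta

    \mathbb{E}_{\theta \sim Q}[R(\theta)] \le \mathbb{E}_{\theta \sim Q}[\hat{R}_n(\theta)] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n}}

Roughly speaking, compression-based bounds of the kind discussed in the talk
choose a prior that favors short descriptions, so the KL term scales with the
compressed size of the network rather than its raw parameter count.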

*Speaker Bio:* Sanae Lotfi <https://sanaelotfi.github.io> is a PhD student
at NYU, advised by Professor Andrew Gordon Wilson. Sanae works on the
foundations of deep learning. Her goal is to understand and quantify
generalization in deep learning, and use this understanding to build more
robust and reliable machine learning models. Sanae's PhD research has been
recognized with an ICML Outstanding Paper Award and is generously supported
by the Microsoft and DeepMind Fellowships, the Meta AI Mentorship Program
and the NYU CDS Fellowship. Prior to joining NYU, Sanae obtained a Master’s
degree in applied mathematics from Polytechnique Montreal, where she worked
on designing stochastic first- and second-order algorithms with compelling
theoretical and empirical properties for large-scale optimization.

*In person: *GHC 6115
*Zoom Link*:
https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher Trockman

