Connectionists: ICBINB Monthly Seminar Series Talk: Matt Hoffman

Francisco J. Rodríguez Ruiz franrruiz87 at gmail.com
Thu May 25 14:17:37 EDT 2023


Dear all,

We are pleased to announce that the next speaker of the *“I Can’t Believe
It’s Not Better!” (ICBINB)* virtual seminar series will be *Matt Hoffman*
(Google). More details about this series and the talk are below.

The *"I Can't Believe It's Not Better!" (ICBINB) monthly online seminar
series* seeks to shine a light on the "stuck" phase of research. Speakers
will tell us about their most beautiful ideas that didn't "work", about
when theory didn't match practice, or perhaps just when the going got
tough. These talks will let us peek inside the file drawer of unexpected
results and peer behind the curtain to see the real story of *how real
researchers did real research*.

*When:* June 1st, 2023 at 4pm CEST / 10am EDT

*Where:* RSVP for the Zoom link here:
https://us02web.zoom.us/meeting/register/tZwtdOGprz0pGNJUCAKTX9xqrOCHlm5xZONJ

*Title:* How (Not) to Be Bayesian in an Age of Giant Models

*Abstract:* Giant neural networks have taken the world by storm. These
neural networks are first pretrained using self-supervision strategies to
soak up knowledge about common patterns in huge unlabeled datasets, and can
then be fine-tuned to solve specific problems using small amounts of
task-specific supervision. We might say that the pretrained model
implicitly expresses a strong prior on what kinds of patterns are relevant,
which implies that the ideal fine-tuning process must be some form of
Bayesian inference! But it turns out that operationalizing this insight is
harder than it sounds. I’ll discuss one of our explorations in this
direction in which, with a fair amount of thought, work, specialized
expertise, and extra computation, we were able to use Bayesian inference to
get results that were...almost as good as just fine-tuning using gradient
descent. Along the way, we (re)learned some lessons about prior
specification, sparsity, random-matrix theory, and the naive genius of
gradient descent.
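(As a rough illustration of the "pretraining as a prior" idea in the abstract,
and not of the speaker's specific method: fine-tuning with an L2 penalty that
pulls the weights back toward their pretrained values is MAP inference under a
Gaussian prior centered at the pretrained weights. A minimal sketch in Python,
where map_finetune_step, theta_pretrained, and the random placeholder gradients
are hypothetical names used only for this example:)

    # Minimal sketch: one gradient step on the negative log-posterior
    #   -log p(theta | D_task) = task_loss(theta) + ||theta - theta_pretrained||^2 / (2*sigma2) + const,
    # i.e. a Gaussian prior N(theta; theta_pretrained, sigma2 * I) around the pretrained weights.
    import numpy as np

    def map_finetune_step(theta, theta_pretrained, grad_task_loss, lr=1e-3, sigma2=1.0):
        """Take one MAP fine-tuning step; smaller sigma2 keeps theta closer to the prior mean."""
        grad_neg_log_prior = (theta - theta_pretrained) / sigma2  # pulls theta toward pretrained weights
        return theta - lr * (grad_task_loss + grad_neg_log_prior)

    # Hypothetical usage with random placeholders standing in for a real model and task loss.
    theta_pretrained = np.random.randn(10)
    theta = theta_pretrained.copy()
    grad_task_loss = np.random.randn(10)  # would come from backprop on task-specific data
    theta = map_finetune_step(theta, theta_pretrained, grad_task_loss)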

*Bio:* Matt Hoffman is a Research Scientist at Google. His main research
focus is probabilistic modeling and approximate inference algorithms. He
has worked on various applications including music information retrieval,
speech enhancement, topic modeling, learning to rank, computer vision, user
interfaces, user behavior modeling, social network analysis, digital
imaging, and astronomy. He is a co-creator of the widely used statistical
modeling package Stan, and a contributor to the TensorFlow Probability
library.

For more information and for ways to get involved, please visit us at
http://icbinb.cc/, Tweet to us @ICBINBWorkshop
<https://twitter.com/ICBINBWorkshop>, or email us at
cant.believe.it.is.not.better at gmail.com.

--
Best wishes,
The ICBINB Organizers

