[AI Seminar] Fwd: AI Lunch -- Stephan Mandt -- September 13th, 2016
vitercik at cs.cmu.edu
Mon Sep 12 08:59:08 EDT 2016
This is a reminder that this talk is tomorrow, Tuesday, September 13th.
---------- Forwarded message ----------
From: Ellen Vitercik <vitercik at cs.cmu.edu>
Date: Wed, Sep 7, 2016 at 12:57 PM
Subject: AI Lunch -- Stephan Mandt -- September 13th, 2016
To: ai-seminar-announce at cs.cmu.edu, Stephan Mandt <stephan.mandt at disneyresearch.com>
Dear faculty and students,
We look forward to seeing you this Tuesday, September 13th, at noon in NSH
3305 for AI lunch. To learn more about the seminar and lunch, please visit
the AI Lunch webpage <http://www.cs.cmu.edu/~aiseminar/>.
On Tuesday, Stephan Mandt <http://www.stephanmandt.com>, a Research
Scientist at Disney Research Pittsburgh, will give a talk titled
"Variational Inference: From Artificial Temperatures to Stochastic
Gradients."
*Abstract:* Bayesian modeling is a popular approach to solving machine
learning problems. In this talk, we will first review variational
inference, where we map Bayesian inference to an optimization problem. This
optimization problem is non-convex, meaning that there are many local
optima that correspond to poor fits of the data. We first show that by
introducing a “local temperature” to every data point and applying the
machinery of variational inference, we can avoid some of these poor optima,
suppress the effects of outliers, and ultimately find more meaningful
patterns. In the second part of the talk, we will present a Bayesian
view on Stochastic Gradient Descent (SGD). When run with a constant,
non-decreasing learning rate, SGD first marches towards the optimum of the
objective and then samples from a stationary distribution that is centered
around the optimum. As such, SGD resembles Markov Chain Monte Carlo (MCMC)
algorithms which, after a burn-in period, draw samples from a Bayesian
posterior. Drawing on the tools of variational inference, we investigate
and formalize this connection. Our analysis reveals criteria that allow us
to use SGD as an approximate scalable MCMC algorithm that can compete with
more complicated state-of-the-art Bayesian approaches.
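The constant-learning-rate behavior described above can be illustrated with a toy sketch (not the speaker's method; the quadratic loss, noise level, and burn-in length are illustrative assumptions): SGD on a noisy one-dimensional objective first converges toward the optimum, then bounces around it in a stationary distribution, so post-burn-in iterates can be treated like MCMC draws.

```python
# Illustrative sketch only: constant-step-size SGD on a noisy quadratic
# loss 0.5*(theta - 3)^2. After a burn-in phase the iterates fluctuate in
# a stationary distribution centered on the optimum, resembling an MCMC
# chain sampling around a posterior mode.
import random

random.seed(0)

def noisy_grad(theta, optimum=3.0, noise=1.0):
    """Gradient of 0.5*(theta - optimum)^2 plus minibatch-style noise."""
    return (theta - optimum) + random.gauss(0.0, noise)

theta = -5.0              # start far from the optimum
lr = 0.1                  # constant, non-decreasing learning rate
burn_in, n_samples = 500, 5000
samples = []

for step in range(burn_in + n_samples):
    theta -= lr * noisy_grad(theta)
    if step >= burn_in:
        samples.append(theta)   # treat post-burn-in iterates as draws

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"sample mean ~ {mean:.2f} (optimum 3.0), sample variance ~ {var:.3f}")
```

The sample mean lands near the optimum, while the spread of the draws is controlled by the learning rate and gradient noise; that dependence is what the talk's analysis formalizes to use SGD as an approximate sampler.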
*Speaker bio: *Stephan Mandt is a Research Scientist at Disney Research
Pittsburgh where he leads the statistical machine learning group.
Previously, he was a postdoctoral researcher with David Blei at Columbia
University, where he worked on scalable approximate Bayesian inference
algorithms. Trained as a statistical physicist, he previously held a
postdoctoral fellowship at Princeton University and earned a Ph.D. from the
University of Cologne as a fellow of the German National Merit Foundation.
Personal website: www.stephanmandt.com
Ellen and Ariel