<div dir="ltr">A gentle reminder that the talk will happen tomorrow (Tuesday) noon.</div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Nov 19, 2017 at 9:00 AM,  <span dir="ltr"><<a href="mailto:ai-seminar-announce-request@cs.cmu.edu" target="_blank">ai-seminar-announce-request@cs.cmu.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send ai-seminar-announce mailing list submissions to<br>
        <a href="mailto:ai-seminar-announce@cs.cmu.edu">ai-seminar-announce@cs.cmu.edu</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
        <a href="https://mailman.srv.cs.cmu.edu/mailman/listinfo/ai-seminar-announce" rel="noreferrer" target="_blank">https://mailman.srv.cs.cmu.<wbr>edu/mailman/listinfo/ai-<wbr>seminar-announce</a><br>
or, via email, send a message with subject or body 'help' to<br>
        <a href="mailto:ai-seminar-announce-request@cs.cmu.edu">ai-seminar-announce-request@<wbr>cs.cmu.edu</a><br>
<br>
You can reach the person managing the list at<br>
        <a href="mailto:ai-seminar-announce-owner@cs.cmu.edu">ai-seminar-announce-owner@cs.<wbr>cmu.edu</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of ai-seminar-announce digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
   1.  AI Seminar sponsored by Apple -- Vaishnavh Nagarajan     -- Nov<br>
      21 (Adams Wei Yu)<br>
<br>
<br>
------------------------------------------------------------------------

Message: 1
Date: Sun, 19 Nov 2017 05:46:06 -0800
From: Adams Wei Yu <weiyu@cs.cmu.edu>
To: ai-seminar-announce@cs.cmu.edu
Cc: Vaishnavh Nagarajan <vaishnavh.nagarajan@gmail.com>
Subject: [AI Seminar] AI Seminar sponsored by Apple -- Vaishnavh Nagarajan -- Nov 21
Message-ID: <CABzq7ertCO5ccBL7_h_8d6zzV6MZCQOiu5idosiMg4Y7ytPs3Q@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Dear faculty and students,

We look forward to seeing you next Tuesday, Nov 21, at noon in NSH 3305 for
the AI Seminar sponsored by Apple. To learn more about the seminar series,
please visit the AI Seminar webpage <http://www.cs.cmu.edu/~aiseminar/>.

On Tuesday, Vaishnavh Nagarajan
<http://www.cs.cmu.edu/~vaishnan/home/index.html> will give the following
talk:

Title: Gradient Descent GANs are locally stable

Abstract:

Generative modeling, a core problem in unsupervised learning, aims to
understand data by learning a model that can generate datapoints resembling
the real-world distribution. Generative Adversarial Networks (GANs) are an
increasingly popular framework that solves this problem by optimizing two
deep networks, a "discriminator" and a "generator", in tandem.

However, this complex optimization procedure is still poorly understood.
More specifically, it was not known whether the equilibrium points of this
system are "locally asymptotically stable", i.e., whether the optimization
procedure, when initialized sufficiently close to an equilibrium point,
converges to that point. In this work, we analyze the "gradient descent"
form of GAN optimization, i.e., the setting where we simultaneously take
small gradient steps in both the generator and discriminator parameters. We
show that even though GAN optimization does not correspond to a
convex-concave game (even for simple parameterizations), its equilibrium
points are still locally asymptotically stable under proper conditions. On
the other hand, we show that for the recently proposed Wasserstein GAN
(WGAN), the optimization procedure might cycle around an equilibrium point
without ever converging to it. Finally, motivated by this stability
analysis, we propose an additional regularization term for GAN updates,
which can guarantee local stability for both the WGAN and the traditional
GAN. Our regularizer also shows practical promise in speeding up
convergence and in addressing a well-known failure mode in GANs called
mode collapse.
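
To make the "simultaneous gradient steps" setting concrete, here is a
minimal NumPy sketch on the toy bilinear game V(x, y) = x*y, where x plays
the generator (minimizing V) and y the discriminator (maximizing V). The
choice of game, step size, and regularization weight are illustrative
assumptions, not the construction from the talk; the eta term penalizes the
squared norm of the discriminator's gradient inside the generator's update,
in the spirit of the gradient-based regularizer described above.

import numpy as np

def simultaneous_gda(steps=5000, lr=0.01, eta=0.0):
    """Simultaneous gradient descent-ascent on the bilinear game V(x, y) = x * y.

    x minimizes V (the "generator"), y maximizes V (the "discriminator").
    eta > 0 adds a penalty on |dV/dy|^2 = x^2 to x's objective, a toy version
    of a gradient-norm regularizer; eta = 0 is the plain, unregularized game.
    """
    x, y = 1.0, 1.0                              # start near, not at, the equilibrium (0, 0)
    for _ in range(steps):
        grad_x = y + 2.0 * eta * x               # d/dx [ x*y + eta * x**2 ]
        grad_y = x                               # d/dy [ x*y ]
        x, y = x - lr * grad_x, y + lr * grad_y  # simultaneous (not alternating) updates
    return np.hypot(x, y)                        # final distance from the equilibrium

print("plain updates:      ", simultaneous_gda(eta=0.0))  # cycles and slowly drifts outward
print("regularized updates:", simultaneous_gda(eta=0.5))  # spirals in toward (0, 0)

Run as-is, the unregularized iterates end up farther from the origin than
they started, illustrating the cycling/non-convergence behavior discussed
for the WGAN, while the regularized iterates contract toward the
equilibrium.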

------------------------------

Subject: Digest Footer

_______________________________________________
ai-seminar-announce mailing list
ai-seminar-announce@cs.cmu.edu
https://mailman.srv.cs.cmu.edu/mailman/listinfo/ai-seminar-announce

------------------------------

End of ai-seminar-announce Digest, Vol 78, Issue 5
**************************************************