[CMU AI Seminar] Nov 23 at 12pm (Zoom) -- Eric Wallace (UC Berkeley) -- What Can We Learn From Vulnerabilities of ML Models? -- AI Seminar sponsored by Morgan Stanley

Shaojie Bai shaojieb at cs.cmu.edu
Mon Nov 22 16:10:57 EST 2021


Hi all,

Just a reminder that the CMU AI Seminar <http://www.cs.cmu.edu/~aiseminar/> is
tomorrow from *12pm to 1pm*:
https://cmu.zoom.us/j/96673560117?pwd=WHFMWERjWkphbnlNMWl2cmk5aE1QZz09.

*Eric Wallace* (UC Berkeley) will be talking about large-scale NLP models
and their vulnerabilities (see below)!

--------------------------------------
*Title:*  What Can We Learn From Vulnerabilities of ML Models?

*Talk Abstract:* Today's neural network models achieve high accuracy on
in-distribution data and are being widely deployed in production systems.
This talk will discuss attacks on such models that not only expose
worrisome security and privacy vulnerabilities, but also provide new
perspectives into how and why the models work. Concretely, I will show how
realistic adversaries can extract secret training data, steal model
weights, and manipulate test predictions, all using black-box access to
models at either training- or test-time. These attacks will reveal
different insights, including how NLP models rely on dataset biases and
spurious correlations, and how their training dynamics impact memorization
of examples. Finally, I will discuss defenses against these vulnerabilities
and suggest practical takeaways for developing secure ML systems.

*Speaker Bio: *Eric Wallace is a 3rd year PhD student at UC Berkeley
advised by Dan Klein and Dawn Song. His research interests center around
large language models and making them more secure, private, and robust.
Eric's work received the best demo award at EMNLP 2019.
--------------------------------------

Thanks and Happy Thanksgiving,
Shaojie


On Fri, Nov 19, 2021 at 12:51 PM Shaojie Bai <shaojieb at cs.cmu.edu> wrote:

> Dear all,
>
> We look forward to seeing you *next Tuesday (11/23)* from *12:00-1:00
> PM (U.S. Eastern time)* for the next talk of our *CMU AI Seminar*,
> sponsored by Morgan Stanley
> <https://www.morganstanley.com/about-us/technology/>.
>
> To learn more about the seminar series or see the future schedule, please
> visit the seminar website <http://www.cs.cmu.edu/~aiseminar/>.
>
> On 11/23, *Eric Wallace* (UC Berkeley) will be giving a talk on "*What
> Can We Learn From Vulnerabilities of ML Models?*" and sharing his latest
> research on large-scale NLP models and their vulnerabilities.
>
> *Title:*  What Can We Learn From Vulnerabilities of ML Models?
>
> *Talk Abstract:* Today's neural network models achieve high accuracy on
> in-distribution data and are being widely deployed in production systems.
> This talk will discuss attacks on such models that not only expose
> worrisome security and privacy vulnerabilities, but also provide new
> perspectives into how and why the models work. Concretely, I will show how
> realistic adversaries can extract secret training data, steal model
> weights, and manipulate test predictions, all using black-box access to
> models at either training- or test-time. These attacks will reveal
> different insights, including how NLP models rely on dataset biases and
> spurious correlations, and how their training dynamics impact memorization
> of examples. Finally, I will discuss defenses against these vulnerabilities
> and suggest practical takeaways for developing secure ML systems.
>
> *Speaker Bio: *Eric Wallace is a 3rd year PhD student at UC Berkeley
> advised by Dan Klein and Dawn Song. His research interests center around
> large language models and making them more secure, private, and robust.
> Eric's work received the best demo award at EMNLP 2019.
>
> *Zoom Link: *
> https://cmu.zoom.us/j/96673560117?pwd=WHFMWERjWkphbnlNMWl2cmk5aE1QZz09
>
> Best,
> Shaojie Bai (MLD)
>