[CMU AI Seminar] Apr 27 at 12pm (Zoom) -- Bo Li (UIUC) -- Secure Learning in Adversarial Environments -- AI Seminar sponsored by Fortive

Shaojie Bai shaojieb at andrew.cmu.edu
Mon Apr 26 15:46:59 EDT 2021


Dear all,

Just a reminder that the CMU AI Seminar <http://www.cs.cmu.edu/~aiseminar/> is
tomorrow *12pm-1pm*:
https://cmu.zoom.us/j/92752459790?pwd=aWN0NXhxaitPTHlnaEZpYUFuNUk1UT09.

Bo Li (UIUC) (see details below) will be talking about adversarial attacks
and security problems in modern machine learning.

Thanks,
Shaojie

On Tue, Apr 20, 2021 at 1:15 PM Shaojie Bai <shaojieb at andrew.cmu.edu> wrote:

> Dear all,
>
> We look forward to seeing you *next Tuesday (4/27)* from *12:00-1:00 PM
> (U.S. Eastern time)* for the next talk of our *CMU AI seminar*, sponsored
> by Fortive <https://careers.fortive.com/>.
>
> To learn more about the seminar series or see the future schedule, please
> visit the seminar website <http://www.cs.cmu.edu/~aiseminar/>.
>
> On 4/27, *Bo Li* (UIUC) will be giving a talk on "*Secure Learning in
> Adversarial Environments*".
>
> *Title*: Secure Learning in Adversarial Environments
>
> *Talk Abstract*: Advances in machine learning have led to rapid and
> widespread deployment of learning-based inference and decision making for
> safety-critical applications, such as autonomous driving and security
> diagnostics. Current machine learning systems, however, assume that
> training and test data follow the same, or similar, distributions, and do
> not consider active adversaries manipulating either distribution. Recent
> work has demonstrated that motivated adversaries can circumvent anomaly
> detection or other machine learning models at test time through evasion
> attacks, or can inject well-crafted malicious instances into training data
> to induce errors at inference time through poisoning attacks. In this
> talk, I will describe my recent research on security and privacy problems
> in machine learning systems. In particular, I will introduce several
> adversarial attacks in different domains, and discuss potential defensive
> approaches and principles, including game-theoretic and knowledge-enabled
> robust learning paradigms, toward developing practical learning systems
> with robustness guarantees.
>
> *Speaker Bio*: Dr. Bo Li is an assistant professor in the Department of
> Computer Science at the University of Illinois at Urbana-Champaign, and the
> recipient of the Symantec Research Labs Fellowship, Rising Stars, MIT
> Technology Review TR-35 award, Intel Rising Star award, Amazon Research
> Award, and best paper awards in several machine learning and security
> conferences. Previously, she was a postdoctoral researcher at UC Berkeley.
> Her research focuses on both theoretical and practical aspects of security,
> machine learning, privacy, game theory, and adversarial machine learning.
> She has designed several robust learning algorithms, scalable frameworks
> for achieving robustness for a range of learning methods, and a
> privacy-preserving data publishing system. Her work has been featured by
> major publications and media outlets such as Nature, Wired, Fortune, and
> The New York Times.
>
> *Zoom Link*:
> https://cmu.zoom.us/j/92752459790?pwd=aWN0NXhxaitPTHlnaEZpYUFuNUk1UT09
>
> Thanks,
> Shaojie Bai (MLD)
>