[CMU AI Seminar] Feb 8 at 12pm (Zoom) -- Huan Zhang (CMU) -- How We Trust a Black-box: Formal Verification of Deep Neural Networks -- AI Seminar sponsored by Morgan Stanley

Asher Trockman ashert at cs.cmu.edu
Tue Feb 8 11:59:40 EST 2022


Hi all,

The seminar today by Huan Zhang on neural network verification is happening
right now!

In case you are interested:
https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09

Thanks,
Asher

On Mon, Feb 7, 2022 at 12:12 PM Asher Trockman <ashert at cs.cmu.edu> wrote:

> Hi all,
>
> Just a reminder that the CMU AI Seminar
> <http://www.cs.cmu.edu/~aiseminar/> is tomorrow *12pm-1pm*:
> https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09.
>
> *Huan Zhang (CMU)* will be talking about his award-winning neural network
> verification techniques, as well as neural network verification more
> generally.
>
> Thanks,
> Asher
>
> On Fri, Feb 4, 2022 at 12:41 PM Asher Trockman <ashert at cs.cmu.edu> wrote:
>
>> Dear all,
>>
>> Welcome to the CMU AI Seminar for the Spring 2022 semester!
>>
>> We look forward to seeing you *next Tuesday (2/8)* from *12:00-1:00 PM
>> (U.S. Eastern time)* for the next talk of our *CMU AI seminar*,
>> sponsored by Morgan Stanley
>> <https://www.morganstanley.com/about-us/technology/>.
>>
>> To learn more about the seminar series or see the future schedule, please
>> visit the seminar website <http://www.cs.cmu.edu/~aiseminar/>.
>>
>> On 2/8, *Huan Zhang* (CMU) will be giving a talk titled "*How We Trust a
>> Black-box: Formal Verification of Deep Neural Networks*" to explain
>> state-of-the-art neural network verification techniques.
>>
>> *Title*: How We Trust a Black-box: Formal Verification of Deep Neural
>> Networks
>>
>> *Talk Abstract*: Neural networks have become a crucial element in modern
>> artificial intelligence. However, they are often black-boxes and can behave
>> unexpectedly and produce surprisingly wrong results. When applying neural
>> networks to mission-critical systems such as autonomous driving and
>> aircraft control, it is often desirable to formally verify trustworthiness
>> properties such as safety and robustness. In this talk, I will first
>> introduce the problem of neural network verification and the challenges
>> involved in guaranteeing neural network outputs under bounded input
>> perturbations. Then, I will discuss bound-propagation-based neural network
>> verification algorithms such as CROWN and beta-CROWN, which efficiently
>> propagate linear inequalities through the network in a backward manner. My
>> talk will highlight the state-of-the-art verification techniques used in our
>> α,β-CROWN (alpha-beta-CROWN) verifier, a scalable, powerful, and
>> GPU-accelerated neural network verifier that won the 2nd International
>> Verification of Neural Networks Competition (VNN-COMP’21) with the highest
>> total score.
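>>
>> For those curious about the flavor of these methods before the talk: the
>> core idea is to compute sound output bounds that hold for every input in a
>> small perturbation set. Below is a minimal, illustrative numpy sketch of
>> interval bound propagation, a simpler and looser relative of CROWN (the
>> network weights and input region are made up purely for illustration).
>> CROWN tightens such bounds by propagating linear, rather than constant,
>> bounds backward through the network.
>>
>>   import numpy as np
>>
>>   # Propagate interval bounds [lb, ub] through an affine layer W x + b.
>>   # Positive weights take the matching end of the input interval and
>>   # negative weights take the opposite end, so the result is a sound
>>   # enclosure of all possible outputs.
>>   def affine_bounds(W, b, lb, ub):
>>       W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
>>       return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b
>>
>>   # ReLU is monotone, so it can be applied directly to both bounds.
>>   def relu_bounds(lb, ub):
>>       return np.maximum(lb, 0), np.maximum(ub, 0)
>>
>>   # Toy two-layer network and an L-infinity ball around x0 (hypothetical).
>>   x0, eps = np.array([1.0, -0.5]), 0.1
>>   lb, ub = x0 - eps, x0 + eps
>>   W1, b1 = np.array([[1.0, -2.0], [0.5, 1.0]]), np.zeros(2)
>>   W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
>>
>>   lb, ub = relu_bounds(*affine_bounds(W1, b1, lb, ub))
>>   lb, ub = affine_bounds(W2, b2, lb, ub)
>>   print("certified output range:", lb, ub)  # holds for every x in the ball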
>>
>> *Speaker Bio*: Huan Zhang is a postdoctoral researcher at CMU,
>> supervised by Prof. Zico Kolter. He received his Ph.D. from UCLA in
>> 2020. Huan's research focuses on the trustworthiness of artificial
>> intelligence, especially on developing formal verification methods to
>> guarantee the robustness and safety of machine learning. Huan was
>> awarded an IBM Ph.D. Fellowship, and he led the winning team in the 2021
>> International Verification of Neural Networks Competition. Huan received
>> the 2021 AdvML Rising Star Award sponsored by MIT-IBM Watson AI Lab.
>>
>> *Zoom Link*:
>> https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09
>>
>> Thanks,
>> Asher Trockman
>>
>

