[CMU AI Seminar] Special! Oct 4 at 12pm (GHC 8102 & Zoom) -- Felix Petersen (Stanford) -- Differentiable Logic Gate Networks and Sorting Networks -- AI Seminar sponsored by SambaNova Systems
Asher Trockman
ashert at cs.cmu.edu
Fri Oct 4 08:46:42 EDT 2024
Reminder: this is happening today at noon!
On Wed, Oct 2, 2024 at 11:58 AM Asher Trockman <ashert at cs.cmu.edu> wrote:
> Dear all,
>
> We look forward to seeing you *this Friday (10/4)* from *12:00-1:00 PM
> (U.S. Eastern time)* for a special installment of this semester's
> *CMU AI Seminar*, sponsored by SambaNova Systems <https://sambanova.ai/>.
> The seminar will be held in GHC 8102 *with pizza provided* and will be
> streamed on Zoom.
>
> *📨 If you would like to meet with Felix this week, please reply to this
> email for the signup sheet.*
>
> To learn more about the seminar series or to see the future schedule,
> please visit the seminar website <http://www.cs.cmu.edu/~aiseminar/>.
>
> On *this Friday (10/4)*, *Felix Petersen* (Stanford) will be giving a
> talk titled *"Differentiable Logic Gate Networks and Sorting Networks"* to
> explain how such networks enable *optimizing circuits for extremely
> efficient ML inference* and improving weakly- and self-supervised losses.
>
> *Title*: Differentiable Logic Gate Networks and Sorting Networks
>
> *Talk Abstract*: The ability to differentiate conventionally
> non-differentiable algorithms and operations is crucial for many advanced
> machine learning tasks. After an introduction to the topic, we will dive
> into differentiable sorting networks for learning-to-rank, top-k
> classification learning, and self-supervised learning. In the second
> half, we will cover differentiable logic gate networks, which enable
> directly optimizing logical circuits under the paradigm of "the hardware is
> the model". Compared to the SOTA among the most efficient hardened models, we
> achieve chip area reductions ranging from 16x to over 200x, and latency
> reductions ranging from 130x to 31,000x. Compared to Int8 inference on
> GPUs, this corresponds to energy savings of around 7 orders of magnitude.
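>
> As a rough intuition for both relaxations (a minimal sketch in plain
> Python, not the speaker's implementation; the temperature parameter tau
> and the scalar setting are simplifying assumptions):
>
> import math
>
> def sigmoid(x, tau=1.0):
>     # Logistic relaxation; smaller tau makes the comparison sharper,
>     # and tau -> 0 recovers the exact min/max.
>     return 1.0 / (1.0 + math.exp(-x / tau))
>
> def soft_compare_swap(a, b, tau=1.0):
>     # Differentiable comparator for a sorting network: a convex mix
>     # of the inputs that approaches (min(a, b), max(a, b)).
>     w = sigmoid(b - a, tau)            # w ~ 1 when a < b
>     lo = w * a + (1.0 - w) * b
>     hi = (1.0 - w) * a + w * b
>     return lo, hi
>
> def soft_and(a, b):
>     # Probabilistic AND on values in [0, 1]; differentiable in a and b.
>     return a * b
>
> def soft_or(a, b):
>     # Probabilistic OR on values in [0, 1].
>     return a + b - a * b
>
> print(soft_compare_swap(3.0, 1.0, tau=0.1))   # ~ (1.0, 3.0)
> print(soft_and(0.9, 0.8), soft_or(0.9, 0.8))  # ~ 0.72, 0.98
>
> Training a logic gate network then amounts, roughly, to learning a
> softmax distribution over the 16 possible two-input gates per neuron
> and hardening each to its most likely gate at inference time.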
>
> *Speaker Bio:* Felix Petersen <http://petersen.ai> is a postdoctoral
> researcher at Stanford University in Stefano Ermon's group; he primarily
> researches differentiable relaxations in machine learning, with
> applications to extremely efficient inference and weakly-supervised
> learning. He runs the Differentiable Almost Everything workshop, and has
> previously worked at the University of Konstanz, TAU, DESY, PSI, and CERN.
>
> *In person: *GHC 8102
> *Zoom Link*:
> https://cmu.zoom.us/j/99510233317?pwd=ZGx4aExNZ1FNaGY4SHI3Qlh0YjNWUT09
>
> Thanks,
> Asher Trockman
>