[AI Seminar] AI Seminar sponsored by Fortive -- Feb 09 (Zoom) -- Michael Auli (FAIR) -- Self-supervised Learning of Speech Representations with wav2vec

Shaojie Bai shaojieb at andrew.cmu.edu
Tue Feb 2 07:49:45 EST 2021


Dear all,

Happy new year, and welcome back!

We look forward to seeing you *next Tuesday (2/9)* from 12:00-1:00 PM (U.S.
Eastern time) for the first talk this semester of our *CMU AI seminar*,
sponsored by Fortive <https://careers.fortive.com/>.

To learn more about the seminar series, subscribe to its mailing list, or
see the future schedule, please visit the seminar website
<http://www.cs.cmu.edu/~aiseminar/>.

On 2/9, Michael Auli <https://michaelauli.github.io/> (Facebook AI
Research) will be giving a talk on "*Self-supervised Learning of Speech
Representations with wav2vec*."

*Title*: Self-supervised Learning of Speech Representations with wav2vec

*Talk Abstract*: Self-supervised learning has been a key driver of progress
in natural language processing and increasingly in computer vision. In this
talk, I will give an overview of the wav2vec line of work which explores
algorithms to learn good representations of speech audio solely from
unlabeled data. The resulting models can be fine-tuned for a specific task
using labeled data and enable speech recognition models with just 10
minutes of labeled speech audio by leveraging a large amount of unlabeled
speech. Our latest work, wav2vec 2.0, learns a vocabulary of speech units
obtained by quantizing the latent representation of the speech signal and
by solving a contrastive task defined over the quantization. We also
explored multilingual pre-training and recently released a model trained on
53 different languages.
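To give a feel for the contrastive task mentioned in the abstract, here is a minimal numpy sketch of an InfoNCE-style objective: a context vector must identify the true quantized latent among distractors drawn from other time steps. This is an illustrative toy, not the actual wav2vec 2.0 implementation; the function names, dimensions, and temperature are assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def contrastive_loss(context, positive, distractors, temperature=0.1):
    """InfoNCE-style loss: negative log-probability that the context
    vector picks out the true quantized latent among distractors."""
    candidates = [positive] + list(distractors)
    logits = np.array([cosine_sim(context, q) / temperature for q in candidates])
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[0]

# Toy example: the true latent is close to the context, distractors are random.
rng = np.random.default_rng(0)
dim = 16
context = rng.normal(size=dim)
positive = context + 0.1 * rng.normal(size=dim)
distractors = [rng.normal(size=dim) for _ in range(9)]
loss = contrastive_loss(context, positive, distractors)
```

Minimizing this loss over masked time steps pushes the context network's output toward the true quantized unit and away from the distractors, which is what lets the model learn useful representations without any transcriptions.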

*Speaker Bio*: Michael Auli is a scientist at Facebook AI Research in Menlo
Park, California. During his PhD, he worked on natural language processing
and parsing at the University of Edinburgh where he was advised by Adam
Lopez and Philipp Koehn. While at Microsoft Research, he did some of the
early work on neural machine translation and neural dialogue models. After
this, he led the team which developed convolutional sequence to sequence
models that were the first models to outperform recurrent neural networks
for neural machine translation. Currently, Michael works on semi-supervised
and self-supervised learning applied to natural language processing and
speech recognition. He led the teams that ranked first in several tracks of
the WMT news translation task in 2018 and 2019.

*Zoom Link*:
https://cmu.zoom.us/j/91735176241?pwd=WmhzdWdJT2IxN2Y4ZGt6WnduKzRDUT09


Thanks,
Shaojie Bai (MLD)