<div dir="ltr">A gentle reminder that the talk will happen tomorrow (Tuesday) noon at NSH 3305.</div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Oct 14, 2017 at 9:00 AM,  <span dir="ltr"><<a href="mailto:ai-seminar-announce-request@cs.cmu.edu" target="_blank">ai-seminar-announce-request@cs.cmu.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send ai-seminar-announce mailing list submissions to<br>
        <a href="mailto:ai-seminar-announce@cs.cmu.edu">ai-seminar-announce@cs.cmu.edu</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
        <a href="https://mailman.srv.cs.cmu.edu/mailman/listinfo/ai-seminar-announce" rel="noreferrer" target="_blank">https://mailman.srv.cs.cmu.<wbr>edu/mailman/listinfo/ai-<wbr>seminar-announce</a><br>
or, via email, send a message with subject or body 'help' to<br>
        <a href="mailto:ai-seminar-announce-request@cs.cmu.edu">ai-seminar-announce-request@<wbr>cs.cmu.edu</a><br>
<br>
You can reach the person managing the list at<br>
        <a href="mailto:ai-seminar-announce-owner@cs.cmu.edu">ai-seminar-announce-owner@cs.<wbr>cmu.edu</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of ai-seminar-announce digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
   1. AI Seminar sponsored by Apple -- Xiaolong Wang -- October 17<br>
      (Adams Wei Yu)<br>
<br>
<br>
------------------------------<wbr>------------------------------<wbr>----------<br>
<br>
Message: 1<br>
Date: Sat, 14 Oct 2017 05:32:34 -0700<br>
From: Adams Wei Yu <<a href="mailto:weiyu@cs.cmu.edu">weiyu@cs.cmu.edu</a>><br>
To: <a href="mailto:ai-seminar-announce@cs.cmu.edu">ai-seminar-announce@cs.cmu.edu</a><br>
Subject: [AI Seminar] AI Seminar sponsored by Apple -- Xiaolong Wang<br>
        -- October 17<br>
Message-ID:<br>
        <<wbr>CABzq7epRGZ8vK1qBjUomXKynJDyD=<a href="mailto:WS2caQeNeaYLEiDBrB86w@mail.gmail.com"><wbr>WS2caQeNeaYLEiDBrB86w@mail.<wbr>gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Dear faculty and students,<br>
<br>
We look forward to seeing you next Tuesday, October 17, at noon in NSH 3305<br>
for AI Seminar sponsored by Apple. To learn more about the seminar series,<br>
please visit the AI Seminar webpage <<a href="http://www.cs.cmu.edu/~aiseminar/" rel="noreferrer" target="_blank">http://www.cs.cmu.edu/~<wbr>aiseminar/</a>>.<br>
<br>
On Tuesday, Xiaolong Wang <<a href="http://www.cs.cmu.edu/~xiaolonw/" rel="noreferrer" target="_blank">http://www.cs.cmu.edu/~<wbr>xiaolonw/</a>> will give the<br>
following talk:<br>
<br>
Title:  Learning Visual Representations for Object Detection<br>
<br>
Abstract:<br>
<br>
Object detection is at the center of many applications in computer vision. The<br>
current pipeline for training object detectors includes ConvNet pre-training<br>
and fine-tuning. In this talk, I am going to cover our work on<br>
self-supervised/unsupervised ConvNet pre-training as well as optimization<br>
strategies for fine-tuning.<br>
<br>
For ConvNet pre-training, instead of using millions of labeled images, we<br>
explored learning visual representations using supervision from the data<br>
itself, without any human labels, i.e., self-supervised learning.<br>
Specifically, we proposed to exploit different self-supervised approaches<br>
to learn representations invariant to (i) inter-instance variations (two<br>
objects of the same class should have similar features) and (ii)<br>
intra-instance variations (viewpoint, pose, deformation, illumination).<br>
Instead of combining the two approaches with multi-task learning, we organized<br>
the data with multiple variations in a graph and applied simple transitive<br>
rules to generate pairs of images with richer visual invariance for<br>
training. This approach brings object detection accuracy on the MS COCO<br>
dataset to within 1% of methods that use large amounts of labeled data<br>
(e.g., ImageNet).<br>
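<br>
(The graph-plus-transitive-rule idea from the abstract can be sketched roughly as follows. This is an illustrative toy, not the speaker's code; the edge labels "intra" and "inter" and the one-hop rule are assumptions for illustration: if A and B show the same instance and B and C show similar instances, treat (A, C) as an additional positive training pair.)<br>
<br>
```python
# Toy sketch of generating richer positive pairs by transitivity on a graph
# of images. "intra" edges link views of the same object instance; "inter"
# edges link visually similar instances of the same class. (Illustrative
# assumption, not the authors' implementation.)

def transitive_pairs(edges):
    """Given (a, b, kind) edges, derive extra positive pairs:
    if a~b is intra and b~c is inter, then (a, c) is also a positive pair."""
    intra, inter = {}, {}
    for a, b, kind in edges:
        d = intra if kind == "intra" else inter
        d.setdefault(a, set()).add(b)
        d.setdefault(b, set()).add(a)
    pairs = {tuple(sorted((a, b))) for a, b, _ in edges}
    # one-hop transitive rule: intra edge followed by inter edge
    for a, nbrs in intra.items():
        for b in nbrs:
            for c in inter.get(b, ()):
                if c != a:
                    pairs.add(tuple(sorted((a, c))))
    return pairs

edges = [("img1", "img2", "intra"),   # same instance, different viewpoint
         ("img2", "img3", "inter")]   # similar instances of the same class
print(transitive_pairs(edges))        # includes the derived pair (img1, img3)
```
<br>
The derived pair (img1, img3) carries both kinds of invariance at once, which is the "richer visual invariance" the abstract refers to.<br>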
<br>
For object detection fine-tuning, we proposed to train object detectors that<br>
are invariant to occlusions and deformations. The common solution is a<br>
data-driven strategy: collect large-scale datasets that contain object<br>
instances under different conditions. However, like categories, occlusions<br>
and object deformations also follow a long-tailed distribution. Some occlusions<br>
and deformations are so rare that they hardly ever occur, yet we want to learn<br>
a model that is invariant to such occurrences. We propose instead to learn an<br>
adversarial network that generates examples with occlusions and<br>
deformations. The goal of the adversary is to generate examples that are<br>
difficult for the object detector to classify. In our framework, both the<br>
original detector and the adversary are trained jointly. We show<br>
significant improvements on different datasets (VOC, COCO) with different<br>
network architectures (AlexNet, VGG16, ResNet101).<br>
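<br>
(The joint detector/adversary loop can be sketched with a toy example. Everything here is an illustrative assumption, not the speaker's method: a linear "detector", a brute-force "adversary" that simulates occlusion by zeroing the single feature whose removal most increases the detector's loss, and plain gradient descent on the hard example.)<br>
<br>
```python
# Toy sketch of joint adversarial training: the adversary occludes the part
# of the input that most increases the detector's loss; the detector then
# updates on that hard example. (Illustration only, not the talk's code.)
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(8)                        # toy linear "detector" weights

def loss_grad(w, x, y):
    """Logistic loss and its gradient for one example."""
    p = 1.0 / (1.0 + np.exp(-x @ w))   # sigmoid detection score
    loss = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return loss, (p - y) * x

def adversary(w, x, y):
    """Zero the single feature whose removal maximizes the detector loss
    (a stand-in for generating a hard occlusion)."""
    best, best_loss = x, -1.0
    for i in range(len(x)):
        x_occ = x.copy()
        x_occ[i] = 0.0                 # simulated occlusion of one feature
        l, _ = loss_grad(w, x_occ, y)
        if l > best_loss:
            best, best_loss = x_occ, l
    return best

for step in range(200):                # joint loop: adversary, then detector
    x = rng.normal(size=8)
    y = float(x[0] + x[1] > 0)         # toy ground-truth label
    x_hard = adversary(w, x, y)        # adversarially occluded example
    _, g = loss_grad(w, x_hard, y)
    w -= 0.1 * g                       # detector update on the hard example
```
<br>
Because the adversary always hides the feature the detector currently relies on most, the detector is pushed to spread its evidence across features, which is the intuition behind training against rare occlusions rather than waiting to observe them in data.<br>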
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://mailman.srv.cs.cmu.edu/pipermail/ai-seminar-announce/attachments/20171014/763f531f/attachment-0001.html" rel="noreferrer" target="_blank">http://mailman.srv.cs.cmu.<wbr>edu/pipermail/ai-seminar-<wbr>announce/attachments/20171014/<wbr>763f531f/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Subject: Digest Footer<br>
<br>
______________________________<wbr>_________________<br>
ai-seminar-announce mailing list<br>
<a href="mailto:ai-seminar-announce@cs.cmu.edu">ai-seminar-announce@cs.cmu.edu</a><br>
<a href="https://mailman.srv.cs.cmu.edu/mailman/listinfo/ai-seminar-announce" rel="noreferrer" target="_blank">https://mailman.srv.cs.cmu.<wbr>edu/mailman/listinfo/ai-<wbr>seminar-announce</a><br>
<br>
------------------------------<br>
<br>
End of ai-seminar-announce Digest, Vol 77, Issue 4<br>
******************************<wbr>********************<br>
</blockquote></div><br></div>