    <div aria-live="assertive" id="magicdomid229" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">The
        FOX team from the CRIStAL laboratory (UMR CNRS), Lille France is
        looking to recruit a PhD student starting on <b class="">October 1st
          2022</b> on the following subject : </span><span class="b author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><b class="">Spatio-temporal
          data augmentation models for motion pattern learning using
          deep learning: applications to facial analysis in the wild</b></span></div>
    <div aria-live="assertive" id="magicdomid41" class="ace-line"><br class="">
    </div>
    <div aria-live="assertive" id="magicdomid42" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">The
        FOX research group is part of the CRIStAL laboratory (University
        of Lille, CNRS), located in Lille, France. We focus on video
        analysis for human behavior understanding. Specifically, we
        develop spatio-temporal models of motions for tasks such as
        abnormal event detection, emotion recognition, and face
        alignment. Our work is published in major journals (Pattern
        Recognition, IEEE Trans. on Affective Computing) and conferences
        (WACV, IJCNN).</span></div>
    <div aria-live="assertive" id="magicdomid43" class="ace-line"><br class="">
    </div>
    <div aria-live="assertive" id="magicdomid44" class="ace-line"><span class="b author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><b class="">Abstract</b></span><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">:
        Facial expression analysis is a well-studied field when dealing
        with segmented and constrained data captured in lab conditions.
        However, many challenges must still be addressed for building
        in-the-wild solutions that account for various motion
        intensities, strong head movements during expressions, the
        spotting of the subsequence containing the expression, partially
        occluded faces, etc. In recent years, learned features based on
        deep learning architectures were proposed in order to deal with
        these challenges. Deep learning is characterized by neural
        architectures that depend on a huge number of parameters. The
        convergence of these neural networks and the estimation of
        optimal parameters require large amounts of training data,
        especially when dealing with spatio-temporal data, particulary
        adequate for facial expression recognition. The quantity, but
        also the quality, of the data and its capacity to reflect the
        addressed challenges are key elements for training properly the
        networks. Augmenting the data artificially in an intelligent and
        controlled way is an interesting solution. The augmentation
        techniques identified in the literature are mainly focused on
        image augmentation and consist of scaling, rotation, and
        flipping operations, or they make use of more complex
        adversarial training. These techniques can be applied at the
        frame level, but there is a need for sequence level augmentation
        in order to better control the augmentation process and ensure
        the absence of temporal artifacts that might bias the learning
        process. The generation of dynamic frontal facial expressions
        has already been addressed in the literature. The goal of this
        Ph.D. is to conceive new space-time augmentation methods for
        unconstrained facial analysis (involving head movements,
        occultations, etc.). Attention should be paid in assessing the
        quality standards related to facial expression requirements:
        stability over time, absence of facial artifacts, etc. More
        specifically, the Ph.D. can</span></div>
    <div aria-live="assertive" id="magicdomid45" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">didate
        is expected to conceive augmentation architectures that address
        various challenges (motion intensities, head movements) while
        maintaining temporal stability and eliminating facial artifacts.</span></div>
    <div aria-live="assertive" id="magicdomid46" class="ace-line"><br class="">
    </div>
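To make the frame-level vs. sequence-level distinction concrete, here is a minimal Python/NumPy sketch (illustrative only, not taken from the project; all function names and parameters are assumptions). It samples one random flip and translation per clip and applies them identically to every frame, which is one simple way to augment a sequence without introducing the frame-to-frame jitter that independent per-frame augmentation causes:

    import numpy as np

    def augment_clip(clip, rng, max_shift=8):
        # Hypothetical sequence-level augmentation: sample ONE random
        # transform per clip and apply it identically to every frame, so
        # the augmented sequence stays temporally coherent.
        # clip: array of shape (T, H, W, C), a short facial-expression video.
        flip = rng.random() < 0.5                                 # one flip decision per clip
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)  # one shared translation
        out = np.empty_like(clip)
        for t in range(clip.shape[0]):
            frame = clip[t, :, ::-1] if flip else clip[t]         # same flip for all frames
            out[t] = np.roll(frame, (dy, dx), axis=(0, 1))        # same shift for all frames
        return out

    def augment_per_frame(clip, rng, max_shift=8):
        # Frame-level augmentation, shown for contrast: re-sampling the
        # transform at every frame is exactly what introduces the temporal
        # artifacts that a sequence-level scheme is meant to avoid.
        return np.concatenate([augment_clip(clip[t:t + 1], rng, max_shift)
                               for t in range(clip.shape[0])], axis=0)

    rng = np.random.default_rng(0)
    clip = rng.random((16, 112, 112, 3), dtype=np.float32)  # dummy 16-frame clip
    smooth = augment_clip(clip, rng)        # coherent over time
    jittery = augment_per_frame(clip, rng)  # flickers frame to frame

The project itself targets far richer transformations (head movements, occlusions, expression intensity), but the same principle applies: the random parameters should be sampled, and constrained, at the sequence level.
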
    <div aria-live="assertive" id="magicdomid47" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">More
        details are available here : </span><span class="url author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><a href="http://bit.ly/st_augm_motion" rel="noreferrer noopener" class="moz-txt-link-freetext">https://bit.ly/staugm_motion</a></span></div>
    <div aria-live="assertive" id="magicdomid48" class="ace-line"><br class="">
    </div>
    <div aria-live="assertive" id="magicdomid49" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">Candidates
        must hold a Master degree in Computer Science, Statistics,
        Applied Mathematics or a related field. Experience in one or
        more of the following is a plus:</span></div>
    <div aria-live="assertive" id="magicdomid50" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">•   
        image processing, computer vision;</span></div>
    <div aria-live="assertive" id="magicdomid51" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">•   
        machine learning;</span></div>
    <div aria-live="assertive" id="magicdomid52" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">•   
        research methodology (literature review, experimentation…).</span></div>
    <div aria-live="assertive" id="magicdomid53" class="ace-line"><br class="">
    </div>
    <div aria-live="assertive" id="magicdomid54" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">Candidates
        should have the following skills:</span></div>
    <div aria-live="assertive" id="magicdomid55" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">•   
        good proficiency in English, both spoken and written;</span></div>
    <div aria-live="assertive" id="magicdomid56" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">•   
        scientific writing;</span></div>
    <div aria-live="assertive" id="magicdomid57" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">•   
        programming (experience in C++ is a plus, but not mandatory).</span></div>
    <div aria-live="assertive" id="magicdomid58" class="ace-line"><br class="">
    </div>
    <div aria-live="assertive" id="magicdomid59" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">This
        PHD thesis will be funded in the framework of the </span><span class="b author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><b class="">AI_PhD@Lille</b></span><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">program.</span></div>
    <div aria-live="assertive" id="magicdomid60" class="ace-line"><span class="url author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><a href="http://www.isite-ulne.fr/index.php/en/phd-in-artificial-intelligence/" rel="noreferrer noopener" class="moz-txt-link-freetext">http://www.isite-ulne.fr/index.php/en/phd-in-artificial-intelligence/</a></span></div>
    <div aria-live="assertive" id="magicdomid61" class="ace-line"><span class="url author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><br class="">
      </span></div>
    <div aria-live="assertive" id="magicdomid63" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">The
        candidate will be funded for 3 years; he/she is expected to
        defend his/her thesis and graduate by the end of the contract.
        The monthly gross salary is around 2000€, including benefits
        (health insurance, retirement fund, and paid vacations).
        Additional financial support is expected in the framework of the
      </span><span class="b author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><b class="">AI_PhD@Lille</b></span><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">
        program.</span></div>
    <div aria-live="assertive" id="magicdomid64" class="ace-line"><br class="">
    </div>
    <div aria-live="assertive" id="magicdomid65" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">The
        position is located in </span><span class="b author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><b class="">Lille,
          France</b></span><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">.
        With over 110 000 students, the metropolitan area of Lille is
        one France's top education student cities. The European Doctoral
        College Lille Nord-Pas de Calais is headquartered in Lille
        Metropole and includes 3,000 PhD Doctorate students supported by
        university research laboratories. Lille has a convenient
        location in the European high-speed rail network. It lies on the
        Eurostar line to London (1:20 hour journey). The French TGV
        network also puts it only 1 hour from Paris, 35 mn from
        Brussels, and a short trips to other major centres in France
        such as Paris, Marseille and Lyon.</span></div>
    <div aria-live="assertive" id="magicdomid66" class="ace-line"><br class="">
    </div>
    <div aria-live="assertive" id="magicdomid234" class="ace-line"><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">We
        look forward to receiving your application</span><span class="i author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><i class="">
          as soon as possible</i></span><span class="author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt">,
        but no later than </span><span class="b author-a-hdz81zz70zkz84z8z69zygz68zz70zz70zz70zz80zt"><b class="">26.03.2021.</b></span></div>