Connectionists: [jobs] PhD opportunity - Spatio-temporal augmentation models for motion pattern learning - AI_PhD at Lille, France
Ioan Marius BILASCO
marius.bilasco at univ-lille.fr
Thu Mar 24 08:42:39 EDT 2022
The FOX team from the CRIStAL laboratory (UMR CNRS), Lille, France, is
looking to recruit a PhD student starting on *October 1st, 2022* on the
following subject: *Spatio-temporal data augmentation models for motion
pattern learning using deep learning: applications to facial analysis in
the wild*
The FOX research group is part of the CRIStAL laboratory (University of
Lille, CNRS), located in Lille, France. We focus on video analysis for
human behavior understanding. Specifically, we develop spatio-temporal
models of motion for tasks such as abnormal event detection, emotion
recognition, and face alignment. Our work is published in major journals
(Pattern Recognition, IEEE Trans. on Affective Computing) and
conferences (WACV, IJCNN).
This PhD thesis will be funded in the framework of the
*AI_PhD at Lille* program.
http://www.isite-ulne.fr/index.php/en/phd-in-artificial-intelligence/
The candidate will be funded for 3 years; he/she is expected to defend
his/her thesis and graduate by the end of the contract. The monthly net
salary is around *1800*€, including benefits (health insurance,
retirement fund, and paid vacations).
The position is located in *Lille, France*. With over 110,000 students,
the metropolitan area of Lille is one of France's top student cities. The
European Doctoral College Lille Nord-Pas de Calais is headquartered in
Lille Metropole and includes 3,000 doctoral students supported by
university research laboratories. Lille has a convenient location in the
European high-speed rail network: it lies on the Eurostar line to London
(a 1 hour 20 minute journey), and the French TGV network puts it only
1 hour from Paris, 35 minutes from Brussels, and a short trip from other
major centres in France such as Marseille and Lyon.
*Abstract*: Facial expression analysis is a well-studied field when
dealing with segmented and constrained data captured in lab conditions.
However, many challenges must still be addressed for building
in-the-wild solutions that account for various motion intensities,
strong head movements during expressions, the spotting of the
subsequence containing the expression, partially occluded faces, etc. In
recent years, learned features based on deep learning architectures have
been proposed to deal with these challenges. Deep learning is
characterized by neural architectures that depend on a huge number of
parameters. The convergence of these neural networks and the estimation
of optimal parameters require large amounts of training data, especially
when dealing with spatio-temporal data, which is particularly well suited
to facial expression recognition. The quantity, but also the quality, of
the data and its capacity to reflect the addressed challenges are key to
properly training the networks. Augmenting the data artificially in an
intelligent and controlled way is therefore an attractive solution. The
augmentation techniques identified in the literature are mainly focused
on image augmentation and consist of scaling, rotation, and flipping
operations, or they make use of more complex adversarial training. These
techniques can be applied at the frame level, but there is a need for
sequence-level augmentation in order to better control the augmentation
process and ensure the absence of temporal artifacts that might bias the
learning process. The generation of dynamic frontal facial expressions
has already been addressed in the literature. The goal of this Ph.D. is
to conceive new spatio-temporal augmentation methods for unconstrained
facial analysis (involving head movements, occlusions, etc.). Particular
attention must be paid to the quality standards required for facial
expressions: stability over time, absence of facial artifacts, etc. More
specifically, the Ph.D. candidate is expected to design augmentation
architectures that address various challenges (motion intensities, head
movements) while maintaining temporal stability and avoiding facial
artifacts.
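To make the frame-level vs. sequence-level distinction concrete, here is a
minimal, hypothetical sketch (not part of the project's codebase) of
clip-consistent augmentation: a single randomly sampled transform is applied
to every frame of a sequence, so no spurious inter-frame motion is
introduced. All function names and parameter choices below are illustrative
assumptions.

    # Illustrative sketch only: frame-level vs. sequence-level augmentation
    # of a video clip stored as a NumPy array of shape (T, H, W, C).
    import numpy as np

    def sample_transform(rng):
        """Draw one random transform: horizontal flip + brightness gain."""
        flip = rng.random() < 0.5
        gain = rng.uniform(0.8, 1.2)
        def apply(frame):
            out = frame[:, ::-1] if flip else frame   # flip along width
            return np.clip(out * gain, 0.0, 1.0)
        return apply

    def augment_frame_level(clip, rng):
        """Naive per-frame augmentation: a new transform for each frame,
        which can introduce temporal artifacts (flicker, sudden flips)."""
        return np.stack([sample_transform(rng)(f) for f in clip])

    def augment_sequence_level(clip, rng):
        """Sequence-level augmentation: one transform shared by all frames,
        preserving the clip's motion pattern without temporal jitter."""
        t = sample_transform(rng)
        return np.stack([t(f) for f in clip])

    rng = np.random.default_rng(0)
    clip = rng.random((16, 64, 64, 3))        # dummy 16-frame clip
    aug = augment_sequence_level(clip, rng)   # temporally consistent result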
More details are available here: https://bit.ly/staugm_motion
Candidates must hold a Master's degree in Computer Science, Statistics,
Applied Mathematics or a related field. Experience in one or more of the
following is a plus:
• image processing, computer vision;
• machine learning;
• research methodology (literature review, experimentation…).
Candidates should have the following skills:
• good proficiency in English, both spoken and written;
• scientific writing;
• programming (experience in C++ is a plus, but not mandatory).
We look forward to receiving your application as soon as possible.