The FOX team of the CRIStAL laboratory (UMR CNRS), Lille, France, is looking to recruit a PhD student, starting as soon as possible, on the following subject: Spiking Neural Networks for Video Analysis.

The FOX research group is part of the CRIStAL laboratory (University of Lille, CNRS), located in Lille, France. We focus on video analysis for human behavior understanding. Specifically, we develop spatio-temporal models of motion for tasks such as abnormal event detection, emotion recognition, and face alignment. We are also involved in IRCICA (CNRS), a research institute promoting multidisciplinary research. At IRCICA, we collaborate with computer scientists and experts in electronics engineering to create new models of neural networks that can be implemented on low-power hardware architectures. We recently designed state-of-the-art models for image recognition with single- and multi-layer unsupervised spiking neural networks (SNNs), and we were among the first to successfully apply unsupervised SNNs to modern computer vision datasets. We also developed our own SNN simulator to support experiments with SNNs on computer vision problems.

Our work is published in major journals (Pattern Recognition, IEEE Trans. on Affective Computing) and conferences (WACV, IJCNN) in the field.

Abstract: Spiking neural networks have recently been evaluated on classical image recognition tasks [1]. This work has highlighted their promising performance in this domain and identified ways to make them competitive with comparable deep learning approaches. In particular, it demonstrated the ability of SNN architectures to learn relevant patterns for static pattern recognition in an unsupervised manner. However, dealing with static images is not enough, and the computer vision community is increasingly interested in video analysis, for two reasons. First, video data is increasingly common and corresponds to a wide range of applications (video surveillance, audio-visual production, autonomous vehicles, etc.). Second, this data is richer than isolated static images, and thus offers the possibility of developing more effective systems, e.g. by using motion information. It is therefore recognized in the community that modeling motion in videos is more relevant than studying visual appearance alone for tasks such as action or emotion recognition. The next step for SNNs is to study their ability to model motion, rather than, or in addition to, image appearance.

The goal of the Ph.D. candidate will be to explore the use of SNNs for space-time modeling in videos.
This work will be targeted towards applications in human behavior understanding, especially action recognition. More specifically, the Ph.D. candidate is expected to:
* identify what issues may prevent space-time modeling with SNNs and how they can be circumvented;
* propose new supervised and unsupervised SNN models for motion modeling that are compatible with hardware implementations on ultra-low-power devices;
* evaluate the proposed models on standard datasets for video analysis.

Detailed subject: https://bit.ly/stssnnfox

Candidates must hold a Master's degree (or an equivalent degree) in Computer Science, Statistics, Applied Mathematics, or a related field. Experience in one or more of the following is a plus:
• image processing, computer vision;
• machine learning;
• bio-inspired computing;
• research methodology (literature review, experimentation…).

Candidates should have the following skills:
• good proficiency in English, both spoken and written;
• scientific writing;
• programming (experience in C++ is a plus, but not mandatory).

This PhD thesis will be funded in the framework of the ANVI-Luxant industrial chair. The general objective of the chair is to make scientific and technological progress in mastering emerging information-processing architectures, such as neuromorphic architectures, as an embedded artificial intelligence technology. The use cases will come from video protection in the context of retail and transportation.

The candidate will be funded for 3 years and is expected to defend the thesis and graduate by the end of the contract.
The monthly gross salary is around 2000 €, including benefits (health insurance, retirement fund, and paid vacation).

The position is located in Lille, France. With over 110,000 students, the metropolitan area of Lille is one of France's top student cities. The European Doctoral College Lille Nord-Pas de Calais is headquartered in Lille Métropole and includes 3,000 PhD students supported by university research laboratories. Lille is conveniently located in the European high-speed rail network: it lies on the Eurostar line to London (a 1 hour 20 minute journey), and the French TGV network puts it only 1 hour from Paris, 35 minutes from Brussels, and a short trip from other major centres in France such as Marseille and Lyon.
" href="mailto:marius.bilasco@univ-lille.fr">marius.bilasco@univ-lille.fr</a><span style="caret-color: rgb(0, 0, 0);" class="">) with subject [PhD_Luxant-ANVI]:</span><br style="caret-color: rgb(0, 0, 0);" class=""><span style="caret-color: rgb(0, 0, 0);" class="">* A cover letter.</span><br style="caret-color: rgb(0, 0, 0);" class=""><span style="caret-color: rgb(0, 0, 0);" class="">* A curriculum vitae, including a list of publications, if any.</span><br style="caret-color: rgb(0, 0, 0);" class=""><span style="caret-color: rgb(0, 0, 0);" class="">* Transcripts of grades of Master's degree.</span><br style="caret-color: rgb(0, 0, 0);" class=""><span style="caret-color: rgb(0, 0, 0);" class="">* The contact information of two references (and any letters if available).</span><br style="caret-color: rgb(0, 0, 0);" class=""><br style="caret-color: rgb(0, 0, 0);" class=""><span style="caret-color: rgb(0, 0, 0);" class="">We look forward to receiving your application as soon as possible, but no later than 26.3.2022</span></div></body></html>