<html><body><div style="font-family: arial, helvetica, sans-serif; font-size: 12pt; color: #000000"><div data-marker="__HEADERS__"><div><strong data-mce-style="box-sizing: border-box; font-weight: bold;" style="box-sizing: border-box; font-weight: bold;">Automatic classification of hate speech posted on the Internet using deep learning</strong></div><div><strong data-mce-style="box-sizing: border-box; font-weight: bold;" style="box-sizing: border-box; font-weight: bold;"><br data-mce-bogus="1"></strong></div><div><strong data-mce-style="box-sizing: border-box; font-weight: bold;" style="box-sizing: border-box; font-weight: bold;"><br data-mce-bogus="1"></strong></div><div><strong data-mce-style="box-sizing: border-box; font-weight: bold;" style="box-sizing: border-box; font-weight: bold;">Supervisors: </strong>Irina Illina, MdC, HDR, Dominique Fohr, CR CNRS</div><div><strong data-mce-style="box-sizing: border-box; font-weight: bold;" style="box-sizing: border-box; font-weight: bold;">Team:</strong> Multispeech, LORIA-INRIA, France</div><div><strong data-mce-style="box-sizing: border-box; font-weight: bold;" style="box-sizing: border-box; font-weight: bold;">Contact:</strong> illina@loria.fr, dominique.fohr@loria.fr</div></div><div data-marker="__QUOTED_TEXT__"><div><br></div><div><strong data-mce-style="box-sizing: border-box; font-weight: bold;" style="box-sizing: border-box; font-weight: bold;">Duration of PhD Thesis</strong>: 3 years<br></div><div style="font-family: arial, helvetica, sans-serif; font-size: 12pt; color: #000000;" data-mce-style="font-family: arial, helvetica, sans-serif; font-size: 12pt; color: #000000;"><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">Deadline to apply</strong>: August 15th, 2019</div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">Required skills: </strong>background in 
statistics, natural language processing, and programming skills (Perl, Python). Candidates should email a detailed CV together with copies of their diplomas</div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;"><br></strong></div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">Keywords:</strong> hate speech, social media, natural language processing.</div><br><div>The rapid development of the Internet and social networks has brought great benefits to people in their daily lives. Unfortunately, these benefits have a dark side: they have also enabled an <strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">increase in hate speech and terrorism, which are among the most widespread and serious threats on a global scale.</strong> Hate speech is a form of offensive communication that expresses an ideology of hatred, often through stereotypes. It can target societal characteristics such as gender, religion, race, or disability, and it is the subject of various national and international legal frameworks. Hate speech is closely related to terrorism and often follows a terrorist incident or event.</div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;"><br></strong></div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;"><br></strong></div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">Social networks are incredibly popular today.</strong> Twitter, LinkedIn, Facebook and YouTube have become standard tools for communicating ideas, beliefs and feelings. 
Only a small percentage of users exploit these networks for harmful activities such as hate speech and terrorism, but the impact of this small minority is <strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">extremely damaging</strong>. For years, social media companies such as Twitter, Facebook and YouTube have invested hundreds of millions of dollars each year in detecting, classifying and moderating hate. However, these efforts are mainly based on <strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">manually reviewing the content</strong> to identify and remove offensive material, which <strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">is extremely expensive.</strong></div><br><div>This thesis aims at <strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">designing automatic and evolving methods for the classification of hate speech in social media.</strong> Despite the studies already published on this subject, the results show that the task remains very difficult. We will use semantic content analysis methodologies from natural language processing (NLP) together with methodologies based on deep neural networks (DNNs), which have driven the recent revolution in artificial intelligence. During this thesis, <strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">we will develop a research protocol to classify the texts of Internet messages according to their character: hateful, aggressive, insulting, ironic, neutral, etc</strong>. 
This type of problem falls within the framework of <strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">multi-label classification</strong>.</div><br><div>In addition, the problem <strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">of word obfuscation in hate messages will need to be addressed</strong>. People who want to post hate speech on the Internet know that they risk being censored by rudimentary automatic moderation systems, so they try to disguise their words by deliberately altering their spelling.</div><br><div>Among the crucial points of this thesis are the choice of the DNN architecture and a relevant representation of the data, i.e. the text of the Internet message. The system designed will be validated on real streams from social networks.</div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;"><br></strong></div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">Skills</strong></div><br><div>Strong background in mathematics, machine learning (DNN), statistics.</div><br><div>The following profiles are also welcome: strong experience with natural language processing.</div><br><div>Excellent English writing and speaking skills are required in any case.</div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;"><br></strong></div><div><strong style="box-sizing: border-box; font-weight: bold;" data-mce-style="box-sizing: border-box; font-weight: bold;">References:</strong></div><div><em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">Gröndahl, T., Pajola, L., Juuti, M., Conti, M., Asokan, N. (2018).</em> “All You Need is ‘Love’: Evading Hate Speech 
Detection”<em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">. arXiv preprint arXiv:1808.09115</em></div><div><em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">Wiegand, M., Klakow, D. (2008).</em> Optimizing Language Models for Polarity Classification. <em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">In Proceedings of ECIR</em>, pp. 612-616.</div><div><em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">Wiegand, M., Ruppenhofer, J. (2015).</em> Opinion Holder and Target Extraction based on the Induction of Verbal Categories. <em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">In Proceedings of CoNLL</em>, pp. 215-225.</div><div><em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">Wiegand, M., Ruppenhofer, J., Schmidt, A., Greenberg, C. (2018).</em> Inducing a Lexicon of Abusive Words – A Feature-Based Approach. <em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.</em></div><div><em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">Wiegand, M., Wolf, M., Ruppenhofer, J. 
(2017).</em> Negation Modeling for German Polarity Classification. <em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">In Proceedings of GSCL.</em></div><div><em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">Zhang, Z., Luo, L. (2018).</em> Hate Speech Detection: A Solved Problem? The Challenging Case of Long Tail on Twitter. <em style="box-sizing: border-box;" data-mce-style="box-sizing: border-box;">arxiv.org/pdf/1803.03662</em></div></div><br></div><div><br></div><div data-marker="__SIG_POST__">-- <br></div><div>Irina Illina<br><br>Associate Professor<br>Lorraine University<br>LORIA-INRIA<br>Multispeech Team<br>Office C147<br>Building C<br>615 rue du Jardin Botanique<br>54600 Villers-les-Nancy Cedex<br>Tel: +33 3 54 95 84 90</div></div></body></html>