<div dir="ltr"><div class="gmail_quote"><div dir="ltr"><div class="gmail_quote"><div dir="ltr"><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:12pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">The Speech and Hearing Research Group at the University of Sheffield, UK, currently has several open positions to work on topics related to speech and speech attribute recognition and analysis. We are looking for enthusiastic and highly skilled candidates who have a keen interest in machine learning as well as speech processing.</span></p><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:12pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">The positions are listed below. Some URLs linking to the formal application pages are included, and all positions will eventually appear on <a href="http://jobs.ac.uk" target="_blank">jobs.ac.uk</a> (only two are there at the moment). Please contact <a href="mailto:t.hain@sheffield.ac.uk" target="_blank">t.hain@sheffield.ac.uk</a> with any further questions. 
</span></p><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:12pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Best regards</span></p><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:12pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Thomas</span></p><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:12pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">--</span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Dr. 
Thomas Hain, Professor of Speech and Audio Technology</span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Head of Speech and Hearing Research, Sheffield</span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><a href="http://www.dcs.shef.ac.uk/~th" target="_blank">http://www.dcs.shef.ac.uk/~th</a></span></p><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:12pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">===========================</span></p><ol style="margin-top:0pt;margin-bottom:0pt"><li dir="ltr" style="list-style-type:decimal;font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" 
style="line-height:1.70526;margin-top:0pt;margin-bottom:0pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Research Associate in Deep Learning Methods for Speech Recognition </span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">To work on advanced methods for automatic speech recognition, with the aim of building transferable, multi-domain, end-to-end recognition systems that are adaptive and can learn over long periods of time. </span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">The position is funded by the Voicebase Centre for Speech and Language Technologies. Salaries are at University of Sheffield grade scales 7.1 - 7.9, and the opening is for 2 years, with the possibility of extension. 
More details can be found at: </span><a href="https://www.jobs.ac.uk/job/BKA871/research-associate-in-deep-learning-for-speech-recognition/" style="text-decoration:none" target="_blank"><span style="font-size:8pt;font-family:"Courier New";color:rgb(17,85,204);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:underline;vertical-align:baseline;white-space:pre-wrap">https://www.jobs.ac.uk/job/BKA<wbr>871/research-associate-in-deep<wbr>-learning-for-speech-recogniti<wbr>on/</span></a></p></li><li dir="ltr" style="list-style-type:decimal;font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:0pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Research Associate in Multilingual Speech Recognition </span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">To work on advanced deep learning methods for robust, high-performance speech recognition across many languages, and for the fast building of recognition systems for new languages. 
</span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">The position is funded by the Voicebase Centre for Speech and Language Technologies. Salaries are at University of Sheffield grade scales 7.1 - 7.9, and the opening is for 2 years, with the possibility of extension. More details can be found at: </span><a href="https://www.jobs.ac.uk/job/BKA854/research-associate-in-multi-lingual-automatic-speech-recognition/" style="text-decoration:none" target="_blank"><span style="font-size:8pt;font-family:"Courier New";color:rgb(17,85,204);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:underline;vertical-align:baseline;white-space:pre-wrap">https://www.jobs.ac.uk/job/BKA<wbr>854/research-associate-in-mult<wbr>i-lingual-automatic-speech-<wbr>recognition/</span></a></p></li><li dir="ltr" style="list-style-type:decimal;font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:0pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Research Associate in Audio-Visual Dubbing Technologies</span><span style="font-size:8pt;font-family:"Courier 
New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">To work on methods that increase the productivity of movie voice dubbing through advanced speech technology. The aim is to develop and implement methods that help media specialists in foreign-language dubbing produce spoken translations that fit the visual content precisely. The project work is in conjunction with Zoo Digital, who provide access to rich and very large media archives. </span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">The position is funded by the Innovate UK (UKRI) project MAUDIE. Salaries are at University of Sheffield grade scales 7.1 - 7.9, and the opening is for 2.5 years. 
</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:12pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Research Associate in Emotion and Speech Attribute Recognition </span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">To work on unified methodologies for extracting information from speech that allow inference of a variety of speech properties relating to the speaker's emotional state. Methods should be robust, and the work should keep further speech attributes in mind. 
</span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">The position is funded by the Innovate UK (UKRI) project MAUDIE and by IIKE in collaboration with Emotech Ltd. Salaries are at University of Sheffield grade scales 7.1 - 7.9, and the opening is for 12 months. </span></p></li></ol><h1 dir="ltr" style="line-height:1.70526;margin-top:10pt;margin-bottom:0pt;margin-left:18pt;padding:0pt 0pt 0pt 18pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Research Studentships </span></h1><br><ol style="margin-top:0pt;margin-bottom:0pt"><li dir="ltr" style="list-style-type:decimal;font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:0pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">PhD in Multilingual Speech Recognition </span><span style="font-size:8pt;font-family:"Courier 
New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">To work on novel deep learning methods for building recognition systems that can operate in several languages, methods for adapting recognition systems to languages with minimal data, methods that allow automatic inference of language attributes, or methods of unsupervised adaptation to new languages. </span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Funding for 3 years, in the context of the Voicebase Centre for Speech and Language Technologies. 
</span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:0pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">PhD in Direct waveform processing for noise robustness and multi-channel integration</span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">To work on methods for direct speech recognition from raw waveforms, with the aim of achieving robustness against environmental effects. Work should focus on telephony, but multichannel input is also of interest. 
</span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Funding for 3 years, in the context of the Voicebase Centre for Speech and Language Technologies. </span></p></li><li dir="ltr" style="list-style-type:decimal;font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><p dir="ltr" style="line-height:1.70526;margin-top:0pt;margin-bottom:12pt"><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">PhD in Integrated methods for diarisation </span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Diarisation is the process of recognising who speaks when, with as little prior knowledge as possible. To this day it is implemented as a multi-stage process. 
Research should focus on integrating what are now typically four stages of processing into a single end-to-end system. </span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap"><br class="m_-3786251044212691804m_7836190032590322808gmail-kix-line-break"></span><span style="font-size:8pt;font-family:"Courier New";color:rgb(34,34,34);background-color:rgb(255,255,255);font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Funded by UKRI for 3.5 years. Funding details to be announced soon. </span></p></li></ol><br></div>
</div><br></div>
</div><br></div>