<div dir="ltr"><div class="gmail_quote"></div><div class="gmail_quote"><div dir="ltr"><div class="gmail_quote"><div><div class="gmail_quote">
<div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">Dear all,
<div><div style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<span style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">could you please share to anybody who might be interested in the following internship position ?</span></div><div><span style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><br></span></div><div><span style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">---</span></div><div><span style="font-family:Calibri,Arial,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><br></span></div><div><div>ENSTA, IP Paris is looking to hire a talented master student in machine learning on a
collaborative project with Ecole Polytechnique.

Laboratory: U2IS, ENSTA Paris (http://u2is.ensta-paris.fr/) & LIX, Ecole Polytechnique
The intern will be part of the U2IS laboratory of ENSTA Paris and will collaborate with LIX, Ecole Polytechnique.

Duration: 6 months, flexible dates

Contact: NGUYEN Sao Mai: nguyensmai@gmail.com

Context:
Fully autonomous robots have the potential to impact real-life applications, such as assisting elderly people. Autonomous robots must deal with uncertain and continuously changing environments, where it is not possible to pre-program the robot's tasks. Instead, the robot must continuously learn new tasks and learn how to perform more complex tasks by combining simpler ones (i.e., a task hierarchy). This problem is called lifelong learning of hierarchical tasks [5]. Hierarchical Reinforcement Learning (HRL) is a recent approach for learning to solve long and complex tasks by decomposing them into simpler subtasks. HRL can be regarded as an extension of the standard Reinforcement Learning (RL) setting, as it features high-level agents that select subtasks to perform and low-level agents that learn actions or policies to achieve them.

Summary:
This internship studies the application of Hierarchical Reinforcement Learning methods in robotics. Deploying autonomous robots in real-world environments typically introduces multiple difficulties, among which are the size of the observable space and the length of the required tasks.
Reinforcement Learning typically helps agents solve decision-making problems by autonomously discovering successful behaviours and learning them, but these methods are known to struggle with long and complex tasks. Hierarchical Reinforcement Learning extends this paradigm by decomposing these problems into easier subproblems, with high-level agents determining which subtasks need to be accomplished and low-level agents learning to achieve them.
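To make the high-level/low-level split concrete, below is a minimal, hypothetical sketch of a goal-conditioned two-level control loop in the spirit of HIRO [1]. It only illustrates the architecture and is not project code: the HighLevelPolicy and LowLevelPolicy classes are random placeholders for learned policies, and the Gymnasium environment name is arbitrary.

# Sketch of a two-level (goal-conditioned) HRL control loop in the spirit of [1].
# HighLevelPolicy / LowLevelPolicy are placeholders standing in for learned policies.
import numpy as np
import gymnasium as gym

class HighLevelPolicy:
    """Picks a subgoal (here: a desired change of observation) every c steps."""
    def select_goal(self, obs):
        return np.random.uniform(-1.0, 1.0, size=obs.shape)  # placeholder for a learned actor

class LowLevelPolicy:
    """Outputs primitive actions intended to reach the current subgoal."""
    def select_action(self, obs, goal, action_space):
        return action_space.sample()  # placeholder: would condition on (obs, goal)

def intrinsic_reward(obs, goal, next_obs):
    # Low-level reward: negative distance between the reached state and the subgoal.
    return -float(np.linalg.norm(obs + goal - next_obs))

def run_episode(env, high, low, c=10, max_steps=500):
    obs, _ = env.reset()
    goal = high.select_goal(obs)
    extrinsic_return = 0.0
    for t in range(max_steps):
        if t % c == 0:                       # the high level acts on a slower timescale
            goal = high.select_goal(obs)
        action = low.select_action(obs, goal, env.action_space)
        next_obs, reward, terminated, truncated, _ = env.step(action)
        r_low = intrinsic_reward(obs, goal, next_obs)  # would train the low level
        extrinsic_return += reward                     # would train the high level
        obs = next_obs
        if terminated or truncated:
            break
    return extrinsic_return

if __name__ == "__main__":
    env = gym.make("MountainCarContinuous-v0")  # any continuous-control task works here
    print(run_episode(env, HighLevelPolicy(), LowLevelPolicy()))

The key design point, as in [1], is that the two levels operate at different timescales and are trained from different reward signals: the low level from an intrinsic goal-reaching reward, the high level from the environment's own reward.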
During this internship, the intern will:
• Get acquainted with the state of the art in Hierarchical Reinforcement Learning, including the most notable algorithms [1, 2, 3], the challenges they solve and their limitations.
• Reimplement some of these approaches and validate their results in simulated robotics environments such as iGibson [4] (a minimal rollout sketch is given after the references).
• Establish an experimental comparison of these methods with respect to a research hypothesis.
The intern is also expected to collaborate with a PhD student whose work is closely related to this topic.

References:
[1] Nachum, O.; Gu, S.; Lee, H.; and Levine, S. 2018. Data-Efficient Hierarchical Reinforcement Learning. In Bengio, S.; Wallach, H. M.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; and Garnett, R., eds., Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, 3307-3317.
[2] Kulkarni, T. D.; Narasimhan, K.; Saeedi, A.; and Tenenbaum, J. 2016. Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. In Lee, D.; Sugiyama, M.; Luxburg, U.; Guyon, I.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.
[3] Vezhnevets, A. S.; Osindero, S.; Schaul, T.; Heess, N.; Jaderberg, M.; Silver, D.; and Kavukcuoglu, K. 2017. FeUdal Networks for Hierarchical Reinforcement Learning. CoRR, abs/1703.01161.
[4] Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, C. Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei, and Silvio Savarese. iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks, 2021. URL https://arxiv.org/abs/2108.0327
[5] Nguyen, S. M.; Duminy, N.; Manoury, A.; Duhaut, D.; and Buche, C. 2021. Robots Learn Increasingly Complex Tasks with Intrinsic Motivation and Automatic Curriculum Learning. KI - Künstliche Intelligenz, 35:81-90.
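As a concrete starting point for the validation step above, here is a minimal random-rollout smoke test against an iGibson environment. This is only a hedged sketch: the config file name is a hypothetical placeholder, and the exact iGibsonEnv constructor arguments and step() return signature should be checked against the installed iGibson version.

# Sketch: random-rollout smoke test in an iGibson environment [4], to be replaced
# by the reimplemented HRL agent. Config name and constructor details are assumptions;
# verify them against the installed iGibson release.
from igibson.envs.igibson_env import iGibsonEnv

def random_rollout(config_file, episodes=3, max_steps=100):
    env = iGibsonEnv(config_file=config_file, mode="headless")  # "headless" avoids opening a GUI
    for ep in range(episodes):
        state = env.reset()
        episode_return = 0.0
        for _ in range(max_steps):
            action = env.action_space.sample()            # replace with the HRL agent's action
            state, reward, done, info = env.step(action)  # classic 4-tuple Gym-style API
            episode_return += reward
            if done:
                break
        print(f"episode {ep}: return = {episode_return:.2f}")
    env.close()

if __name__ == "__main__":
    # Hypothetical config file; iGibson ships example YAML configs (e.g. robot navigation tasks).
    random_rollout("turtlebot_nav.yaml")

Once this loop runs, the sampled random action can be swapped for the hierarchical agent's output, which makes it straightforward to compare the methods of [1, 2, 3] under identical conditions.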
Nguyen Sao Mai
nguyensmai@gmail.com
Researcher in Cognitive Developmental Robotics
https://doi.org/10.1155/2022/5667223
http://nguyensmai.free.fr | YouTube: http://www.youtube.com/user/nguyensmai | Twitter: https://twitter.com/nguyensmai | ResearchGate: https://www.researchgate.net/profile/Sao_Mai_Nguyen | HAL: https://hal.inria.fr/search/index/?q=%2A&authIdHal_s=sao-mai-nguyen&sort=producedDate_tdate+desc