<div dir="ltr"><div><p class="MsoNormal" style="font-size:12.8px"><span style="color:black">Hi,</span></p><p class="MsoNormal" style="font-size:12.8px"><span style="color:black"> <u></u><u></u></span></p><p class="MsoNormal" style=""><i style="font-size:12.8px"><span style="color:black">I’d like to share the call for papers for our ICML 2020 Workshop on </span></i><span style="font-size:12.8px">Inductive Biases, Invariances and Generalization in RL</span><i style="font-size:12.8px"><span style="color:black"> (BIG@ICML). </span></i><i style="font-size:12.8px"><span style="color:black">Openreview link:  </span></i><a href="https://openreview.net/group?id=ICML.cc/2020/Workshop/BIG" style="font-size:12.8px">https://openreview.net/group?id=ICML.cc/2020/Workshop/BIG</a><i style="font-size:12.8px;color:rgb(0,0,0)"> Workshop website: </i><a href="https://biases-invariances-generalization.github.io/" style="font-size:small">https://biases-invariances-generalization.github.io/</a>.<span style="font-size:12.8px;color:black"> </span></p><p class="MsoNormal" style="font-size:12.8px"><b><span style="color:black"><br></span></b></p><p class="MsoNormal" style="font-size:12.8px"><b><span style="color:black">TLDR:</span></b><span style="color:black"> </span><span style="color:black"> <u></u><u></u></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="color:black"><br></span></p><p class="MsoNormal" style="font-size:12.8px"><span style="background-color:transparent;color:rgb(33,33,33);font-family:Arial;font-size:11.5pt;white-space:pre-wrap">The question of generalization in reinforcement learning is essential to the field’s future both in theory and in practice. However there are still open questions about the right way to think about generalization in RL, the right way to formalize the problem, and the most important tasks. This workshop would help to address this issue by bringing together researchers from different backgrounds to discuss these challenges. 
In our workshop we hope to explore research and new ideas on topics related to inductive biases, invariances and generalization, including:

- What are efficient ways to learn inductive biases from data?
- Which inductive biases are most suitable for achieving generalization?
- Can we make the problem of generalization, in particular for RL, more concrete and establish standard terms for discussing it?
- Causality and generalization, especially in RL
style="list-style-type:disc;font-size:13.5pt;font-family:Arial;color:rgb(33,37,41);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre;margin-left:15pt"><p dir="ltr" style="line-height:1.38;margin-right:15pt;background-color:rgb(255,255,255);margin-top:0pt;margin-bottom:0pt"><span style="font-size:11.5pt;font-family:Arial;color:rgb(33,33,33);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Model-based RL and generalization.</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:13.5pt;font-family:Arial;color:rgb(33,37,41);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre;margin-left:15pt"><p dir="ltr" style="line-height:1.38;margin-right:15pt;background-color:rgb(255,255,255);margin-top:0pt;margin-bottom:0pt"><span style="font-size:11.5pt;font-family:Arial;color:rgb(33,33,33);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Can we create models that are robust visual environments, assuming all the underlying mechanics are the same. Should this count as generalization or transfer learning?</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:13.5pt;font-family:Arial;color:rgb(33,37,41);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre;margin-left:15pt"><p dir="ltr" style="line-height:1.38;margin-right:15pt;background-color:rgb(255,255,255);margin-top:0pt;margin-bottom:0pt"><span style="font-size:11.5pt;font-family:Arial;color:rgb(33,33,33);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">Can we create a theoretical understanding of generalization in RL, and understand how it is related to the well developed ideas from statistical learning theory ?</span></p></li><li dir="ltr" style="list-style-type:disc;font-size:13.5pt;font-family:Arial;color:rgb(33,37,41);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre;margin-left:15pt"><p dir="ltr" style="line-height:1.38;margin-right:15pt;background-color:rgb(255,255,255);margin-top:0pt;margin-bottom:30pt"><span style="font-size:11.5pt;font-family:Arial;color:rgb(33,33,33);background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap">What is the difference between a prediction that is made with a causal model and that with a non‐causal model?</span></p></li></ul><p id="gmail-m_-4731390898648918588gmail-m_-4762111525990004790gmail-m_-7375365895457821852gmail-m_-9032409063768286085gmail-m_-2062849325793277360m_7236721635789291812h.p_5XmFxHrcbkDJ" style="font-size:12.8px"><span style="color:black">We will accept both short paper (4 pages) and long paper (8 pages) submissions (not including references). </span><font color="#000000">A few </font><span style="color:black">papers</span><font color="#000000"> may be selected as oral presentations, and the other accepted </font><span style="color:black">papers</span><font color="#000000"> will be presented in a poster session. 
There will be no proceedings for this workshop; however, upon the authors' request, accepted contributions will be made available on the workshop website. Submissions are double-blind, peer-reviewed on OpenReview (https://openreview.net/group?id=ICML.cc/2020/Workshop/BIG), and open to already published work.

Paper Submission Deadline: June 10th

Website: https://biases-invariances-generalization.github.io/

Best,
ICML 2020 BIG workshop organizers