<div dir="ltr"><p class="MsoNormal">Dear all,<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><span style="font-family:Arial,sans-serif;color:black">We are excited to announce the second iteration of “<b>Sparsity in Neural Networks: Advancing Understanding and Practice</b>”, continuing on its successful inaugural version in 2021. The workshop will take place online on July 13th, 2022.</span></p><div><div><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><span style="font-family:Arial,sans-serif;color:black">The goal of the workshop is to bring together members of many communities working on neural network sparsity to share their perspectives and the latest cutting-edge research. We have assembled an incredible group of speakers, and <b>we are seeking contributed work from the community. </b></span><b><u></u><u></u></b></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><span style="font-family:Arial,sans-serif;color:black">Attendance is free: please register at the workshop website: </span><a href="https://www.sparseneural.net/" target="_blank"><span style="font-family:Arial,sans-serif">https://www.sparseneural.net/</span></a><u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><span style="font-family:Arial,sans-serif;color:black">You can submit your paper here:</span><u></u><u></u></p><p class="MsoNormal"><a href="https://openreview.net/group?id=Sparsity_in_Neural_Networks/2022/Workshop/SNN" target="_blank"><span style="font-family:Arial,sans-serif">https://openreview.net/group?id=Sparsity_in_Neural_Networks/2022/Workshop/SNN</span></a><span style="font-family:Arial,sans-serif;color:black">  (available by May 10th).</span><u></u><u></u></p><p class="MsoNormal" style="margin-right:0in;margin-bottom:6pt;margin-left:0in"><span style="font-size:16pt;font-family:Arial,sans-serif;color:black">Important Dates</span><b><span style="font-size:18pt"><u></u><u></u></span></b></p><ul type="disc" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">June 10th, 2022 [AOE]: Submit an abstract and supporting materials<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">June 30th, 2022: Notification of acceptance<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">July 13, 2022: Workshop<u></u><u></u></span></li></ul><p class="MsoNormal" style="margin-right:0in;margin-bottom:6pt;margin-left:0in"><span style="font-size:16pt;font-family:Arial,sans-serif;color:black">Topics (including but not limited to)</span><b><span style="font-size:18pt"><u></u><u></u></span></b></p><ul type="disc" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Algorithms for Sparsity<u></u><u></u></span></li></ul><ul type="disc" style="margin-bottom:0in;margin-top:0in"><ul type="circle" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Pruning both for post-training inference, and during training<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span 
style="font-family:Arial,sans-serif">Algorithms for fully sparse training (fixed or dynamic), including biologically inspired algorithms<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Algorithms for ephemeral (activation) sparsity<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:rgb(33,33,33);vertical-align:baseline"><span style="font-size:12pt;font-family:Roboto;color:black">Sparsely activated expert models</span><span style="font-size:13pt;font-family:Roboto"><u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:rgb(33,33,33);vertical-align:baseline"><span style="font-size:12pt;font-family:Roboto;color:black">Scaling laws for sparsity</span><span style="font-size:13pt;font-family:Roboto"><u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:rgb(33,33,33);vertical-align:baseline"><span style="font-size:12pt;font-family:Roboto;color:black">Sparsity in deep reinforcement learning</span><span style="font-size:13pt;font-family:Roboto"><u></u><u></u></span></li></ul></ul><ul type="disc" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Systems for Sparsity<u></u><u></u></span></li></ul><ul type="disc" style="margin-bottom:0in;margin-top:0in"><ul type="circle" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Libraries, kernels, and compilers for accelerating sparse computation<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Hardware with support for sparse computation<u></u><u></u></span></li></ul></ul><ul type="disc" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Theory and Science of Sparsity<u></u><u></u></span></li></ul><ul type="disc" style="margin-bottom:0in;margin-top:0in"><ul type="circle" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">When is overparameterization necessary (or not)<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Optimization behavior of sparse networks<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Representation ability of sparse networks<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Sparsity and generalization<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">The stability of sparse models<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Forgetting owing to sparsity, including fairness, privacy and bias concerns<u></u><u></u></span></li><li class="MsoNormal" 
style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Connecting neural network sparsity with traditional sparse dictionary modeling<u></u><u></u></span></li></ul></ul><ul type="disc" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Applications for Sparsity<u></u><u></u></span></li></ul><ul type="disc" style="margin-bottom:0in;margin-top:0in"><ul type="circle" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Resource-efficient learning at the edge or the cloud<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Data-efficient learning for sparse models<u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Communication-efficient distributed or federated learning with sparse models <u></u><u></u></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><span style="font-family:Arial,sans-serif">Graph and network science applications<u></u><u></u></span></li></ul></ul><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><span style="font-family:Arial,sans-serif;color:black">This workshop is non-archival, and it will not have proceedings. We permit under-review or concurrent submissions. Submissions will receive one of three possible decisions:</span><br><br><u></u><u></u></p><ul type="disc" style="margin-bottom:0in;margin-top:0in"><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><b><span style="font-family:Arial,sans-serif">Accept (Spotlight Presentation)</span></b><span style="font-family:Arial,sans-serif">. The authors will be invited to present the work during the main conference, with live Q&A.<b><u></u><u></u></b></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><b><span style="font-family:Arial,sans-serif">Accept (Poster Presentation)</span></b><span style="font-family:Arial,sans-serif">. The authors will be invited to present their work as a poster during the workshop’s interactive poster sessions.<b><u></u><u></u></b></span></li><li class="MsoNormal" style="margin-left:15px;color:black;vertical-align:baseline"><b><span style="font-family:Arial,sans-serif">Reject</span></b><span style="font-family:Arial,sans-serif">. 
Eligible Work

- The latest research innovations at all stages of the research process, from work-in-progress to recently published papers
  - We define "recent" as presented within one year of the workshop, i.e., the manuscript first became publicly available (on arXiv or elsewhere) no earlier than June 29th, 2021.
- Position or survey papers on any topic relevant to this workshop (see above)

Required materials

1. One mandatory abstract (250 words or fewer) describing the work
2. One or more of the following accompanying materials that describe the work in further detail. Higher-quality accompanying materials improve the likelihood of acceptance and of the work being spotlighted with an oral presentation.
   a. A poster (in PDF form) presenting results of work-in-progress.
   b. A link to a blog post (e.g., distill.pub, Medium) describing results.
   c. A workshop paper of approximately four pages presenting results of work-in-progress. Papers should be submitted using the NeurIPS 2021 format.
   d. A position paper with no page limit.
   e. A published paper in the form in which it was published. We will only consider papers published in the year prior to this workshop.

We hope you will join us!

Best regards,
On behalf of the organizing team (Ari, Atlas, Jonathan, Utku, Baharan, Siddhant, Elena, Chang, Trevor, Decebal)