Connectionists: The 2nd Sparse NN workshop - Call For Papers!

Baharan Mirzasoleiman b.mirzasoleiman at gmail.com
Tue May 10 12:47:32 EDT 2022


Dear all,



We are excited to announce the second iteration of “*Sparsity in Neural
Networks: Advancing Understanding and Practice*”, following its
successful inaugural edition in 2021. The workshop will take place online
on July 13th, 2022.



The goal of the workshop is to bring together members of the many communities
working on neural network sparsity to share their perspectives and the
latest cutting-edge research. We have assembled an incredible group of
speakers, and *we are seeking contributed work from the community.*



Attendance is free; please register at the workshop website:
https://www.sparseneural.net/



You can submit your paper here:

https://openreview.net/group?id=Sparsity_in_Neural_Networks/2022/Workshop/SNN
(the submission site opens on May 10th).

Important Dates

   - June 10th, 2022 [AoE]: Submit an abstract and supporting materials
   - June 30th, 2022: Notification of acceptance
   - July 13th, 2022: Workshop

Topics (including but not limited to)

   - Algorithms for Sparsity
      - Pruning, both for post-training inference and during training
      - Algorithms for fully sparse training (fixed or dynamic), including
      biologically inspired algorithms
      - Algorithms for ephemeral (activation) sparsity
      - Sparsely activated expert models
      - Scaling laws for sparsity
      - Sparsity in deep reinforcement learning

   - Systems for Sparsity
      - Libraries, kernels, and compilers for accelerating sparse computation
      - Hardware with support for sparse computation

   - Theory and Science of Sparsity
      - When is overparameterization necessary (or not)?
      - Optimization behavior of sparse networks
      - Representational ability of sparse networks
      - Sparsity and generalization
      - The stability of sparse models
      - Forgetting owing to sparsity, including fairness, privacy, and bias
      concerns
      - Connecting neural network sparsity with traditional sparse
      dictionary modeling

   - Applications of Sparsity
      - Resource-efficient learning at the edge or in the cloud
      - Data-efficient learning for sparse models
      - Communication-efficient distributed or federated learning with
      sparse models
      - Graph and network science applications


This workshop is non-archival and will not have proceedings. We permit
submissions that are under review at, or concurrently submitted to, other
venues. Submissions will receive one of three possible decisions:


   - *Accept (Spotlight Presentation)*. The authors will be invited to
   present their work during the main workshop session, with live Q&A.
   - *Accept (Poster Presentation)*. The authors will be invited to present
   their work as a poster during the workshop’s interactive poster sessions.
   - *Reject*. The paper will not be presented at the workshop.

Eligible Work

   - *The latest research innovations* at all stages of the research
   process, from work in progress to recently published papers
      - We define “recent” as presented within one year of the workshop,
      i.e., the manuscript first became publicly available on arXiv or
      elsewhere no earlier than June 29th, 2021.

   - *Position or survey papers* on any topics relevant to this workshop
   (see above)



*Required materials*

   1. *One mandatory abstract* (250 words or fewer) describing the work
   2. *One or more of the following accompanying materials* that describe
   the work in further detail. Higher-quality accompanying materials
   improve the likelihood of acceptance and of the work being selected
   for a spotlight presentation.
      1. *A poster* (in PDF form) presenting results of work in progress.
      2. *A link to a blog post* (e.g., distill.pub, Medium) describing
      results.
      3. *A workshop paper* of approximately four pages presenting
      results of work in progress. Papers should be submitted using the
      NeurIPS 2021 format.
      4. *A position paper* with no page limit.
      5. *A published paper* in the form in which it was published. We
      will only consider papers published in the year prior to this
      workshop.



We hope you will join us!



Best Regards,

On behalf of the organizing team (Ari, Atlas, Jonathan, Utku, Baharan,
Siddhant, Elena, Chang, Trevor, Decebal)