Connectionists: The 3rd Sparse NN workshop [at ICLR'23] - Call For Papers!
Baharan Mirzasoleiman
b.mirzasoleiman at gmail.com
Sat Jan 7 01:46:32 EST 2023
Dear all,
We are excited to announce the third iteration of our workshop “Sparsity in
Neural Networks: On practical limitations and tradeoffs between
sustainability and efficiency” at ICLR’23, building on its successful 2021
and 2022 editions. The workshop will take place in a hybrid format on May
5th, 2023 in Kigali, Rwanda.
The goal of the workshop is to bring together members of many communities
working on neural network sparsity to share their perspectives and the
latest cutting-edge research. We have assembled an incredible group of
speakers, and we are seeking contributed work from the community.
For more information and to submit your paper, please visit the workshop
website: https://www.sparseneural.net/
Important Dates
- February 3rd, 2023 [AOE]: Submit an abstract and supporting materials
- March 3rd, 2023: Notification of acceptance
- May 5th, 2023: Workshop
Topics (including but not limited to)
- Algorithms for Sparsity
  - Pruning, both for post-training inference and during training
  - Algorithms for fully sparse training (fixed or dynamic), including
    biologically inspired algorithms
  - Algorithms for ephemeral (activation) sparsity
  - Sparsely activated expert models
  - Scaling laws for sparsity
  - Sparsity in deep reinforcement learning
- Systems for Sparsity
  - Libraries, kernels, and compilers for accelerating sparse computation
  - Hardware with support for sparse computation
- Theory and Science of Sparsity
  - When overparameterization is necessary (or not)
  - Optimization behavior of sparse networks
  - Representation ability of sparse networks
  - Sparsity and generalization
  - The stability of sparse models
  - Forgetting owing to sparsity, including fairness, privacy, and bias
    concerns
  - Connecting neural network sparsity with traditional sparse dictionary
    modeling
- Applications of Sparsity
  - Resource-efficient learning at the edge or in the cloud
  - Data-efficient learning for sparse models
  - Communication-efficient distributed or federated learning with sparse
    models
  - Graph and network science applications
This workshop is non-archival, and it will not have proceedings.
Submissions will receive one of three possible decisions:
- Accept (Spotlight Presentation). The authors will be invited to present
  the work during the main workshop, with live Q&A.
- Accept (Poster Presentation). The authors will be invited to present
  their work as a poster during the workshop’s interactive poster sessions.
- Reject. The paper will not be presented at the workshop.
Eligible Work
- The latest research innovations at all stages of the research process,
  from work in progress to recently published papers, where “recent” refers
  to work presented within one year of the workshop, i.e., the manuscript
  first became publicly available on arXiv or elsewhere no earlier than
  February 3, 2022. We permit under-review or concurrent submissions.
- Position or survey papers on any topic relevant to this workshop (see
  above)
Required materials
1. One mandatory abstract (250 words or fewer) describing the work
2. A paper of up to 8 pages, excluding references and appendices, for both
   technical and position papers. We encourage work-in-progress submissions
   and expect most submissions to be approximately 4 pages. Papers can be
   submitted in any of the ICLR, NeurIPS, or ICML conference formats.
We hope you will join us!
Best Regards,
On behalf of the organizing team (Aleksandra, Atlas, Baharan, Decebal,
Elena, Ghada, Trevor, Utku, Zahra)