Connectionists: Final CfP: Challenges & Perspectives in Creating Large Language Models

Matthias Gallé mgalle at gmail.com
Sun Feb 20 20:53:55 EST 2022


*****Submission Deadline: Feb 28th*****

*Call for Papers: Workshop on Challenges & Perspectives in Creating Large
Language Models*
May 27th 2022 (w/ ACL)
https://bigscience.huggingface.co/acl-2022

Two years after the appearance of GPT-3, large language models seem to have
taken over NLP. Their capabilities, limitations, societal impact and the
potential new applications they unlocked have been discussed and debated at
length. A handful of replication studies have been published since then,
confirming some of the initial findings and discovering new limitations.
This workshop aims to gather researchers and practitioners involved in the
creation of these models in order to:

1. Share ideas on the next directions of research in this field, including
– but not limited to – grounding, multi-modal models, continuous updates
and reasoning capabilities.
2. Share best-practices, brainstorm solutions to identified limitations and
discuss challenges, such as:

   - *Infrastructure*. What are the infrastructure and software challenges
   involved in scaling models to billions or trillions of parameters, and in
   deploying training and inference on distributed servers when each model
   replica is itself larger than a single node's capacity?
   - *Data*. While the self-supervised setting dispenses with human
   annotation, the importance of cleaning and filtering, and the biases and
   limitations of existing or reported corpora, have become increasingly
   apparent in recent years.
   - *Ethical & Legal frameworks*. What types of data can or should be used,
   what kind of access should be provided, and what filters are or should be
   necessary?
   - *Evaluation*. Investigating the diversity of intrinsic and extrinsic
   evaluation measures, how they correlate, and how the performance of a
   very large pretrained language model should be evaluated.
   - *Training efficiency.* Discussing scaling approaches, practical
   questions around large-scale training hyper-parameters and
   early-stopping conditions, and measures to reduce the associated
   energy consumption.


This workshop is organized by the BigScience
<https://bigscience.huggingface.co/> initiative and will also serve as the
closing session of this year-long initiative aimed at developing a
multilingual large language model, which currently gathers 900
researchers from more than 60 countries and 250 institutions. Its goal is
to investigate the creation of a large-scale dataset and model from a very
wide diversity of angles.


*Submissions*
We call for relevant contributions, either in long (8 pages) or short (4
pages) format. Accepted papers will be presented during a poster session.
Submissions can be archival or non-archival.
Submissions should be made via OpenReview
(https://openreview.net/group?id=aclweb.org/ACL/2022/Workshop/BigScience).

*Dates*
Feb. 28, 2022: Submission Deadline
March 26, 2022: Notification of Acceptance
April 10, 2022: Camera-ready papers due