Connectionists: Call for NeurIPS 2025 Continual and Compatible Foundation Model Updates Workshop
Jessica Echterhoff
jechterh at ucsd.edu
Fri Aug 8 11:34:54 EDT 2025
Dear Connectionists Community,
We’re excited to announce the 1st edition of the NeurIPS 2025 Workshop "AI That Keeps Up: Continual and Compatible Foundation Model Updates (CCFM)", taking place in San Diego on December 6–7, 2025.
Website: https://sites.google.com/view/ccfm-neurips2025
Call for papers: https://sites.google.com/view/ccfm-neurips2025/call-for-papers
Submission deadline: August 22, 2025, AoE.
Topics of interest
Foundation models, despite their impressive capabilities, face a critical challenge: they naturally become outdated. Because they are trained on vast datasets, updating these models frequently is expensive. Crucially, these challenges extend beyond the scope of studies in traditional continual learning, as foundation models require rapid and scalable adaptation to dynamic global changes and the emergence of both generalized and specialized tasks. This workshop addresses the urgent need for up-to-date foundation models. We invite researchers to explore cost-effective methods for frequent updates and adaptation, minimizing forgetting and deterioration, ensuring a consistent user experience, and designing dynamic evaluations that remain relevant as models evolve.
Below we list some of the topics and questions that this workshop seeks to address.
Theoretical foundations of training FMs. What are the theoretical guarantees for generalization on naturally evolving distributions? Is there a theoretical tradeoff between learning and forgetting on natural data? How can we model how FMs store and update knowledge during pretraining, fine-tuning, merging, and adaptation?
Continual pretraining, fine-tuning, merging, and adaptation methods for FMs. How can we efficiently update FMs on generic pretraining data to generalize to new data? Examples include continual learning methods for training FMs at large scale, such as new optimization methods, training strategies, and model merging.
Compatible pretraining, fine-tuning, merging, and adaptation methods for FMs. Can we prevent an updated model from giving incorrect answers to questions that an older generation of the model answered correctly? Contributions include defining compatibility metrics, model update mechanisms, and training interventions that balance the tradeoff between compatibility and generalization.
Understanding temporal shift in FMs and data. How can we characterize the rate of distribution shift for generic pretraining data and web data, as well as for domain-specific and task-specific data? Examples include data analysis for identifying fast- and slow-changing data, as well as specialized continual learning strategies for specific rates of change, such as daily, monthly, or yearly.
Empirical investigation of forgetting and forward transfer in FMs. How can we avoid forgetting old data and deterioration on previous tasks while continually updating FMs? Examples include learning methods to avoid catastrophic forgetting, such as dataset mixing and replay, as well as regularization methods.
Knowledge editing and unlearning in FMs. How should FMs selectively forget information such as personal information?
Developing dynamic benchmarks and evaluations for FMs. How should evaluations and benchmarks for foundation models change over time? Examples include designing dynamic evaluations and benchmarks that automatically extend over time.
Designing robust evaluation protocols for using evolving FMs as evaluators. How can we evaluate backward compatibility and robustness when updating FMs? Examples include evaluation metrics for FMs used in other ML systems as evaluators or data synthesizers, and evaluations of FMs used as assistants.
Submission instructions
To submit your paper, please consider the following instructions and guidelines.
* All contributions should be made via OpenReview. We welcome submissions of original, unpublished material, as well as work that is currently under review (i.e. has been submitted but not yet accepted elsewhere).
* Page limit: Papers should be up to 4 pages, excluding references and supplementary materials.
* Template: Please use the NeurIPS 2025 style files.
* Double-blind reviews: Authors should anonymize their submissions to ensure a double-blind review process.
* LLMs policy: In the preparation of your contributions, the use of LLMs is allowed only as a general-purpose writing assist tool.
* Submission of published work: As noted in the NeurIPS workshop guidelines, we discourage submitting work that has been previously published in other conferences on machine learning or related fields. Work that is presented at the main NeurIPS conference should not appear in a workshop, including as part of an invited talk. We welcome submissions of original, unpublished material, as well as work that is currently under review (i.e., has been submitted but not yet accepted elsewhere).
Publication. The workshop is non-archival. By default, accepted papers will be made publicly available on OpenReview. Authors can choose to opt out if they do not wish for their work to be shared publicly.
Reviewing. Authors should nominate at least one person per contribution as a reviewer. The expected reviewing load is 2-3 papers. If you'd like to nominate someone as a reviewer or self-nominate, please fill in this form.
Attending the workshop. Our workshop is primarily an in-person event, and authors are asked to present a poster at the workshop if possible. A subset of papers will be selected for presentation in short spotlight talks.
Dual Submission policy. We accept dual submissions with other workshops that do not take place on the same date.
Awards. We will be giving best paper awards.
Tentative Timeline
Submission open: July 22, 2025, AoE (Open Now).
Submission deadline: August 22, 2025, AoE.
Reviews due: September 12, 2025, AoE.
Decision notification: on or before September 22, 2025.
For any questions, please contact ccfm-neurips2025 at googlegroups.com.
Looking forward to your contribution to CCFM 2025.
Best Regards,
NeurIPS 2025 CCFM Organizers