Connectionists: [CFP] [Extended Deadline] WSDM 2026 Workshop on Benchmarking Causal Models (CausalBench’26)

Pratanu Mandal pmandal5 at asu.edu
Wed Nov 19 16:01:27 EST 2025


===============
CALL FOR PAPERS
===============

-------------------------------------------------------------------
⚠ Paper submission deadline has been extended to November 30, 2025
-------------------------------------------------------------------


WSDM 2026 Workshop on Benchmarking Causal Models (CausalBench’26)

[Location]
BOISE, Idaho, USA (in conjunction with the ACM International Conference on
Web Search and Data Mining – WSDM 2026)

[Date]
February 26, 2026

[Website]
https://wsdm26.causalbench.org/



OVERVIEW
========

The WSDM Workshop on Benchmarking Causal Models (CausalBench) aims to
promote scientific collaboration, reproducibility, and fairness in causal
learning research by providing a dedicated venue for work on benchmarking
data, algorithms, models, and metrics for causal learning. CausalBench
addresses the growing need for unified, publicly available, and
configurable benchmarks that support causal discovery, causal effect
estimation, and more general causal inference and learning research
problems (e.g., A/B testing, experimental design, mechanistic
interpretability, causal reasoning, and causal reinforcement
learning) across diverse
applications, such as web search, data mining, public health, and
sustainability.

Standardized evaluation has historically driven progress in machine
learning, as seen with the UCI ML and KDD repositories, by encouraging
collaborative research and reproducible science. The causal learning
community now faces similar challenges: a lack of unified benchmark
datasets, algorithms, and metrics for reproducible evaluation. The
CausalBench workshop aims to
- help identify existing datasets and metrics for causal learning and
integrate them into standardized evaluation protocols,
- encourage coverage, calibration, and uncertainty reporting for causal
estimates,
- develop ontologies for benchmarking, improving transparency and
collaboration,
- address challenges of incomplete causal knowledge and integration of
heterogeneous datasets, and
- help define evaluation standards to scientifically quantify progress in
causal learning.

The workshop will bring together researchers and practitioners to discuss
new algorithms, datasets, and evaluation methodologies that help establish
trust in causal learning innovation. Our goal is to foster discussion and
community practices that make evaluation more transparent and comparable
across different causal tasks—e.g., clarifying task taxonomies, surfacing
assumption-linked metrics, and sharing accessible benchmark resources and
artifacts.

By encouraging open exchange on datasets and metrics, the workshop aims to
catalyze incremental, evidence-based improvements to causal evaluation.



TOPICS OF INTEREST
==================

CausalBench welcomes submissions in the following research and application
areas:

[Benchmarking and Evaluation]
Software frameworks, datasets, standard workflows/pipelines, and metrics
for evaluating causal learning algorithms.

[Algorithmic Advances]
Novel causal discovery and causal inference models/algorithms with
reproducible benchmarking results.

[Data and Systems]
Open-source platforms for data exchange, (automatic) model evaluation, and
reproducing results for any causality-related research problem, e.g.,
causal inference, causal discovery, causal representation learning, and
causal recommendation.

[Trustworthy AI]
Causality-inspired methods, datasets, or metrics for benchmarking any
aspect of the trustworthiness of AI systems and methods, including
interpretability, safety, robustness, bias, and fairness.

[Applications]
Real-world demonstrations of causal benchmarking in domains such as
healthcare, finance, sustainability, and social systems, with a particular
emphasis on applications in web search and data mining.

Additional thematic sessions (e.g., invited talks, panels) will be held on
emerging challenges in causal benchmarking.



SUBMISSION GUIDELINES
=====================

[Submission Site]
https://easychair.org/my/conference?conf=causalbench26

[Format]
Submissions must be formatted according to the ACM SIG Proceedings Template
double-column format, with a font size no smaller than 9pt.

[Length]
We invite submissions of extended abstracts (2-3 pages, excluding
references) and research articles (4-6 pages, excluding references) that
align with the workshop's themes.

[File Type]
PDF, maximum file size 10 MB.

[Review Process]
Single-blind review.

[Accepted Papers]
All accepted papers will be presented at the workshop and included in the
official WSDM Companion Proceedings Volume.



ARTIFACTS AND REPRODUCIBILITY
=============================

Submissions should emphasize reproducibility, benchmark availability, and
evaluation methodology. Authors are encouraged to make code, datasets,
experimental setups, and other supporting materials publicly available and
to include links to them in the paper. While not required, authors are
also welcome to share the relevant artifacts (data, models, metrics, and
benchmark runs) on CausalBench's repositories.



IMPORTANT DATES
===============

All deadlines are at 11:59 PM (Anywhere on Earth) unless otherwise noted.

[Paper Submission Deadline (extended)]
✗ Original: November 13, 2025
✓ Extended: November 30, 2025

[Author Notification]
December 18, 2025

[Camera-ready Deadline]
TBD

[Workshop Date]
February 26, 2026



ORGANIZERS
==========

[General Chairs]
K. Selçuk Candan, Arizona State University (Email: candan at asu.edu)
Huan Liu, Arizona State University (Email: huanliu at asu.edu)

[Program Chairs]
Ruocheng Guo, Intuit AI Research (Email: ruocheng_guo at intuit.com)
Paras Sheth, Amazon (Email: parshet at amazon.com)

[Web Chair]
Ahmet Kapkiç, Arizona State University (Email: akapkic at asu.edu)

[Publicity Chair]
Pratanu Mandal, Arizona State University (Email: pmandal5 at asu.edu)



DUPLICATE SUBMISSIONS AND NOVELTY REQUIREMENTS
==============================================

All submissions will undergo a rigorous peer-review process to ensure
quality and originality. Submissions must present original work not under
review elsewhere. Concurrent submission to other venues is not permitted.
Papers must cite prior work appropriately, including authors’ own related
publications. The submitted paper must substantially differ from earlier
workshop papers by the same authors.



INCLUSION AND DIVERSITY
=======================

CausalBench embraces the values of diversity and inclusion in writing,
participation, and representation. Authors should use inclusive language
and examples that avoid stereotyping or marginalization of any group.



CONFLICTS OF INTEREST
=====================

Authors must declare any conflicts of interest with organizers or reviewers
(e.g., recent collaborations, shared affiliations, advisor/advisee
relationships). Submissions with incorrect conflict declarations are
subject to rejection.



ACM PUBLICATIONS POLICY ON RESEARCH INVOLVING HUMAN PARTICIPANTS AND SUBJECTS
=============================================================================

As a published ACM author, you and your co-authors are subject to all ACM
Publications Policies, including ACM's new Publications Policy on Research
Involving Human Participants and Subjects.



POLICY ON AUTHORSHIP REQUIREMENTS AND GENERATIVE AI
===================================================

We follow the ACM policy on authorship requirements. Specifically on the
use of generative AI tools and technologies, the guidelines note that: "The
use of generative AI tools and technologies to create content is permitted
but must be fully disclosed in the Work. For example, the authors could
include the following statement in the Acknowledgements section of the
Work: ChatGPT was utilized to generate sections of this Work, including
text, tables, graphs, code, data, citations, etc. If you are uncertain
about the need to disclose the use of a particular tool, err on the side of
caution, and include a disclosure in the acknowledgements section of the
Work."



CONTACT INFORMATION
===================

For questions or clarifications, please contact the organizers at
wsdm26 at causalbench.org.



ACKNOWLEDGEMENTS
================

We thank all the contributors and the community for their continuous
support and feedback in making CausalBench a reliable and valuable resource
for causal learning research.

This workshop is funded by NSF Grant 2311716, "CausalBench: A
Cyberinfrastructure for Causal-Learning Benchmarking for Efficacy,
Reproducibility, and Scientific Collaboration", and NSF Grants #2230748,
"PIRE: Building Decarbonization via AI-empowered District Heat Pump
Systems", #2412115, "PIPP Phase II: Analysis and Prediction of Pandemic
Expansion (APPEX)" and USACE #GR40695, "Designing nature to enhance
resilience of built infrastructure in western US landscapes".