Connectionists: [CFP] Unifying Concept Representation Learning workshop @ ICLR 2026 - deadline Jan 30
Sara Magliacane
sara.magliacane at gmail.com
Sun Dec 21 03:33:01 EST 2025
Call for Papers for UCRL workshop at ICLR 2026
ICLR 2026 workshop on Unifying Concept Representation Learning
https://ucrl-iclr26.github.io/
Motivation
Several areas at the forefront of AI research are currently witnessing a
convergence of interests around the problem of learning high-quality
concepts from data. Concepts have become a central topic of study in
neuro-symbolic integration (NeSy). NeSy approaches integrate perception –
usually implemented by a neural backbone – with symbolic reasoning,
employing concepts to glue the two steps together: the reasoning step
relies on the concepts detected by the perception step to produce suitable
outputs. Concepts are also used in Explainable AI (XAI), where recent
post-hoc explainers and self-explainable architectures employ them as
building blocks for high-level justifications of model behavior. Compared
to, e.g., saliency maps, such justifications can paint a more abstract and
understandable picture of the machine’s reasoning process, potentially
improving interpretability, interactivity, and trustworthiness – to the
point that concepts have been called the lingua franca of human-AI
interaction.
NeSy and XAI methods hinge on learned concepts being “high-quality”.
Concepts with misaligned semantics can compromise the meaning of model
explanations, the out-of-distribution behavior of NeSy architectures, and
human understanding of the underlying systems. Recent works propose
leveraging disentangled representations to mitigate concept leakage, i.e.,
the presence of irrelevant information in the learned concepts. Causal
Representation Learning (CRL) generalizes disentangled representation
learning to the case where the latent variables depend on each other,
e.g., due to causal relations.
The potential of leveraging CRL to learn more robust and leak-proof
concepts is an emerging area of research with a growing number of
approaches, but many open questions remain. In particular, it is unclear
what properties high-quality concepts should satisfy. Moreover, despite
studying the same underlying object, research in NeSy, XAI and CRL
proceeds on mostly independent tracks, with minimal knowledge transfer.
The branches differ in their working definitions of what concepts are and
what desiderata they ought to satisfy, in what data and algorithms they
should be learned with, and in how to properly assess their quality. As a
result, approaches in one area often ignore insights from the others, and
the central question of how to properly learn and evaluate concepts
remains largely unanswered.
The aim of this ICLR 2026 workshop is to bring together researchers from
NeSy, XAI and CRL, spanning both industry and academia, who are interested
in learning robust, semantically meaningful concepts. We welcome
submissions on the following topics:
- Foundations of concept representations and learning in XAI, CRL and NeSy.
- Supervised and unsupervised techniques for learning concepts from
observational and interventional data, raw inputs, and pre-trained
embeddings.
- Techniques for learning concepts in non-standard settings, e.g., causal
abstraction.
- Design and evaluation of concept-based XAI techniques and
self-explainable concept-based models.
- Interactive human-machine concept acquisition and alignment.
- Applications of concept-based AI systems, including, but not limited to,
reasoning, causality, formal verification, interactive learning, and
explainability.
- Metrics and evaluation techniques for assessing the quality of learned
concepts, with a focus on downstream applications.
Important Dates & Details
Paper submission deadline: January 30th, 2026 23:59 AoE
Notification to authors: March 1st, 2026 23:59 AoE
Workshop date: April 26 or 27, 2026
Workshop location: Rio de Janeiro, Brazil
Submission instructions
We invite submissions on ongoing research that has not yet been published
in a venue with proceedings. While we welcome unfinished work, submissions
in this track should contain original ideas, new connections between
research fields, or novel results. Submissions should be formatted using
the ICML LaTeX template and formatting instructions. Papers should be up to
6 pages in length, including all main results, figures, and tables.
Appendices containing additional details are allowed, but reviewers are not
expected to take them into account.
Submission Link:
https://openreview.net/group?id=ICLR.cc/2026/Workshop/UCRL#tab-your-consoles
Organizing committee
- Amit Dhurandhar (IBM Research)
- Amir-Hossein Karimi (U Waterloo)
- Sara Magliacane (U Amsterdam)
- Stefano Teso (U Trento)
- Efthymia Tsamoura (Huawei Research)
- Zhe Zeng (U Virginia)