Connectionists: CALL FOR PAPERS: TRUST-AI – THE EUROPEAN WORKSHOP ON TRUSTWORTHY AI

Eleni Tsalapati etsalapati at gmail.com
Wed Mar 19 10:31:47 EDT 2025


*CALL FOR PAPERS: TRUST-AI – THE EUROPEAN WORKSHOP ON TRUSTWORTHY AI*

===

Researchers and practitioners in trustworthy AI are invited to submit
papers to TRUST-AI, the European Workshop on Trustworthy AI, organized as
part of ECAI 2025 in Bologna, Italy.



The aim of TRUST-AI is to serve as a meeting place for exchange and
discussion on trustworthy AI. While frameworks for trustworthy AI are
rapidly evolving, and trustworthiness requirements for AI are increasingly
detailed in guidelines and legislation, it is necessary to establish a
common understanding of, and insight into, how to realize trustworthy AI in
specific business or sociotechnical contexts. This is particularly so given
the prominence of trustworthy AI in the AI Act currently being implemented
in Europe. In response, we invite researchers and practitioners to a
European workshop on this topic of substantial academic and practical
interest.



In the workshop, we will address trustworthy AI from a human-centred
perspective over the lifetime of the AI system. Our starting point is
trustworthy AI grounded in an AI risk management approach, as recommended
in frameworks such as those by NIST and ENISA and promoted in legislation
such as the European AI Act. Furthermore, we consider trustworthy AI as
spanning the technical attributes of the AI system, its impact from a
business perspective, its compliance with ethical and legal requirements,
and its philosophical ethics foundation; a range of methodological and
technical approaches is needed to adequately assess and optimize the
trustworthiness of an AI system.



*LOCATION:* Bologna, Italy – as part of ECAI 2025

*DATE:* October 25 or 26 (to be decided by ECAI 2025 program committee)

*WORKSHOP FORMAT:* On-site attendance

*SUBMISSION DEADLINE:* July 25

*WEBSITE:* https://sites.google.com/view/trust-ai/



*SUBMISSION CATEGORIES*

Participants are invited to submit position or short papers to be presented
at the workshop. Papers should address one of the workshop topic areas.



*POSITION PAPERS (2-4 pages):* Papers presenting a specific position or
open questions in need of reflection or discussion. May also include case
experiences or planned research. Length should be 5,000-10,000 characters.



*SHORT PAPERS (5-9 pages):* Papers presenting theoretical contributions,
case experiences, or findings from empirical studies. May also include
work in progress. Length should be 12,500-22,500 characters.



*PROCESSING OF SUBMISSIONS*

Submissions will be reviewed by three independent reviewers. The review
process is single blind, meaning that author information is included in the
submissions.



Accepted position papers will be published on the workshop website. Accepted
short papers will be published in the CEUR Workshop Proceedings (
https://ceur-ws.org/).



Through the processing of the submissions, specific themes in need of
reflection and discussion at the workshop will be identified.



*DATES OF NOTE*

- *July 25:* Submission deadline

- *August 25:*  Author notification

- *September 25:*  Final version of papers

- *October 25 or 26:* Workshop



*KEY TOPICS OF INTEREST*

TRUST-AI aims to foster needed reflection on and discussion of trustworthy
AI as it is manifested in research projects and case studies. Specifically,
we encourage participants to contribute on the following topic areas:



- *Assessment of trustworthy AI:* Methods, tools, and best practices for
trustworthiness assessment.

- *Human-Centred Trustworthy AI:* How to incorporate user perspectives and
values in assessment and optimization of trustworthy AI. How to manage
conflicting priorities between different stakeholders.

- *Risk Management for Trustworthy AI:* Frameworks for identifying,
assessing, and mitigating risks associated with AI trustworthiness.

- *Technological Advancements for Trustworthy AI:* Emerging technologies to
support the development, deployment, and verification of trustworthy AI.

- *The Ethical Basis of Trustworthy AI:* Reflections on the ethical
foundations of trustworthy AI, its principles and values.

- *Trustworthy AI and the AI Act:* Legal requirements for Trustworthy AI
and means to assess and support compliance.

- *Navigating Ethical and Legal Requirements for Trustworthy AI:* How AI
can satisfy legal and ethical requirements. Distinctions and similarities
between trustworthy AI as an ethical concept and AI as compliant with legal
requirements.

- *Trustworthiness Optimization throughout the AI Lifecycle:* How to ensure
and maintain trustworthiness throughout AI development and deployment.

- *Steps Towards Certification Programmes for Trustworthy AI:* Testing
methods and proposals for compliance requirements.



*ORGANIZERS*

The organising committee of TRUST-AI includes:



- Asbjørn Følstad, SINTEF, Norway

- Dimitris Apostolou, ICCS & University of Piraeus, Greece

- Steve Taylor, University of Southampton, UK

- Andrea Palumbo, KU Leuven, Belgium

- Eleni Tsalapati, ATC, Greece

- Giannis Stamatellos, Institute of Philosophy & Technology, Greece

- Arnaud Billion, IBM France Lab, France

- Rosario Catelli, Engineering, Italy