Connectionists: 1st CFP IJCNN 25: Special Session on Explainable AI in Neural Networks: Advances, Challenges, and Applications

Alfredo Vellido avellido at cs.upc.edu
Mon Dec 16 10:54:16 EST 2024


*Apologies for cross-posting*

Special Session on **Explainable AI in Neural Networks: Advances, Challenges, and Applications**

**IJCNN 2025**: International Joint Conference on Neural Networks
**June 30 to July 5, 2025**

https://2025.ijcnn.org/

***************************************************
The rise of AI in critical fields such as healthcare, autonomous systems, and finance demands greater transparency in decision-making processes. Neural networks, despite their high performance, remain opaque, creating a barrier to trust and accountability. The European Union’s AI Act and similar regulatory frameworks highlight the need for AI systems to be both explainable and fair. This session addresses the dual challenge of enhancing transparency in neural networks and meeting the growing regulatory and ethical requirements for explainable AI (XAI).

The session aims to explore the latest XAI methods that make neural networks more interpretable without sacrificing performance. It will focus on the development of new techniques that meet normative standards, with a particular emphasis on bias mitigation, fairness, and societal impacts. Through an interdisciplinary approach, the session seeks to advance both the technical and regulatory aspects of XAI for neural networks, fostering collaboration between academia, industry, and policymakers.

**Technical Highlights**:
Neuro-Symbolic Alignment: Exploring methods that combine symbolic reasoning with neural networks to enhance interpretability, allowing models to perform reasoning tasks while remaining transparent.
Monosemantic Decomposition: Building on research from Anthropic, this technique breaks neural networks down into discrete, interpretable features, making model behavior easier to understand (see the sparse-autoencoder sketch after this list).
Bias Mitigation and Fairness: Addressing how XAI techniques can identify and mitigate bias in neural networks, ensuring fairness in decision-making, particularly in compliance with frameworks like the AI Act (a fairness-metric sketch also follows this list).
XAI Metrics and Evaluation: Developing standardized evaluation metrics to assess the effectiveness of XAI methods, with a particular focus on how these metrics apply to neural networks.
Societal and Ethical Implications: Discussing how XAI in neural networks contributes to broader societal impacts, including accountability, user trust, and the reduction of harmful biases.
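
To make the monosemantic decomposition topic concrete, below is a minimal sketch in the spirit of Anthropic's dictionary-learning work: an overcomplete sparse autoencoder is trained on a network's hidden activations with an L1 penalty, so that individual learned features tend to respond to a single interpretable concept. The layer sizes, sparsity weight, and random stand-in activations are illustrative assumptions, not part of the session description.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, d_features: int = 4096):
        super().__init__()
        # Overcomplete dictionary: many more features than activation dims
        # (sizes here are illustrative placeholders).
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps feature activations non-negative and sparse-friendly.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(reconstruction, activations, features, l1_weight: float = 1e-3):
    # Reconstruction term keeps features faithful to the original
    # activations; the L1 penalty pushes each input to activate only a few
    # features, which is what makes individual features interpretable.
    mse = torch.mean((reconstruction - activations) ** 2)
    sparsity = l1_weight * features.abs().mean()
    return mse + sparsity

# Usage: collect hidden activations from a trained network, train the
# autoencoder on them, then inspect which inputs excite each feature.
sae = SparseAutoencoder()
acts = torch.randn(64, 512)  # stand-in for real hidden activations
recon, feats = sae(acts)
loss = sae_loss(recon, acts, feats)
loss.backward()
```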
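
Similarly, the bias-mitigation bullet can be grounded in one standard fairness check that session papers might report: the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The predictions and group labels below are synthetic placeholders.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    # Difference in positive-prediction rates between the two groups;
    # 0.0 means parity, larger magnitudes indicate disparate treatment.
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return float(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # synthetic binary predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # synthetic group membership
print(demographic_parity_difference(y_pred, group))  # 0.5
```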

**IMPORTANT DATES**
Deadline for submissions: 15 January 2025
Notification of decisions: 31 March 2025

We look forward to seeing you in Rome!

**Organizing Committee**:
Alan Perotti, CentAI Institute, Italy
Qi Chen, Victoria University of Wellington, New Zealand
Amanda Horzyk, University of Edinburgh, U.K.
Paulo JG Lisboa, Liverpool John Moores University, U.K.
Asim Roy, Arizona State University, U.S.A.
Alfredo Vellido, Universitat Politècnica de Catalunya, Spain