Connectionists: DEADLINE EXTENSION [CFP] Special Session IJCNN 2023 - The Coming of Age of Explainable AI (XAI) and ML

Alfredo Vellido avellido at cs.upc.edu
Wed Feb 1 11:50:55 EST 2023


Apologies for cross-posting
===================
LAST CALL FOR PAPERS: DEADLINE EXTENDED: 7th of February
===================

IEEE IJCNN 2023 Special Session on The Coming of Age of Explainable AI 
(XAI) and Machine Learning
June 18-23, 2023, Queensland, Australia
www.cs.upc.edu/~avellido/research/conferences/IJCNN2023-XAIcomingofage.html
https://2023.ijcnn.org/authors/paper-submission

Aims & Scope
------------
Much of current research on Machine Learning (ML) is dominated by 
methods of the Deep Learning family. The more complex their 
architectures, the more difficult it becomes to interpret or explain 
how and why a particular network prediction is obtained, or to 
elucidate which components of the complex system contributed most to 
the resulting decision. This raises concerns about the interpretability 
and lack of transparency of complex models, especially in high-stakes 
application areas such as healthcare, national security, industry or 
public governance, to name a few, in which decision-making processes 
may affect citizens. These concerns are made especially pressing by 
rapid developments in the field of autonomous systems, from cars that 
drive themselves to partner robots and robotic drones.

DARPA (Defense Advanced Research Projects Agency), a research agency of 
the US Department of Defense, was the first to start a research program 
on Explainable AI 
(https://www.darpa.mil/program/explainable-artificial-intelligence) with 
the goal “to create a suite of machine learning techniques that (1) 
Produce more explainable models, while maintaining a high level of 
learning performance (prediction accuracy); and (2) Enable human users 
to understand, appropriately trust, and effectively manage the emerging 
generation of artificially intelligent partners.” Research on 
Explainable AI (XAI) is now supported worldwide by a variety of public 
institutions and legal regulations, such as the European Union’s General 
Data Protection Regulation (GDPR) and the forthcoming Artificial 
Intelligence Act, and similar concerns about transparency and 
interpretability are being raised by governments and organizations 
worldwide. In light of such regulations, the lack of transparency 
(interpretability and explainability) of many ML approaches may end up 
confining ML to niche applications, and it poses a significant risk of 
costly mistakes that cannot be mitigated without a sound understanding 
of the flow of information within the model.
For this special session, we invite papers that address the many 
challenges of XAI in the context of ML models and algorithms. We are 
interested in papers on efficient and innovative algorithmic approaches 
to XAI and on their applications in all areas. The session also aims to 
explore the performance-versus-explanation trade-off space for 
high-stakes applications of ML in light of all types of AI regulation. 
Comprehensive survey papers on existing technologies for XAI are also 
welcome.

We aim to bring together researchers from different fields to discuss 
key issues related to the research and applications of XAI methods and 
to share their experiences of solving problems in high-stakes 
applications across all domains.
Topics of interest to this session include, but are not limited to:

  New neural network architectures and algorithms for XAI
  Interpretability by design
  Rule extraction algorithms for deep neural networks
  Augmentations of AI methods to increase interpretability and transparency
  Innovative applications of XAI
  Verification of AI performance
  Regulation-compliant XAI methods
  Explanation-generation methods for high-stakes applications
  Stakeholder-specific XAI methods for high-stakes applications
  XAI methods auditing in specific domains
  Human-in-the-loop ML: bridging the gap between data scientists and end-users
  XAI through Data Visualization
  Interpretable ML pipelines
  Query Interfaces for DL
  Active and Transfer learning with transparency
  Relevance and Metric Learning
  Deep Neural Reasoning
  Interfaces with Rule-Based Reasoning, Fuzzy Logic and Natural Language Processing

Important Dates
--------------------------
Paper submission: February 7, 2023
Paper decision notification: March 31, 2023

Session Chairs
------------------------
Qi Chen, Victoria University of Wellington, New Zealand
José M Juárez, Universidad de Murcia, Spain
Paulo Lisboa, Liverpool John Moores University, U.K.
Asim Roy, Arizona State University, U.S.A.
Alfredo Vellido, Universitat Politècnica de Catalunya, Spain
