Connectionists: 1st CFP, IEEE IJCNN 2021 special session on Transparent and Explainable Artificial Intelligence (XAI) for Health

Alfredo Vellido avellido at cs.upc.edu
Tue Dec 29 08:20:21 EST 2020


Apologies for cross-posting
===================
1st CALL FOR PAPERS
===================

IEEE IJCNN 2021 Special Session on
Transparent and Explainable Artificial Intelligence (XAI) for Health

July 18-22, 2021, Virtual Event.
https://www.cs.upc.edu/~avellido/research/conferences/IJCNN2021-ssTranspXAI4Health.html

Aims & Scope
------------
From the widespread implementation and use of electronic health records 
to basic research in pharma, and from the popularization of health 
wearables to the digitalization of procedures at the point of care, the 
domains of medicine and healthcare are bringing data to the fore of 
their practice. This abundance of data in turn calls for methods that 
transform such raw information into novel knowledge that is truly 
usable, including support for high-stakes decisions.
Machine Learning (ML) is enjoying unprecedented attention in healthcare 
and medicine, riding the current wave of popularity of deep learning 
(DL) and the umbrella concept of Big Data. But such attention may bear 
little fruit unless data scientists effectively address one major 
limitation that is particularly sensitive in the medical domain: the 
lack of interpretability of many ML approaches and, particularly, DL 
methods, which in turn limits their explainability. Without a sound 
understanding of how information flows through a model, ML risks being 
confined to niche applications and carries a significant risk of costly 
mistakes.
Domains where decision-making impacts our health motivate this special 
session, to which we invite current research on eXplainable Artificial 
Intelligence (XAI). The goal of XAI is the design of techniques and 
approaches that still retain model performance, while being able to 
explain their outputs in human-understandable terms. With these 
capabilities, clinical practitioners will be able to integrate the 
models into their own reasoning, gaining insights about the data and 
checking compatibility with working guidelines at the point of care.
This session aims to explore such performance-versus-explanation 
trade-off space for medical and healthcare applications of ML. We aim to 
bring together researchers from different fields to discuss key issues 
related to the research and applications of XAI methods and to share 
their experiences of solving problems in medicine and healthcare. 
Applications leading towards routine clinical practice are particularly 
welcome.

Topics that are of interest to this session include but are not limited to:

  * Interpretable ML Models in medicine and healthcare: theoretical and
    practical development
  * XAI for electronic health records
  * Integration of XAI in medical devices
  * Human-in-the-loop ML: bridging the gap between data and medical experts
  * Interpretability through Data Visualization
  * Interpretable ML pipelines in medicine and healthcare
  * Query Interfaces for DL
  * Active and Transfer learning
  * Relevance and Metric Learning
  * Deep Neural Reasoning
  * Interfaces with Rule-Based Reasoning, Fuzzy Logic and Natural
    Language Processing
  * Assessment of bias and discrimination in data-based models


Important Dates
---------------
Paper submission: January 15, 2021
Paper decision notification: March 15, 2021

Session Chairs
--------------
Alfredo Vellido (Universitat Politècnica de Catalunya, Spain)
Paulo J.G. Lisboa (Liverpool John Moores University, U.K.)
José D. Martín (Universitat de València, Spain)