<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
Apologies for cross-posting<br>
===================<br>
1st CALL FOR PAPERS<br>
===================<br>
<br>
IEEE IJCNN 2021 Special Session on<br>
Transparent and Explainable Artificial Intelligence (XAI) for Health<br>
<br>
July 18-22, 2021, Virtual Event.<br>
<a href="https://www.cs.upc.edu/~avellido/research/conferences/IJCNN2021-ssTranspXAI4Health.html">https://www.cs.upc.edu/~avellido/research/conferences/IJCNN2021-ssTranspXAI4Health.html</a><br>
<br>
Aims & Scope<br>
------------<br>
From the widespread adoption of electronic health
records to basic research in pharma, and from the popularization of
health wearables to the digitalization of procedures at the point of
care, medicine and healthcare are bringing data to the fore of their
practice. This abundance of data in turn calls for methods that
transform raw information into genuinely usable knowledge, including
support for high-stakes decisions.<br>
Machine Learning (ML) is enjoying unprecedented attention in
healthcare and medicine, riding the current wave of popularity of
deep learning (DL) and the umbrella concept of Big Data. Such
attention may bear little fruit, however, unless data scientists
address a major limitation that is particularly sensitive in the
medical domain: the lack of interpretability of many ML approaches
and, in particular, DL methods, which in turn leads to limited
explainability. Without a sound understanding of the flow of
information in the model, this limitation may confine ML to niche
applications and poses a significant risk of costly mistakes.<br>
Domains where decision-making impacts our health motivate this
special session, to which we invite current research on eXplainable
Artificial Intelligence (XAI). The goal of XAI is the design of
techniques and approaches that retain model performance while
explaining their outputs in human-understandable terms. With these
capabilities, clinical practitioners will be able to integrate the
models into their own reasoning, gaining insights into the data and
checking compatibility with working guidelines at the point of
care.<br>
This session aims to explore this performance-versus-explainability
trade-off space for medical and healthcare applications of ML. We
aim to bring together researchers from different fields to discuss
key issues in the research and application of XAI methods and to
share their experience of solving problems in medicine and
healthcare. Applications leading towards routine clinical practice
are particularly welcome.<br>
<p>Topics that are of interest to this session include but are not
limited to:</p>
<ul>
<li>Interpretable ML Models in medicine and healthcare:
theoretical and practical development</li>
<li>XAI for electronic health records</li>
<li>Integration of XAI in medical devices</li>
<li>Human-in-the-loop ML: bridging the gap between data and
medical experts</li>
<li>Interpretability through Data Visualization</li>
<li>Interpretable ML pipelines in medicine and healthcare</li>
<li>Query Interfaces for DL</li>
<li>Active and Transfer learning</li>
<li>Relevance and Metric Learning</li>
<li>Deep Neural Reasoning</li>
<li>Interfaces with Rule-Based Reasoning, Fuzzy Logic and Natural
Language Processing</li>
<li>Assessment of bias and discrimination in data-based models</li>
</ul>
<br>
Important Dates<br>
---------------<br>
Paper submission: January 15, 2021<br>
Paper decision notification: March 15, 2021<br>
<br>
Session Chairs<br>
--------------<br>
Alfredo Vellido (Universitat Politècnica de Catalunya, Spain)<br>
Paulo J.G. Lisboa (Liverpool John Moores University, U.K.)<br>
José D. Martín (Universitat de València, Spain)<br>
</body>
</html>