Connectionists: ESANN 2021, 1st Special Session CFP: Interpretable Models in ML and Explainable AI
Alfredo Vellido
avellido at cs.upc.edu
Thu Mar 18 07:33:17 EDT 2021
*** apologies for cross-posting ***
ESANN 2021
The 29th European Symposium on Artificial Neural Networks, Computational
Intelligence and Machine Learning.
Bruges, Belgium: 6-8 October 2021. https://www.esann.org
CFP Special Session: INTERPRETABLE MODELS in MACHINE LEARNING and
EXPLAINABLE ARTIFICIAL INTELLIGENCE
============================================================================================
Machine learning models are currently dominated by neural networks and,
in particular, by their deep variants. These models frequently achieve
promising results, but deep networks usually act as black boxes, as do
many other machine learning models. Moreover, thanks to powerful tools,
the learning process, often a gradient-descent approach, is hidden from
the developer as well as from the user of the model. As a result, such
models can mainly be assessed only through performance evaluation. This
is problematic in many applications, for example in medicine,
engineering, and economics/finance, which require a transparent decision
or prediction process.
Recently, considerable effort has been devoted to developing
interpretable machine learning models and to approaches that explain the
decision/prediction process to the user.
The aim of this special session is to make these new approaches and
models more visible to the community. We invite papers highlighting
different aspects of interpretable models and of explaining decision
support processes and inference models involving artificial
intelligence. The session covers a broad range of topics, from
theoretical considerations and new machine learning models to machine
learning applications that require or benefit from interpretability and
explainability.
Topics include, but are not limited to:
- Machine learning models with inherent interpretability
- Methods to explain existing models
- Model verification
- Visualization and visual inspection of the operation of machine
learning models
- Confidence and trustworthiness in AI
- Prediction confidence and quantification of uncertainty
- Trade-off between interpretability and performance
- Model transparency in safety-critical applications
We welcome both new theoretical developments and practical applications.
===========
ORGANIZERS:
Sascha Saralajew (Bosch Center for Artificial Intelligence, Germany),
Alfredo Vellido (Universitat Politècnica de Catalunya - UPC
BarcelonaTech, Spain),
Thomas Villmann (University of Applied Sciences Mittweida, Saxony
Institute for Computational Intelligence and Machine Learning, Germany),
Paulo Lisboa (Liverpool John Moores University, United Kingdom)
DATES:
Paper submission: 10 May 2021
Decisions: 20 July 2021
SUBMISSION:
https://www.esann.org/node/6