Connectionists: DEADLINE EXTENSION: IEEE WCCI/IJCNN special session on Explainable Computational/Artificial Intelligence

Alfredo Vellido avellido at cs.upc.edu
Sun Jan 12 11:59:10 EST 2020


Apologies for cross-posting
===================
NEW DEADLINE: January 31st
===================

IEEE WCCI/IJCNN 2020 Special Session on
EXPLAINABLE COMPUTATIONAL/ARTIFICIAL INTELLIGENCE

July 19-24, 2020, Glasgow, Scotland, UK.

www.cs.upc.edu/~avellido/research/conferences/IJCNN2020-ssExplainableML.html

Aims & Scope
------------
The spectacular successes in machine learning (ML) have led to a 
plethora of Artificial Intelligence (AI) applications. However, the 
large majority of these successful models, like deep neural networks, 
support vector machines, etc. are black boxes, opaque, non-intuitive and 
difficult for people to understand. There are critical domains that 
demand more intelligent, autonomous, and symbiotic systems, like 
medicine, security, legal, the military, finance and transportation, to 
mention a few, for which performance is not the only quality indicator. 
These are areas where decision-making faces high risks due to the 
involvement of human lives, critical infrastructure, very costly 
operations, national threats, etc. In situations like these, decision 
makers need much more than raw numeric performance: they favor 
alternative solutions that provide a rationale and are more knowledge-based.

The goal of Explainable AI (XAI) is to create a suite of ML techniques 
that i) result in more explainable models while maintaining a high 
level of learning performance, and ii) enable human users to 
understand, appropriately trust, and effectively manage a new 
generation of artificially intelligent machine tools. 
Continued advances promise to produce autonomous systems that will 
perceive, learn, decide, and act on their own. However, the 
effectiveness of these systems is limited by the machines' current 
inability to explain their decisions and actions to human users.

This session will explore the performance-versus-explanation trade-off 
space. This will include ML models that are interpretable by design. 
Some, like fuzzy systems and rule induction, have general function 
approximation properties. Also important are algorithms producing 
models in mathematical languages, such as algebraic functions, 
differential equations, and piecewise non-linear models. Despite the 
differences between the approaches, there are common elements and basic 
methodologies that are present in many applications. We will bring 
together researchers from different fields to discuss key issues related 
to the research and applications of XAI methods and to share their 
experiences of solving common problems.
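As a concrete illustration of one interpretable-by-design approach mentioned above, rule induction, the classic OneR algorithm learns a single-feature rule table that a human can read directly. The following is a minimal, self-contained Python sketch; the toy weather data and variable names are invented for illustration, not part of the session material:

```python
from collections import Counter, defaultdict

def one_r(rows, labels):
    """Learn a OneR classifier: choose the single feature whose
    value -> majority-label rule table misclassifies the fewest rows."""
    n_features = len(rows[0])
    best = None  # (errors, feature index, rule table)
    for f in range(n_features):
        # Count labels per observed value of feature f.
        by_value = defaultdict(Counter)
        for row, y in zip(rows, labels):
            by_value[row[f]][y] += 1
        # Predict the majority label for each value.
        table = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(table[row[f]] != y for row, y in zip(rows, labels))
        if best is None or errors < best[0]:
            best = (errors, f, table)
    return best[1], best[2]

# Toy data: (outlook, windy) -> play outdoors?
rows = [("sunny", "no"), ("sunny", "yes"), ("overcast", "no"),
        ("rain", "no"), ("rain", "yes"), ("overcast", "yes")]
labels = ["no", "no", "yes", "yes", "no", "yes"]

feature, rules = one_r(rows, labels)
# The learned model is a directly readable rule table, e.g.
# "IF outlook == overcast THEN play = yes".
print(f"Decide on feature {feature} via {rules}")
```

The model's entire decision logic is the printed rule table, which makes the performance-versus-explanation trade-off tangible: OneR sacrifices accuracy relative to a black-box ensemble but is fully transparent.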

Topics of interest to this session include, but are not limited to:
•    Interpretable ML Models
•    Query Interfaces for Deep Learning
•    Interactive User Interfaces
•    Active and Transfer learning
•    Relevance and Metric Learning
•    Practical Applications of Interpretable Machine Learning
•    Deep Neural Reasoning

NEW Dates
---------------
Paper submission: January 31, 2020
Paper decision notification: March 15, 2020

Session Chairs
--------------
Julio J. Valdés (National Research Council Canada)
Paulo J.G. Lisboa, Sandra Ortega-Martorell, Iván Olier (Liverpool John 
Moores University, U.K.)
Alfredo Vellido (Universitat Politècnica de Catalunya, Spain)

Submission: https://ieee-cis.org/conferences/ijcnn2020/upload.php