Connectionists: [CFP] A Human-Centric Perspective of Explainability, Interpretability and Resilience in Computer Vision
EMANUEL DI NARDO
emanuel.dinardo at collaboratore.uniparthenope.it
Fri Dec 15 09:12:52 EST 2023
[Apologies if you receive multiple copies of this CFP]
Call for papers for special session "A Human-Centric Perspective of Explainability, Interpretability and Resilience in Computer Vision"
The special session is part of IJCNN at IEEE WCCI 2024 in Yokohama, Japan.
Scope and topics
Computer vision has always been an extremely popular topic in the field of Artificial Intelligence. It is of fundamental importance in many high-impact application areas, such as security, with visual object tracking, and the medical field, where object detection and segmentation can help predict diseases and locate possibly diseased regions in images. Computer vision techniques apply to virtually any kind of visual input, even from other domains: one-dimensional signals such as ECG or EEG can easily be transformed into two- or three-dimensional representations and treated as images with high semantic meaning. Moreover, the latest generative techniques can produce artificial data with extraordinary fidelity.
The techniques produced in this area work on both static images and image sequences, which can extend temporally or volumetrically. Because some of these applications are extremely sensitive and high-impact for humans, it has become necessary in recent years to understand why a neural model working on images chooses one response over another. This is possible through explainable Artificial Intelligence techniques. Increasingly, however, it is important not only to explain why an artificial neural network makes its choices, but also, and especially, to provide architectures that explicitly or implicitly expose a set of explanations or rules, making it possible to interpret a given output for a given input and to verify, and even predict, the mechanisms that are activated within the network. Finally, it is becoming more and more appropriate to complement this kind of analysis with a study of how well a model can "defend" and "adapt" itself against elements in the external world that attempt to confuse it and steer it down the wrong path.
Thus, the purpose of this special session is to revisit computer vision models from a different perspective: no longer only data- and performance-driven, as most new algorithms are, but human-centric. Through processes of explainability, interpretability, and resilience, the goal is to unravel the uncertainty that surrounds deep learning, especially in computer vision, and to make artificial intelligence reliable from a human perspective.
The topics of interest for this special session include (but are not limited to):
* Explainability and interpretability of deep neural networks
* Resilient models in computer vision
* Robustness and adaptation to adversarial inputs
* Visualization techniques for explainability and interpretability
* Meta-explanation, generative description of outputs
* Specific explainability and effects of visual attributes in images
* Dataset reliability
* Multi-source imaging interpretation
The paper submission deadline is January 15, 2024.
To submit a paper, use the EDAS platform (https://edas.info/newPaper.php?c=31628&track=121739) and specify, in the topics section, that your paper is for "A Human-Centric Perspective of Explainability, Interpretability and Resilience in Computer Vision".
Paper submission guidelines can be found on the WCCI website (https://2024.ieeewcci.org/).
All papers accepted and presented at WCCI 2024 will be included in the conference proceedings, published on IEEE Xplore.
All details can be found here (https://sites.google.com/view/heiro-ijcnn-call-for-paper/home)
Organizers
Emanuel Di Nardo, PhD, University of Naples Parthenope, Italy
Prof. Ihsan Ullah, University of Galway, Ireland
Prof. Angelo Ciaramella, University of Naples Parthenope, Italy
------
Emanuel Di Nardo, PhD
University of Naples Parthenope
Naples, Italy
emanuel.dinardo at uniparthenope.it
https://research.emanueldinardo.com