<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div>We apologize if you receive multiple copies of this e-mail.</div><div>***********************Call for papers*****************************************************</div><div>3rd Workshop on Explainable and Ethical AI, jointly with ICPR’2024</div><div>https://xaie.sciencesconf.org/</div><div>*******************************************************************************************</div><div>The third edition of WS XAI-E follows two successful editions at ICPR’2020 (https://edl-ai-icpr.labri.fr/) and ICPR’2022 (https://xaie-icpr.labri.fr/).</div><div>The WS will be held on December 1st, 2024 in Kolkata, India, jointly with the ICPR’2024 conference (https://icpr2024.org/).</div><div><br></div><div>**The topics covered by the workshop are:</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Naturally explainable AI methods,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Post-hoc explanation methods for deep neural networks, including transformers and generative AI,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Evaluation metrics for explanation methods,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Hybrid XAI,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- XAI in generative AI,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Visualization of explanations and user interfaces,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Image-to-text explanations,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Concept-based explanations,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Use of explanation methods for deep NN models in training and generalization,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Ethical considerations when using pattern recognition models,</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- Real-world applications of XAI methods.</div><div><br></div><div>Methodologies in explainability concern the creation of explanations, their representation, and the quantification of their confidence, while those in AI ethics include automated audits, detection of bias in data and models, the ability to control AI systems to prevent harm, and other methods to improve AI explainability and trust in AI in general.</div><div><br></div><div>We are witnessing the emergence of an “AI economy and society”, in which AI technologies increasingly impact many aspects of business as well as of everyday life. We read with great interest about recent advances in AI medical diagnostic systems, self-driving cars, and the ability of AI technology to automate many aspects of business decisions, such as loan approvals, hiring, and policing. In recent years, generative AI has emerged as a major topic, promising great benefits but also raising well-founded fears of significant disruption to all aspects of society; its problems, such as “hallucinations” and bias, are also well known. However, as evidenced by recent experience, AI systems may produce errors, can exhibit overt or subtle bias, may be sensitive to noise in the data, and often lack technical and judicial transparency and explainability. These shortcomings have been reported not only in the scientific press but also, importantly, in the general press (accidents with self-driving cars, biases in AI-based policing, hiring, and loan systems, biases in face recognition, seemingly correct medical diagnoses later found to have been made for the wrong reasons, etc.).
These shortcomings raise many ethical and policy concerns, not only in the technological and research communities but also among policymakers and the general public, and will inevitably impede wider adoption of AI in society.</div><div> </div><div>The problems related to ethical AI are complex and broad. They encompass not only technical issues but also legal, political, and ethical ones. One of the key components of ethical AI systems is explainability or transparency, but other issues, such as detecting bias, the ability to control outcomes, and the ability to objectively audit AI systems for ethics, are also critical for the successful application and adoption of AI in society. Consequently, explainable and ethical AI are urgent and popular topics in IT as well as in the business, legal, and philosophy communities. Many workshops in this field are held at top conferences.</div><div>The third workshop on explainable AI at ICPR addresses methodological aspects of explainable and ethical AI in general, and includes related applications and case studies, with the aim of tackling these important problems from a broad research perspective.</div><div><br></div><div>** Organizing committee:</div><div>Prof. J. Benois-Pineau, University of Bordeaux, jenny.benois-pineau@u-bordeaux.fr</div><div>Dr. R. Bourqui, University of Bordeaux, romain.bourqui@u-bordeaux.fr</div><div>Dr. R. Giot, University of Bordeaux, romain.giot@u-bordeaux.fr</div><div>Prof. D. Petkovic, CS Department, San Francisco State University, petkovic@sfsu.edu</div><div><br></div><div>**Important dates:</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- July 14, 2024: Paper submission</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- September 20, 2024: Notification to authors</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>- September 27, 2024: Camera-ready versions</div><div><br></div><div><br></div><div>The WS papers will be published in the proceedings of ICPR’2024.</div><div> </div><div>Romain Giot, Jenny Benois-Pineau, Romain Bourqui, Dragutin Petkovic</div><div>WS organizers</div><div><br></div><div>
<div style="color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">Jenny Benois-Pineau, PhD, HDR, <br>Professor of Computer Science, <br>Chair of International relations<br>School of Sciences and Technologies<br>University of Bordeaux</div><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">351, crs de la Libération<br>33405 Talence<br>France<br>tel.: +33 (0) 5 40 00 84 24</div></div>
</div>
<br></body></html>