We invite researchers working on interpretability and explainability in machine learning and artificial intelligence, as well as related topics, to submit regular papers (14 pages, single column) or short papers (7 pages, single column) to the AIMLAI Workshop, to be held in conjunction with ECML/PKDD 2025.

Website: https://project.inria.fr/aimlai/
Submission link: https://cmt3.research.microsoft.com/ECMLPKDDWorkshopTrack2025/Submission/Index
Submission deadline: June 21, 2025 (extended deadline)
Workshop: September 15, 2025

AIMLAI (Advances in Interpretable Machine Learning and Artificial Intelligence) aims to foster principled research and discussion on building explainable machine learning and AI systems. We invite contributions from researchers in academia and industry that approach the challenges of achieving explainability and interpretability in AI systems from technical as well as legal, ethical, or sociological perspectives. This year, AIMLAI will feature a keynote talk by Cynthia Rudin.

Submissions may include:
• Novel research results
• Application experiences
• Tools and systems
• Preliminary ideas with promising potential

Topics of Interest (non-exhaustive)

Interpretable Machine Learning
• Interpretable-by-design models
• Explainable recommendation systems
• Multimodal explanations
• Explainability for large language models (LLMs)
• LLMs as tools for explainability
• Mechanistic interpretability

Transparency, Ethics, Fairness and Privacy
• Ethical implications of AI/ML
• Legal frameworks and compliance
• Fairness and bias mitigation
• Interplay of explainability and privacy

Methodology & Evaluation
• Formal measures of interpretability and explainability
• Trade-offs between interpretability and model complexity
• Evaluation frameworks
• User-centered interpretability

Explanation Modules & Human Integration
• Semantics in explanations
• Human-in-the-loop systems
• Combining ML, information visualization, and human-computer interaction

We look forward to your contributions and to engaging discussions at AIMLAI 2025!