<div dir="ltr"><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">Dear Colleagues,</p><p aria-hidden="true" style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)"> </p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">We invite you to submit your research to The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2025),  co-located with the 18th IEEE International Conference on Software Testing, Verification and Validation (ICST 2025), which will be held in Naples, Italy (Mon 31 March - Fri 4 April 2025).</p><p aria-hidden="true" style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)"> </p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">Machine Learning (ML) models are becoming deeply integrated into our daily lives, with their use expected to expand even further in the coming years. However, as these models grow in importance, potential vulnerabilities — such as biased decision-making and privacy breaches — could result in serious unintended consequences.</p><p aria-hidden="true" style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)"> </p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2025) aims to bring together experts from industry and academia, with software testing and ML backgrounds, to discuss and address these challenges. The focus will be on innovative methods and tools to ensure correctness, robustness, security, fairness of ML models and in decentralized learning schemes.</p><p aria-hidden="true" style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)"> </p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">Topics of the workshop will cover, but are not limited to:</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)"><br></p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Privacy preservation of ML models;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Adversarial robustness in ML models;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Security of ML models against poisoning attacks;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Ensuring fairness and mitigating bias in ML models;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Unlearning algorithms in ML;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Unlearning algorithms in decentralized learning schemes, such as Federated Learning (FL), gossip learning and split learning;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Secure aggregation in FL;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Robustness of FL models against malicious clients or model inversion attacks;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Fault tolerance and 
resilience to client dropouts in FL;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Secure model updates in FL;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Proof of client participation in FL,</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              Explainability and interpretability of ML algorithms;</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">•              ML accountability.    </p><p aria-hidden="true" style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)"> </p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">Important Dates</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)"><b>Paper Submission</b>: 3rd January AoE, 2025 (<b>Extension </b>10th January AoE, 2025)</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)"><b>Decision Notification</b>: 3rd of February, 2025    </p><p aria-hidden="true" style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)"> </p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">For further details and submission instructions, please visit the workshop website: <a href="https://conf.researchr.org/home/icst-2025/safe-ml-2025" title="https://conf.researchr.org/home/icst-2025/safe-ml-2025" id="m_-7320057530946752710gmail-LPlnk260934" target="_blank" style="color:rgb(5,99,193);border:0px;font:inherit;margin:0px;padding:0px;vertical-align:baseline">https://conf.researchr.org/home/icst-2025/safe-ml-2025</a></p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">     </p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">Best regards,</p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">Carlo Mazzocca, Alessio Mora, Rebecca Montanari, Stefano Russo, Selcuk Uluagac </p><p style="margin:0cm;font-size:11pt;font-family:Calibri,sans-serif;color:rgb(36,36,36)">SAFE-ML Chairs</p></div>