Connectionists: [SAFE-ML 2025] - Call for Papers: The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning, co-located with the 18th IEEE International Conference on Software Testing, Verification and Validation (ICST)

Carlo MAZZOCCA cmazzocca at unisa.it
Fri Dec 20 04:22:54 EST 2024


Dear Colleagues,



We invite you to submit your research to the 1st International Workshop on
Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2025),
co-located with the 18th IEEE International Conference on Software
Testing, Verification and Validation (ICST 2025), to be held in
Naples, Italy, from Monday 31 March to Friday 4 April 2025.



Machine Learning (ML) models are becoming deeply integrated into our daily
lives, with their use expected to expand even further in the coming years.
However, as these models grow in importance, potential vulnerabilities —
such as biased decision-making and privacy breaches — could result in
serious unintended consequences.



The 1st International Workshop on Secure, Accountable, and Verifiable
Machine Learning (SAFE-ML 2025) aims to bring together experts from
industry and academia, with backgrounds in software testing and ML, to
discuss and address these challenges. The focus will be on innovative
methods and tools for ensuring the correctness, robustness, security,
and fairness of ML models, both in standalone settings and in
decentralized learning schemes.



Topics of interest include, but are not limited to:


• Privacy preservation of ML models;
• Adversarial robustness in ML models;
• Security of ML models against poisoning attacks;
• Ensuring fairness and mitigating bias in ML models;
• Unlearning algorithms in ML;
• Unlearning algorithms in decentralized learning schemes, such as
  Federated Learning (FL), gossip learning, and split learning;
• Secure aggregation in FL;
• Robustness of FL models against malicious clients or model inversion
  attacks;
• Fault tolerance and resilience to client dropouts in FL;
• Secure model updates in FL;
• Proof of client participation in FL;
• Explainability and interpretability of ML algorithms;
• ML accountability.



Important Dates

Paper Submission: 3 January 2025, AoE (extended to 10 January 2025, AoE)

Decision Notification: 3 February 2025



For further details and submission instructions, please visit the workshop
website: https://conf.researchr.org/home/icst-2025/safe-ml-2025



Best regards,

Carlo Mazzocca, Alessio Mora, Rebecca Montanari, Stefano Russo, Selcuk
Uluagac

SAFE-ML Chairs