As AI adoption soars, so do concerns over data privacy, ethics, and regulatory compliance (e.g., GDPR, AI Act, CCPA, Bill C-27). Machine Unlearning (MU) emerges as a game-changer, enabling selective removal of learned information without costly retraining—mitigating biases, protecting sensitive data, and aligning AI with ethical standards.
WIPE-OUT (an ECML-PKDD workshop) brings together pioneers in MU to push the boundaries of privacy-preserving AI. From cutting-edge algorithms to real-world applications, we foster collaboration to tackle the legal, technical, and ethical challenges of unlearning. Join us in redefining the AI landscape! 🚀
All deadlines are 11:59 pm AoE.
The WIPE-OUT Workshop welcomes research and perspective contributions on all topics related to Machine Unlearning, across domains (e.g., finance, business, basic sciences, construction, computational advertising, medicine) and independent of data type (e.g., networks, tabular, unstructured, graphs, logs, spatiotemporal, multimedia, time series, genomic sequences, streaming data). Contributions can also include research or perspectives regarding the following:
Speaker's Bio: Gintarė is a senior research scientist at Google DeepMind, based in Toronto, an adjunct professor in the McGill University School of Computer Science, and an associate industry member of Mila, the Quebec AI Institute. Prior to joining Google, Gintarė led the Trustworthy AI program at Element AI / ServiceNow, and obtained her Ph.D. in machine learning from the University of Cambridge under the supervision of Zoubin Ghahramani. She was recognized as a Rising Star in Machine Learning by the University of Maryland program in 2019. Dziugaite is known for her work on network and data sparsity, developing algorithms and uncovering their effects on generalization and other metrics. She coined the term "linear mode connectivity" and carried out the first in-depth study connecting it to the existence of lottery tickets, loss landscapes, and the mechanism of iterative magnitude pruning. Another major focus of her research is understanding generalization in deep learning and, more generally, the development of information-theoretic methods for studying generalization. Her most recent work looks at removing the influence of data on the model (unlearning).
Abstract: Continual learning, model merging, and information unlearning address critical challenges in managing foundation models. In this talk I will present evidence that linear mode connectivity (LMC)—the lack of loss barriers on the linear path connecting two models—is a decisive factor in the success and failure of methods in these three areas. We demonstrate how LMC enables efficient, training-free model merging and continual learning through weight interpolation. Conversely, in unlearning, we show that model editing fails when the edited model is linearly connected to the original model. Attempts to "unlearn" information are often superficial, as the original knowledge remains easily recoverable along the same low-loss path, compromising the model's robustness and leaving it susceptible to relearning attacks. This work highlights a core trade-off in modern AI: the very properties that make models continually learn also make them difficult to truly and safely edit.
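To make the LMC notion in the abstract concrete, here is a minimal, hypothetical sketch (not from the talk) of measuring the loss barrier along the linear path between two model weight vectors. It uses toy least-squares models standing in for two training runs; all names and data are illustrative assumptions.

```python
import numpy as np

# Toy regression problem standing in for a training task.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

def loss(w):
    """Mean squared error of weight vector w on the toy task."""
    return float(np.mean((X @ w - y) ** 2))

# Two "trained" models: the least-squares solution plus small independent
# perturbations, a stand-in for two runs that land in the same loss basin.
w_star = np.linalg.lstsq(X, y, rcond=None)[0]
w_a = w_star + 0.05 * rng.normal(size=5)
w_b = w_star + 0.05 * rng.normal(size=5)

def barrier(w1, w2, steps=21):
    """Height of the loss barrier on the linear path from w1 to w2:
    max over alpha of loss((1-a)w1 + a*w2) minus the linear interpolation
    of the endpoint losses. Near zero means linear mode connectivity."""
    alphas = np.linspace(0.0, 1.0, steps)
    path = [loss((1 - a) * w1 + a * w2) for a in alphas]
    endpoints = [(1 - a) * path[0] + a * path[-1] for a in alphas]
    return max(p - e for p, e in zip(path, endpoints))

print(f"barrier height: {barrier(w_a, w_b):.4f}")
```

With a convex toy loss the barrier is essentially zero, so averaging the two weight vectors (training-free merging) also yields low loss; for deep networks the barrier must be measured empirically and is generally nonzero across independent runs.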
Speaker's Bio: Eleni is a senior research scientist at Google DeepMind. Her research revolves around designing methods for efficient adaptation of deep neural networks to cope with distribution shifts, rapidly learn new tasks, and unlearn targeted knowledge or the effect of specific training data. She obtained her PhD from the University of Toronto with Professors Rich Zemel and Raquel Urtasun, where she worked on few-shot learning and meta-learning. She was the recipient of the NSERC Alexander Graham Bell Canada Graduate Scholarship (Doctoral). Over the past few years, she has focused on machine unlearning, notably serving as lead organizer of the first machine unlearning competition, held at NeurIPS 2023.
Abstract: Despite a plethora of recently proposed unlearning algorithms, we still lack a comprehensive understanding of their behaviours and failure modes, in part because rigorous and comprehensive evaluation is challenging and expensive. In this talk, we will present insights derived through empirical investigations in small models, designed to answer fundamental questions about interpretable factors influencing the behaviours of unlearning algorithms. Are all examples equally easy or hard to unlearn? Does that notion of difficulty transfer across algorithms? How well can different unlearning algorithms unlearn a concept when only a subset of it is identified? What criteria govern whether (believed-to-be) unlearned information may resurface through further finetuning? Which algorithms suffer more from these attacks, and why? What are the resulting trade-offs? For each of these questions, we will discuss how insights obtained from empirical analyses lead to improved unlearning algorithms.
The workshop post-proceedings will be managed by the Proceedings Chairs of the Conference (ecml-pkdd-2025-proceedings-chairs@googlegroups.com) and will be included in joint post-workshop proceedings published by Springer in the Communications in Computer and Information Science series, in 1-2 volumes organized by focused scope. Paper authors will have the option to opt in or opt out. We suggest that workshop papers be prepared and submitted in the LNCS format (https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines).
Please be aware that at least one author of each accepted paper must register for the conference by the early registration deadline. The required minimum registration per accepted paper is one full 5-day ticket, standard or student. More info at the ECML-PKDD Registration Page.
We expect the authors, the program committee, and the organizing committee to adhere to the ECML-PKDD Code of Conduct.
The Main Conference organization team will manage the registration: https://ecmlpkdd.org/2025/
For general inquiries about the workshop, please email andrea.dangelo6@graduate.univaq.it, claudio.savelli@polito.it, flavio.giobergia@polito.it, gullof@acm.org, and giovanni.stilo@univaq.it.