WIPE-OUT 2026 (Workshop on Machine Unlearning and Privacy Preservation) is an academic workshop dedicated to machine unlearning — the process of selectively removing the influence of specific training data from machine learning models without retraining from scratch. The workshop is co-located with ECML-PKDD 2026, one of Europe’s leading machine learning conferences, and will be held on September 7, 2026 in Naples, Italy.
Direct New Submission Link: WIPE-OUT@CMT - Deadline June 5, 2026
As AI adoption soars, so do concerns over data privacy, ethics, and regulatory compliance (e.g., GDPR, AI Act, CCPA, Bill C-27). Machine unlearning emerges as a critical capability, enabling selective removal of learned information without costly retraining — mitigating biases, protecting sensitive data, and aligning AI with ethical standards. Unlike differential privacy, which limits what a model learns during training, machine unlearning addresses the post-hoc removal of data influence, making it essential for enforcing the right to be forgotten in deployed AI systems.
Building on the success of the first edition at ECML-PKDD 2025 in Porto (~50 participants, keynotes by Google DeepMind researchers), this second edition explores cutting-edge methodologies, theoretical advancements, and practical applications of machine unlearning and related topics, including privacy preservation, that align with evolving regulatory standards and societal expectations. We especially encourage submissions from researchers working on unlearning theory, LLMs and foundation models, computer vision, recommender systems, privacy and compliance, and scalable systems — connecting these communities around the shared challenge of selective data removal.
What makes WIPE-OUT different: unlike traditional workshops, WIPE-OUT features pitch-style paper presentations, head-to-head paper comparison debates, and a closing roundtable bridging academia and industry — designed to maximize interaction and constructive discussion.
All deadlines are 11:59 pm AoE.
This machine unlearning workshop welcomes contributions on all topics related to machine unlearning across domains (e.g., finance, business, basic sciences, construction, computational advertising, medical, etc.) and independent of data types (e.g., networks, tabular, unstructured, graphs, logs, spatiotemporal, multimedia, time series, genomic sequences, and streaming data). Contributions can also include research or perspectives regarding the following:
We plan to have up to two keynote speakers from leading institutions in Machine Unlearning and Privacy Preservation. The invited speakers will be announced shortly — stay tuned!
Speaker's Bio: Sijia Liu is a Red Cedar Distinguished Associate Professor in the Department of Computer Science and Engineering at Michigan State University (MSU), and an Affiliated Professor at the MIT-IBM Watson AI Lab, IBM Research. His research centers on scalable and trustworthy AI, such as machine unlearning for vision and language models, scalable optimization for deep models, adversarial robustness, and data–model efficiency. He has received numerous honors, including the NSF CAREER Award (2024), INNS Aharon Katzir Young Investigator Award (2024), MSU Withrow Rising Scholar Award (2025), Best Paper Runner-Up at UAI (2022), and Best Student Paper Award at ICASSP (2017). He also co-founded the New Frontiers in Adversarial Machine Learning Workshop series (ICML/NeurIPS 2021–2024).
WIPE-OUT 2026 will be a half-day event combining keynote presentations, thematic paper sessions with pitch-style talks, interactive “head-to-head” paper comparison debates, poster sessions, and a closing roundtable with experts from academia and industry.
From the submission system, select the “WIPE-OUT 2 Workshop on Machine Unlearning and Privacy Preservation” track to create a new submission.
We invite authors to submit unpublished, original papers written in English. Submitted papers should not have been previously published or accepted for publication in substantially similar form in any peer-reviewed venue, such as journals, conferences, or workshops. Papers should be prepared and submitted in the LNCS format using the Springer LNCS template.
We will consider three different submission types:
Appendices of unlimited length are accepted at submission time.
Submissions should not exceed the indicated pages, including any diagrams and references. All submissions will undergo a double-blind review process and be reviewed by at least three reviewers based on relevance to the workshop, novelty/originality, significance, technical quality and correctness, quality and clarity of presentation, quality of references, and reproducibility. Submitted papers will be rejected without review if they are not correctly anonymized, do not comply with the template, or do not follow the above guidelines.
Generative AI Usage Policy. Generative AI models, including ChatGPT, Bard, LLaMA, or similar LLMs, do not satisfy the criteria for authorship of papers accepted in the workshop. If authors use an LLM in any part of the paper-writing process, they assume full responsibility for all content, including checking for plagiarism and the correctness of all text.
Proceedings.
The accepted papers and the material generated during the meeting will be available on the workshop website. The workshop proceedings will be published as joint post-workshop proceedings in Springer's Communications in Computer and Information Science (CCIS), in 1-2 volumes organized by focused scope, indexed on Google Scholar, DBLP, and Scopus. Authors will have the option to opt in or out of the proceedings. The authors of selected papers may be invited to submit an extended version to a special journal issue.
Please be aware that at least one author of each accepted paper must register for the conference by the early registration deadline.
We expect the authors, the program committee, and the organizing committee to adhere to the ECML-PKDD Code of Conduct.
The Main Conference organization team will manage the registration: https://ecmlpkdd.org/2026/
Machine unlearning is the process of selectively removing the influence of specific training data points from a trained machine learning model, without retraining the model from scratch. It addresses the need to comply with data deletion regulations such as the GDPR's "right to be forgotten," to correct biases, and to remove sensitive or outdated information from deployed AI systems.
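To make the idea concrete, here is a minimal, hypothetical sketch (not drawn from the workshop materials) of exact unlearning for ridge regression: because the model has a closed-form solution built from per-point sufficient statistics, a single point's influence can be removed with a rank-one downdate of the Gram matrix, yielding exactly the model that retraining without that point would produce.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    """Fit ridge regression in closed form; return the sufficient
    statistics (A, b) alongside the weights so they can be downdated."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)  # regularized Gram matrix
    b = X.T @ y
    return A, b, np.linalg.solve(A, b)

def unlearn_point(A, b, x, y_val):
    """Remove one training point's contribution from (A, b) and
    re-solve -- equivalent to retraining without that point."""
    A2 = A - np.outer(x, x)
    b2 = b - y_val * x
    return A2, b2, np.linalg.solve(A2, b2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

A, b, w_full = fit_ridge(X, y)                       # train on all data
_, _, w_unlearned = unlearn_point(A, b, X[0], y[0])  # forget point 0
_, _, w_retrained = fit_ridge(X[1:], y[1:])          # gold-standard baseline

print(np.allclose(w_unlearned, w_retrained))  # True: exact unlearning
```

Such closed-form downdates exist only for simple models; for deep networks, approximate unlearning methods (a core workshop topic) trade this exactness for efficiency.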
A dedicated machine unlearning workshop like WIPE-OUT brings together researchers and practitioners focused specifically on data removal, privacy preservation, and model editing. Unlike broader machine learning conferences, the workshop offers focused discussions, peer review from domain experts, and networking with the growing machine unlearning community.
WIPE-OUT 2026 is a workshop co-located with the ECML-PKDD 2026 conference. While it is not a standalone conference, it functions as a dedicated machine unlearning venue within one of Europe's top machine learning conferences. Accepted papers are published in Springer CCIS post-proceedings (LNCS format) and indexed on Google Scholar, DBLP, and Scopus.
Differential privacy limits what a model can learn about individual data points during training by adding noise. Machine unlearning, by contrast, operates after training — it removes or reduces the influence of specific data that the model has already learned. The two techniques are complementary: differential privacy provides proactive protection, while machine unlearning enables reactive data removal.
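The proactive/reactive distinction can be illustrated with a hypothetical toy statistic (a bounded mean; the setup is illustrative, not from the workshop materials): differential privacy perturbs what is released, while unlearning recomputes the result as if the deleted record had never been included.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=100)  # records bounded in [0, 1]
eps = 1.0                               # privacy budget (illustrative)

# Differential privacy (proactive): release a noisy mean, with Laplace
# noise calibrated to the query's sensitivity (1/n for a bounded mean),
# limiting what the output reveals about any single record.
noisy_mean = data.mean() + rng.laplace(scale=(1.0 / len(data)) / eps)

# Machine unlearning (reactive): record 0 must be forgotten, so the
# statistic is recomputed without it, as if it were never collected.
unlearned_mean = data[1:].mean()
```

The DP release still reflects all 100 records (just noisily), whereas the unlearned result contains no trace of the deleted one; deployed systems may need both.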
The workshop covers theoretical foundations of machine unlearning, unlearning in large language models and foundation models, concept erasure in computer vision and generative models, unlearning in recommender systems, privacy-preserving techniques and regulatory compliance, scalable systems for efficient unlearning, evaluation metrics and benchmarks, and the ethical, legal, and societal implications of machine unlearning.
For general inquiries about the machine unlearning workshop, please email andrea.dangelo6@graduate.univaq.it, claudio.savelli@polito.it, flavio.giobergia@polito.it, gullof@acm.org, and gstilo@luiss.it.