Survey on Graph Counterfactual Explainability Accepted in ACM CSUR

September 02, 2023

Embarking on a research journey is akin to setting sail on uncharted waters. It’s a path that can be long, challenging, yet incredibly meaningful. Today, I want to share with you the incredible voyage I’ve had the privilege to undertake, delving deep into the realms of Graph Neural Networks (GNNs) and Counterfactual Explanations (CE).

It was undoubtedly a hard and long journey, but its profound meaning and potential impact made it a voyage worth every challenge encountered. This journey was far from a solo endeavor. Traveling with Mario Alfonso Prado-Romero, Bardh Prenkaj, and Fosca Giannotti was a pleasure beyond words, and I eagerly anticipate future adventures in the ever-evolving landscape of GNNs and CE. Their expertise and camaraderie added an invaluable dimension to the exploration, and it’s an experience I genuinely hope to replicate in the future. Together, we’ll continue to illuminate the path toward transparency and understanding in the world of AI and machine learning.

The detailed travel report is accessible here.

Use the following BibTeX to cite our paper.

    @article{10.1145/3618105,
    author = {Prado-Romero, Mario Alfonso and Prenkaj, Bardh and Stilo, Giovanni and Giannotti, Fosca},
    title = {A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges},
    year = {2023},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    issn = {0360-0300},
    url = {},
    doi = {10.1145/3618105},
    journal = {ACM Computing Surveys},
    month = {sep}
    }

Charting the Course (GNNs and CE)

Our journey commenced with building a solid understanding of Graph Neural Networks (GNNs). Counterfactual Explanations (CE) serve as a torchbearer, illuminating the black-box nature of GNNs and enhancing their transparency: a counterfactual explanation identifies a minimal change to the input graph that would alter the model's prediction.
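To make the idea concrete, here is a minimal, self-contained sketch of a counterfactual search. The classifier below is a toy stand-in for a trained GNN (it simply checks whether the graph contains a triangle), and the search strategy, exhaustive edge deletion, is only one of many approaches surveyed in the paper; all names here are illustrative, not part of any specific method.

```python
import itertools

def predict(edges):
    """Toy 'black-box' classifier: 1 if the graph has a triangle, else 0.
    A hypothetical stand-in for a trained GNN."""
    nodes = {u for e in edges for u in e}
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for a, b, c in itertools.combinations(sorted(nodes), 3):
        if b in adj[a] and c in adj[a] and c in adj[b]:
            return 1
    return 0

def counterfactual(edges):
    """Find a minimal set of edge deletions that flips the prediction."""
    original = predict(edges)
    # Try removing increasingly large subsets of edges (smallest first),
    # so the first flip found is a minimal counterfactual.
    for k in range(1, len(edges) + 1):
        for removed in itertools.combinations(edges, k):
            candidate = [e for e in edges if e not in removed]
            if predict(candidate) != original:
                return candidate, list(removed)
    return None, []

graph = [(0, 1), (1, 2), (0, 2), (2, 3)]  # contains triangle 0-1-2
cf, removed = counterfactual(graph)
# Deleting a single triangle edge flips the prediction from 1 to 0.
```

Real CE methods for GNNs replace the brute-force search with learned or heuristic perturbation strategies, since enumerating edge subsets is exponential in graph size.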

The Knowledge Lighthouse (State-of-the-Art Analysis)

To navigate this complex landscape, we embarked on a comprehensive analysis of the State of the Art (SoA). We surveyed existing research and methodologies, seeking to map the uncharted territories of CE for GNNs.

Equipping for the Journey (Tools and Resources)

We equipped ourselves with essential tools, including a meticulously crafted taxonomy and a uniform notation of our own design. These resources served as our compass and map.

Setting the Course (Benchmarking Datasets and Metrics)

We understood the need for standardized evaluation. Thus, we diligently organized benchmarking datasets and evaluation metrics.

Exploring Uncharted Waters (Methods, Datasets, and Metrics)

Our voyage was marked by the exploration of fourteen different CE methods, each with its own evaluation protocol; we delved deep, considering twenty-two datasets and nineteen metrics.

The GRETEL Library (Our Port of Call)

In our quest to make our findings accessible to fellow explorers, we integrated the majority of these methods into the GRETEL library.

Putting Theory to the Test (Empirical Evaluation)

To truly understand the strengths and pitfalls of CE methods for GNNs, we conducted empirical evaluations. This hands-on approach allowed us to see how these methods performed in practice.

Charting New Horizons (Open Challenges and Future Work)

We’ve identified open challenges and areas for future research, signifying that our expedition is far from over.

Back to all news …