Counterfactual Attacks with Semantic Guidance

Overview

Explainable AI (XAI) seeks to tackle the opacity of deep neural network decisions. Moving beyond the conventional focus on 2D imagery ([1], [2]), our research introduces the first method for generating Counterfactual Explanations (CEs) for 3D point cloud classifiers. In particular, we are the first to propose creating counterfactuals with explicit semantic guidance.
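
The concrete optimizer and guidance terms are the subject of the project itself; still, as a minimal sketch of the general gradient-based CE recipe for point clouds, the PyTorch snippet below optimizes an additive perturbation that flips a toy classifier's prediction while staying close to the input. All names here (TinyPointNet, counterfactual, lam) are hypothetical, and the simple L2 proximity term merely stands in for whatever distance and semantic terms the actual method would use.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in classifier: a tiny PointNet-style network
# (per-point MLP + global max pooling). Any differentiable point
# cloud classifier could take its place.
class TinyPointNet(torch.nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 128), torch.nn.ReLU(),
        )
        self.head = torch.nn.Linear(128, num_classes)

    def forward(self, x):                       # x: (B, N, 3)
        return self.head(self.mlp(x).max(dim=1).values)

def counterfactual(model, points, target_class, steps=200, lr=1e-2, lam=1.0):
    """Optimize an additive perturbation so the classifier predicts
    target_class while the edited cloud stays close to the original."""
    delta = torch.zeros_like(points, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        logits = model(points + delta)
        flip_loss = F.cross_entropy(logits, target)   # push toward the target class
        prox_loss = delta.pow(2).mean()               # keep the edit minimal
        loss = flip_loss + lam * prox_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (points + delta).detach()

model = TinyPointNet()
cloud = torch.randn(1, 1024, 3)                  # one random cloud of 1024 points
cf = counterfactual(model, cloud, target_class=3)
print(model(cf).argmax(dim=-1))                  # ideally reports class 3
```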

Objectives

  • Enhance the performance of the semantic guidance by developing a new optimization algorithm (one illustrative form of such guidance is sketched after this list).
  • Extend the semantic guidance to a wider range of counterfactual generators.
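
As context for these objectives, one plausible form of semantic guidance (an assumption on our part, not necessarily the project's actual mechanism) is to confine the counterfactual edit to a single semantic part of the shape. The helper below, with hypothetical names mask_perturbation, part_labels, and editable_part, sketches this in PyTorch.

```python
import torch

# Purely illustrative: confine the counterfactual edit to one semantic
# part of the shape via a hypothetical per-point part labeling.
def mask_perturbation(delta, part_labels, editable_part):
    """Zero the perturbation on points outside the chosen part.
    delta: (B, N, 3); part_labels: (B, N) integer part ids."""
    mask = (part_labels == editable_part).unsqueeze(-1).float()
    return delta * mask

# Example: only points labeled as part 2 (say, a chair's backrest)
# are allowed to move during the optimization.
delta = torch.randn(1, 1024, 3)
part_labels = torch.randint(0, 4, (1, 1024))
masked = mask_perturbation(delta, part_labels, editable_part=2)
```

Inside an optimization loop such as the one sketched in the Overview, `points + delta` would then become `points + mask_perturbation(delta, part_labels, editable_part)`.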

Prerequisites

  • Python + PyTorch proficiency
  • Experience with Kubernetes + Docker
  • Knowledge of deep learning pipelines for 3D generation (VAEs, GANs, diffusion models, etc.), especially for point cloud data.

Contact

References

[1] Rodriguez, P., Caccia, M., Lacoste, A., Zamparo, L., Laradji, I., Charlin, L., & Vazquez, D. (2021). Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations. arXiv preprint arXiv:2103.10226.
[2] Zemni, M., Chen, M., Zablocki, É., Ben-Younes, H., Pérez, P., & Cord, M. (2023). OCTET: Object-aware Counterfactual Explanations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).