Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs

Title: Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs
Publication Type: Conference Paper
Year of Publication: 2024
Authors: Mikriukov, G, Schwalbe, G, Motzkus, F, Bade, K
Editors: Longo, L, Lapuschkin, S, Seifert, C
Conference Name: Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part I
Series: Communications in Computer and Information Science
Volume: 2153
Publisher: Springer
Abstract:

Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks. While the impact of these attacks on model predictions has been extensively studied, their effect on the learned representations and concepts within these models remains largely unexplored. In this work, we perform an in-depth analysis of the influence of AAs on the concepts learned by convolutional neural networks (CNNs) using eXplainable artificial intelligence (XAI) techniques. Through an extensive set of experiments across various network architectures and targeted AA techniques, we unveil several key findings. First, AAs induce substantial alterations in the concept composition within the feature space, introducing new concepts or modifying existing ones. Second, the adversarial perturbation operation itself can be linearly decomposed into a global set of latent vector components, with a subset of these being responsible for the attack’s success. Notably, we discover that these components are target-specific, i.e., are similar for a given target class throughout different AA techniques and starting classes. Our findings provide valuable insights into the nature of AAs and their impact on learned representations, paving the way for the development of more robust and interpretable deep learning models, as well as effective defenses against adversarial threats.

URL: https://doi.org/10.1007/978-3-031-63787-2_6
DOI: 10.1007/978-3-031-63787-2_6
BibTeX:
@inproceedings{1471,
	title = {Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs},
	booktitle = {Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part I},
	series = {Communications in Computer and Information Science},
	volume = {2153},
	year = {2024},
	publisher = {Springer},
	organization = {Springer},
	abstract = {Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks. While the impact of these attacks on model predictions has been extensively studied, their effect on the learned representations and concepts within these models remains largely unexplored. In this work, we perform an in-depth analysis of the influence of AAs on the concepts learned by convolutional neural networks (CNNs) using eXplainable artificial intelligence (XAI) techniques. Through an extensive set of experiments across various network architectures and targeted AA techniques, we unveil several key findings. First, AAs induce substantial alterations in the concept composition within the feature space, introducing new concepts or modifying existing ones. Second, the adversarial perturbation operation itself can be linearly decomposed into a global set of latent vector components, with a subset of these being responsible for the attack{\textquoteright}s success. Notably, we discover that these components are target-specific, i.e., are similar for a given target class throughout different AA techniques and starting classes. Our findings provide valuable insights into the nature of AAs and their impact on learned representations, paving the way for the development of more robust and interpretable deep learning models, as well as effective defenses against adversarial threats.},
	doi = {10.1007/978-3-031-63787-2_6},
	url = {https://doi.org/10.1007/978-3-031-63787-2_6},
	author = {Georgii Mikriukov and Gesina Schwalbe and Franz Motzkus and Korinna Bade},
	editor = {Luca Longo and Sebastian Lapuschkin and Christin Seifert}
}