Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs
Title | Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs |
Publication Type | Conference Paper |
Year of Publication | 2024 |
Authors | Mikriukov, G, Schwalbe, G, Motzkus, F, Bade, K |
Editor | Longo, L, Lapuschkin, S, Seifert, C |
Conference Name | Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part I |
Series | Communications in Computer and Information Science |
Volume | 2153 |
Publisher | Springer |
Abstract | Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks. While the impact of these attacks on model predictions has been extensively studied, their effect on the learned representations and concepts within these models remains largely unexplored. In this work, we perform an in-depth analysis of the influence of AAs on the concepts learned by convolutional neural networks (CNNs) using eXplainable artificial intelligence (XAI) techniques. Through an extensive set of experiments across various network architectures and targeted AA techniques, we unveil several key findings. First, AAs induce substantial alterations in the concept composition within the feature space, introducing new concepts or modifying existing ones. Second, the adversarial perturbation operation itself can be linearly decomposed into a global set of latent vector components, with a subset of these being responsible for the attack’s success. Notably, we discover that these components are target-specific, i.e., are similar for a given target class throughout different AA techniques and starting classes. Our findings provide valuable insights into the nature of AAs and their impact on learned representations, paving the way for the development of more robust and interpretable deep learning models, as well as effective defenses against adversarial threats. |
URL | https://doi.org/10.1007/978-3-031-63787-2_6 |
DOI | 10.1007/978-3-031-63787-2_6 |
@inproceedings{1471,
  title     = {Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs},
  booktitle = {Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part I},
  series    = {Communications in Computer and Information Science},
  volume    = {2153},
  year      = {2024},
  publisher = {Springer},
  abstract  = {Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks. While the impact of these attacks on model predictions has been extensively studied, their effect on the learned representations and concepts within these models remains largely unexplored. In this work, we perform an in-depth analysis of the influence of AAs on the concepts learned by convolutional neural networks (CNNs) using eXplainable artificial intelligence (XAI) techniques. Through an extensive set of experiments across various network architectures and targeted AA techniques, we unveil several key findings. First, AAs induce substantial alterations in the concept composition within the feature space, introducing new concepts or modifying existing ones. Second, the adversarial perturbation operation itself can be linearly decomposed into a global set of latent vector components, with a subset of these being responsible for the attack's success. Notably, we discover that these components are target-specific, i.e., are similar for a given target class throughout different AA techniques and starting classes. Our findings provide valuable insights into the nature of AAs and their impact on learned representations, paving the way for the development of more robust and interpretable deep learning models, as well as effective defenses against adversarial threats.},
  doi       = {10.1007/978-3-031-63787-2_6},
  url       = {https://doi.org/10.1007/978-3-031-63787-2_6},
  author    = {Georgii Mikriukov and Gesina Schwalbe and Franz Motzkus and Korinna Bade},
  editor    = {Luca Longo and Sebastian Lapuschkin and Christin Seifert}
}
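The abstract's second finding states that the adversarial perturbation operation can be linearly decomposed into a global set of latent vector components. The sketch below illustrates the general idea of such a decomposition, not the authors' actual method: it builds hypothetical latent activations for clean and adversarially perturbed samples, then extracts the principal directions of the perturbation via SVD. All names, shapes, and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Toy data: latent activations for clean inputs and their adversarial
# counterparts (hypothetical shapes; real CNN feature maps would be flattened).
rng = np.random.default_rng(0)
n_samples, latent_dim = 200, 64
clean = rng.normal(size=(n_samples, latent_dim))

# Simulate an attack whose latent effect is dominated by a few directions,
# mimicking the paper's finding of a small set of responsible components.
basis = rng.normal(size=(3, latent_dim))
coeffs = rng.normal(size=(n_samples, 3))
adversarial = clean + coeffs @ basis

# The perturbation operation in latent space.
delta = adversarial - clean

# Linear decomposition via SVD: principal components of the perturbations.
delta_centered = delta - delta.mean(axis=0)
_, singular_values, components = np.linalg.svd(delta_centered, full_matrices=False)

# Fraction of perturbation variance captured by each component.
explained = singular_values**2 / np.sum(singular_values**2)
print("variance explained by top 3 components:", explained[:3].sum())
```

Because the synthetic perturbations are constructed from three directions, the top three components recover essentially all of the perturbation variance; on real attack data one would inspect how concentrated this spectrum is and whether the leading components are shared across attack techniques for a given target class.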