Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?
Title | Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go? |
Publication Type | Journal Article |
Year of Publication | 2024 |
Authors | Lee, J H, Mikriukov, G, Schwalbe, G, Wermter, S, Wolter, D |
Journal | CoRR |
Volume | abs/2409.13456 |
Abstract | Concept-based XAI (C-XAI) approaches to explaining neural vision models are a promising field of research, since explanations that refer to concepts (i.e., semantically meaningful parts in an image) are intuitive to understand and go beyond saliency-based techniques that only reveal relevant regions. Given the remarkable progress in this field in recent years, it is time for the community to take a critical look at the advances and trends. Consequently, this paper reviews C-XAI methods to identify interesting and underexplored areas and proposes future research directions. To this end, we consider three main directions: the choice of concepts to explain, the choice of concept representation, and how we can control concepts. For the latter, we propose techniques and draw inspiration from the field of knowledge representation and learning, showing how this could enrich future C-XAI research. |
URL | https://doi.org/10.48550/arXiv.2409.13456 |
DOI | 10.48550/ARXIV.2409.13456 |
Bibtex:
@article{1473,
  title = {Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?},
  journal = {CoRR},
  volume = {abs/2409.13456},
  year = {2024},
  abstract = {Concept-based XAI (C-XAI) approaches to explaining neural vision models are a promising field of research, since explanations that refer to concepts (i.e., semantically meaningful parts in an image) are intuitive to understand and go beyond saliency-based techniques that only reveal relevant regions. Given the remarkable progress in this field in recent years, it is time for the community to take a critical look at the advances and trends. Consequently, this paper reviews C-XAI methods to identify interesting and underexplored areas and proposes future research directions. To this end, we consider three main directions: the choice of concepts to explain, the choice of concept representation, and how we can control concepts. For the latter, we propose techniques and draw inspiration from the field of knowledge representation and learning, showing how this could enrich future C-XAI research.},
  doi = {10.48550/ARXIV.2409.13456},
  url = {https://doi.org/10.48550/arXiv.2409.13456},
  author = {Jae Hee Lee and Georgii Mikriukov and Gesina Schwalbe and Stefan Wermter and Diedrich Wolter}
}