Junior Research Group Correctable Hybrid Artificial Intelligence (chAI)
A project funded by the German Federal Ministry of Education and Research, funding code 01IS24058
Contact: Dr. Gesina Schwalbe (group lead)
Motivation, Background, and Problem Statement
Data-driven artificial intelligence (AI) methods, especially deep neural networks (DNNs), hold great potential for automating processes in safety-critical applications, for example camera-based environmental perception in robotics or autonomous driving. However, their safe application requires that they be verifiable and correctable. This applies in particular to symbolic knowledge, i.e., knowledge describable in natural or formal language, such as traffic rules ("If the light is red, stop") or known object relationships ("A head belongs to a person"). Unfortunately, current DNNs fail to meet these requirements: as large, opaque, purely statistical models they offer insufficient guarantees for verifying learned knowledge or for integrating and correcting symbolic knowledge.
Objective and Approach
The junior research group Correctable Hybrid Artificial Intelligence (chAI) aims to develop methods that allow symbolic knowledge to be (1) verified, (2) specifically corrected within the DNN (→ correctable), and (3) incorporated during training and design (→ hybrid). This will involve leveraging methods at the three intervention points described below.
Project Structure
The project will explore three different intervention points for integrating knowledge into a DNN across three doctoral research projects:
- Through information in the DNN’s intermediate outputs (e.g., “Does the DNN ‘know’ what a head is?”; see the probing sketch after this list),
- Through the internal structure of the DNN’s components and connections (e.g., “Where and how can traffic rules be represented in the connections?”), and
- Through hybridization with classical symbolic algorithms or knowledge bases (e.g., “How can a DNN serve as a language interface for classical robot planning?”).
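As an illustration of the first intervention point, the following is a minimal sketch of concept probing: checking whether an intermediate layer of a DNN encodes a concept such as “head” by fitting a linear probe on its activations. The backbone, the chosen layer, and the data loader are placeholder assumptions for illustration only and are not part of the project's codebase.

```python
# Minimal concept-probing sketch (illustrative assumptions, not project code):
# test whether an intermediate layer "knows" a concept such as "head".
import torch
import torchvision

# Any pretrained backbone would do; ResNet-18 is used here only as an example.
model = torchvision.models.resnet18(weights="DEFAULT").eval()

activations = {}

def hook(module, inputs, output):
    # Global-average-pool the intermediate feature map to one vector per image.
    activations["feat"] = output.mean(dim=(2, 3))

# Attach the hook to an intermediate layer of interest.
model.layer3.register_forward_hook(hook)

probe = torch.nn.Linear(256, 1)  # 256 = channel count of layer3 in ResNet-18
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = torch.nn.BCEWithLogitsLoss()

def train_probe(concept_batches):
    """concept_batches yields (images, labels) pairs, where labels[i] is 1
    if the concept (e.g., 'head') is present in image i, else 0."""
    for images, labels in concept_batches:
        with torch.no_grad():
            model(images)  # forward pass fills activations["feat"] via the hook
        logits = probe(activations["feat"]).squeeze(1)
        loss = loss_fn(logits, labels.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

If the trained probe classifies the concept well on held-out data, this is evidence that the chosen intermediate layer encodes that piece of symbolic knowledge and can serve as a hook for verification and correction.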
The Team
- Team Lead: Dr. Gesina Schwalbe
- PhD students:
- Raik Dankworth (since Nov 2024)
- Sparsh Tiwari (since Oct 2024)
- upcoming
- Student assistants:
- Marvin Keller (since Feb 2025)
- Xaver Pilgrimm (since Mar 2025)
- Martin Stuwe (since Mar 2025)