Seminar on Explainable Artificial Intelligence (FSS 2022)
With the superior results achieved by black-box machine learning techniques, in particular neural networks, there has been an increasing demand for understanding how an artificial intelligence (AI) system arrives at its results and decisions. This demand sparked the field of explainable artificial intelligence (also known as explainable AI or XAI).
Goals
In this seminar, you will familiarize yourself with recent advancements in the field of explainable artificial intelligence. You will read research papers, conduct your own experiments where applicable, and discuss your insights with the other seminar participants.
As a participant, you will introduce a particular XAI technique to the other seminar participants. Each seminar paper undergoes a peer review process within the seminar. Presentations should be about 25 minutes long.
Organization
This seminar is organized by Prof. Dr. Heiko Paulheim
Available for Master students (2 SWS, 4 ECTS)
Prerequisites: none
Schedule
For now, we are planning an on-campus seminar.
- February 24th, 13:45–15:15: kick-off meeting, B6 23-25, A302; Slides (PDF, 3 MB)
- March 1st: topic assignment
- March 30th: paper draft due
- April 14th: peer review due
- April 28th, May 5th, 12th, 19th: 13:45–17:00 presentations and discussions (see schedule below), B6 23-25, A303
- June 18th: final paper due
Registration
- Registration will be available in Portal2
- Note that we are planning to conduct this seminar as an in-person event, so you should only select it if you plan to be in Mannheim during the spring term
- After the kick-off session, please send a ranked list of three topics you would like to read about and present in the seminar to Bianca Lermer
- The final assignment of topics will be made after the kick-off meeting
Presentation Schedule
There will be four dates with 2–3 presentations each:
- 28.4.: (1) Surrogate Models, (2) Model Distillation, (3) Local Interpretable Model-agnostic Explanations (LIME)
- 5.5.: (1) Rule Extraction from Neural Networks, (2) Layer-wise Relevance Propagation
- 12.5.: (1) Feature Importance, (2) Shapley Values
- 19.5.: (1) Saliency Maps, (2) Counterfactual Explanations, (3) Prototypes and Criticisms
Topics
Note: the topic list below contains one literature pointer per topic. These papers are examples, not an exhaustive bibliography; it is part of your task to collect further papers on your topic.
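Since participants are also expected to conduct their own experiments where applicable, the following minimal sketch illustrates what such an experiment could look like for one of the listed topics, LIME. The choice of dataset, model, and packages (`scikit-learn` and the third-party `lime` package) is an illustrative assumption, not part of the seminar material:

```python
# Minimal sketch of a LIME experiment; dataset and model choices are
# illustrative assumptions, not prescribed by the seminar.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Train a simple black-box model.
data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42
)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# LIME fits an interpretable surrogate model locally around one
# instance and reports which features drive that single prediction.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output lists discretized feature conditions with their local weights, i.e., the local surrogate's view of why the model predicted what it did for this one instance.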
Recommended Reading
The following articles provide introductions to and surveys of the field and are recommended reading for all seminar participants:
- Amina Adadi and Mohammed Berrada: Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018
- Plamen P. Angelov et al.: Explainable artificial intelligence: an analytical review, 2021
- Vanessa Buhrmester et al.: Analysis of explainers of black box deep neural networks for computer vision: A survey, 2021
- Miruna A. Clinciu and Helen F. Hastie: A Survey of Explainable AI Terminology, 2019
- Roberto Confalonieri et al.: A historical perspective of explainable Artificial Intelligence, 2020
- Derek Doran et al.: What Does Explainable AI Really Mean? A New Conceptualization of Perspectives, 2017
- Finale Doshi-Velez and Been Kim: Towards a Rigorous Science of Interpretable Machine Learning, 2017
- Randy Goebel et al.: Explainable AI: The New 42?, 2018
- Zachary C. Lipton: The Mythos of Model Interpretability, 2016
- Gabriëlle Ras et al.: Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges, 2018