SM 459: Seminar on Explainable AI Methods (HWS 2026)

This seminar will cover classical attribution methods in Explainable AI, including techniques such as saliency maps, feature importance, and gradient-based explanations.
It will focus on how these methods help interpret and analyze the decision-making process of machine learning models.
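To make the idea of a gradient-based explanation concrete, here is a minimal sketch for a hypothetical toy model (not from the seminar materials): for a linear scorer f(x) = w·x, the gradient of the score with respect to each input feature is simply the corresponding weight, so the absolute gradient |w_i| serves as a per-feature saliency value. Deep models replace this analytic gradient with one computed by backpropagation, but the interpretation is the same.

```python
def saliency(weights, x):
    """Gradient-based saliency for the toy linear model f(x) = sum(w_i * x_i).

    The gradient df/dx_i equals w_i regardless of x, so the saliency
    map is the elementwise absolute value of the weight vector.
    """
    return [abs(w) for w in weights]


# Feature 2 has the largest-magnitude weight, so it is marked most salient.
w = [0.5, -2.0, 0.1]
x = [1.0, 1.0, 1.0]
print(saliency(w, x))  # -> [0.5, 2.0, 0.1]
```

For nonlinear networks, methods such as Grad-CAM (topic [1] below) refine this basic recipe by weighting intermediate feature maps with backpropagated gradients rather than attributing directly to input pixels.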

Organization

  • This seminar is organized by Prof. Dr.-Ing. Margret Keuper
  • Open to Bachelor students
  • Prerequisites: basic understanding of machine learning
  • The maximum number of participants is 6 students

Goals

In this seminar, you will

  • Read, understand, and discuss a foundational topic in computer vision
  • Summarize this topic in a concise report (10 single-column pages + references)
  • Give two presentations about your topic (3 minutes flash presentation, 15 minutes presentation)
  • Moderate a scientific discussion about the topic of one of your fellow students
  • Review a (draft of a) report of a fellow student

Kick-Off Meeting

The kick-off meeting will take place on February 24, 2026, at 17:15.

Registration

Please register via Portal2 and email your list of at least four preferred papers (from the list below) by February 17 to Mishal Fatima at mishal.fatima@uni-mannheim.de. If you do not provide your preferences by the deadline, we will assign a topic randomly.

The actual topic assignment will take place shortly afterward, and we will notify you via email.

Our goal is to assign one of your preferred topics. Please note that preferences will be allocated on a first-come, first-served basis.

Seminar Schedule

Topics

Each student works on a topic within the area of the seminar, along with an accompanying reference paper. Your presentation and report should explore the topic with an emphasis on, but not limited to, the reference paper.

We provide example topics and reference papers below.

[1] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization

[2] Visualizing and Understanding Convolutional Networks

[3] This Looks Like That: Deep Learning for Interpretable Image Recognition

[4] On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation

[5] Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks

[6] B-cos Networks: Alignment is All We Need for Interpretability