
CS 717: Seminar on Explainable AI Methods

This seminar will cover Explainable AI methods, with particular emphasis on Concept Bottleneck Models (CBMs), their architecture, and practical applications.
It will also explore how CBMs enhance interpretability by linking high-level human concepts with machine learning predictions.
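To make the bottleneck idea concrete, here is a minimal sketch of a CBM, using hypothetical hand-set weights purely for illustration: the input is first mapped to human-interpretable concept scores, and the final label is predicted only from those concepts, so a practitioner can inspect (or even edit) the intermediate concept values.

```python
# Minimal Concept Bottleneck Model sketch (illustrative, not a real
# trained model): input features -> concept scores -> label.

def predict_concepts(x, concept_weights):
    """Map raw features to concept scores (the 'bottleneck')."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in concept_weights]

def predict_label(concepts, label_weights):
    """Predict the label purely from the concept scores."""
    score = sum(w * c for w, c in zip(label_weights, concepts))
    return 1 if score > 0 else 0

# Hypothetical setup: 3 input features, 2 named concepts
# (e.g. "has wings", "has beak"); weights are made up for the example.
concept_weights = [[1.0, 0.0, -1.0],
                   [0.0, 1.0, 0.5]]
label_weights = [1.0, 1.0]  # label fires if the concepts jointly fire

x = [2.0, 1.0, 0.0]
concepts = predict_concepts(x, concept_weights)  # inspectable intermediate
label = predict_label(concepts, label_weights)
print(concepts, label)  # prints [2.0, 1.0] 1
```

Because the label depends only on the concept layer, an incorrect prediction can be traced back to (and corrected at) the concept it came from, which is the interpretability property the seminar papers build on.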

Organization

  • This seminar is organized by Prof. Dr.-Ing. Margret Keuper.
  • Prerequisites: Background in Machine Learning.
  • The maximum number of participants is six Master's students.

Goals

In this seminar, you will

  • Read, understand, and explore scientific literature
  • Summarize a current research topic in a concise report (10 single-column pages + references)
  • Give two presentations about your topic (3 minutes flash presentation, 15 minutes final presentation)
  • Moderate a scientific discussion about the topic of one of your fellow students
  • Review a (draft of a) report of a fellow student

Registration

Please register via Portal2 and email your list of preferred topics (listed below; at least four choices) to Mishal Fatima at mishal.fatima@uni-mannheim.de by February 24. If you do not provide your preferences by the deadline, we will assign a topic to you randomly.

The actual topic assignment will take place shortly afterward, and we will notify you via email.

Our goal is to assign you one of your preferred topics. Please note that topics are allocated on a first-come, first-served basis.

Kick-Off Meeting

The kick-off meeting will take place on February 24, 2026, at 17:15.

Seminar Schedule

Topics

Each student works on a topic within the area of the seminar, along with an accompanying reference paper. Your presentation and report should explore the topic with an emphasis on the reference paper, but should not be limited to it.

We provide example topics and reference papers below.

Topic List:

[1] Hybrid Concept Bottleneck Models

[2] Post-hoc Concept Bottleneck Models

[3] Editable Concept Bottleneck Models

[4] Semi-supervised Concept Bottleneck Models

[5] Multimodal Concept Bottleneck Models

[6] TabCBM: Concept-based Interpretable Neural Networks for Tabular Data

[7] Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models

[8] Partially Shared Concept Bottleneck Models