CS 707: Data and Web Science Seminar (HWS 2020)

The Data and Web Science seminar covers recent topics in data and web science. This term the seminar focuses on Explainable AI (XAI).

Organization

  • This seminar is organized by Prof. Dr. Rainer Gemulla and Daniel Ruffinelli.
  • Available for Master students (2 SWS, 4 ECTS). If you are a Bachelor student and want to take this seminar (2 SWS, 5 ECTS), please contact Prof. Gemulla.
  • Prerequisites: solid background in machine learning
  • The maximum number of participants is 10

Goals

In this seminar, you will

  • Read, understand, and explore scientific literature
  • Summarize a current research topic in a concise report (10 single-column pages + references)
  • Give two presentations about your topic (3 minutes flash presentation, 15 minutes final presentation)
  • Moderate a scientific discussion about the topic of one of your fellow students
  • Review a (draft of a) report of a fellow student

Schedule

  • Register as described below.
  • Attend the kickoff meeting on October 7th, 17:15, in A5 6, C015.
  • Work individually throughout the semester according to the seminar schedule.
  • Meet your advisor for guidance and feedback.

Registration

If you are an MSc student, register via Portal 2 by September 28th. If you are a BSc student, register via email to Daniel Ruffinelli by September 28th.

If you are accepted into the seminar, provide at least 4 preferred topics (your own and/or example topics; see below) by October 4th via email to Daniel Ruffinelli. The actual topic assignment takes place soon afterwards; we will notify you via email. Our goal is to assign one of your preferred topics to you.

Topics

Each student works on a topic within the area of the seminar along with an accompanying reference paper. Your presentation and report should explore the topic with an emphasis on the reference paper, but not just the reference paper.

We provide example topics and reference papers below. These topics follow the taxonomy of a recent XAI survey by Arrieta et al. (2020); see Fig. 6 there. Each topic is associated with an example reference paper. If you want, you may suggest a different reference paper (let us know after the topic assignment) or a different topic within the XAI area (talk to us before the topic assignment). Other recent surveys include those by Das and Rad (2020, preprint) and by Došilović et al. (2018). See also Google Scholar.

1. Introduction (by us)
Arrieta et al.
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (Ch. 1-2)
Information Fusion, 2020

2. Transparent models (BSc students only)
Arrieta et al.
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (Ch. 3)
Information Fusion, 2020

3. XAI: Challenges and Opportunities (BSc students preferred)
Arrieta et al.
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (Ch. 5)
Information Fusion, 2020

4. Model-agnostic explanations by simplification
Ribeiro et al.
Why should I trust you? Explaining the predictions of any classifier
ACM SIGKDD, 2016

5. Model-agnostic local explanations
Guidotti et al.
Local Rule-Based Explanations of Black Box Decision Systems
Preprint, 2018

6. Model-agnostic feature relevance explanations
Lundberg and Lee
A Unified Approach to Interpreting Model Predictions
NIPS, 2017

7. Model-agnostic visual explanations
Goldstein et al.
Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation
Journal of Computational and Graphical Statistics, 2015

8. Explanations for ensembles and multiple classifier systems
Tolomei et al.
Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking
ACM SIGKDD, 2017

9. Explanations for support vector machines
Barakat and Bradley
Rule Extraction from Support Vector Machines: A Sequential Covering Approach
IEEE TKDE, 2007

10. Explanations for multi-layer neural networks
Sundararajan et al.
Axiomatic Attribution for Deep Networks
ICML, 2017

11. Explanations for convolutional neural networks I: Outputs
Selvaraju et al.
Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization
ICCV, 2017

12. Explanations for convolutional neural networks II: Intermediate layers
Mahendran and Vedaldi
Understanding Deep Image Representations by Inverting Them
CVPR, 2015

13. Explanations for recurrent neural networks
Jain and Wallace
Attention is not Explanation
NAACL, 2019

and

Wiegreffe and Pinter
Attention is not not Explanation
EMNLP-IJCNLP, 2019


Supplementary materials and references