CS 717: Seminar on Computer Vision

The Computer Vision seminar covers recent topics in computer vision. In HWS2020, the seminar will be on Adversarial Attacks and Robustness.


  • This seminar is organized by Prof. Dr.-Ing. Margret Keuper
  • Available for Master students (2 SWS, 4 ECTS)
  • Prerequisites: solid background in machine learning
  • Maximum number of participants is 10 students


In this seminar, you will

  • Read, understand, and explore scientific literature
  • Summarize a current research topic in a concise report (10 single-column pages + references)
  • Give two presentations about your topic (3 minutes flash presentation, 15 minutes final presentation)
  • Moderate a scientific discussion about the topic of one of your fellow students
  • Review a (draft of a) report of a fellow student


  • Register as described below.
  • Attend the kickoff meeting on October 8, 5.15 pm. The kickoff meeting will be held via Zoom - see Portal2 for the link.
  • Work individually throughout the semester according to the seminar schedule.
  • Meet your advisor for guidance and feedback.

Flash Presentations

Session on October 29, 5.15 pm

1. Adversarial Attacks and Defences: A Survey, Chakraborty et al., 2018.

2. Generating Natural Adversarial Examples, Zhao et al., ICLR 2018.

3. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, Carlini and Wagner, AISec 2017.

4. Towards the First Adversarially Robust Neural Network Model on MNIST, Schott et al., ICLR 2019.

5. Disentangling adversarial robustness and generalization, Stutz et al., CVPR 2019.

6. A Fourier Perspective on Model Robustness, Yin et al., NeurIPS 2019.

7. Theoretically Principled Trade-off between Robustness and Accuracy, Zhang et al., ICML 2019.




Register via Portal2 until September 28.

If you are accepted into the seminar, provide at least 3 topics of your preference (your own and/or example topics; see below) by October 5 via email to Margret Keuper. The actual topic assignment takes place soon afterwards; we will notify you via email. Our goal is to assign one of your preferred topics to you.


Each student works on a topic within the area of the seminar along with an accompanying reference paper. Your presentation and report should explore the topic with an emphasis on the reference paper, but not just the reference paper.

We strongly encourage you to explore the available literature and suggest a topic and reference paper of your own choice. Reference papers should be strong papers from a major venue; contact us if you are unsure.

We provide example topics and reference papers below.


Topic List:

1. Chakraborty, A., Alam, M., Dey, V., Chattopadhyay, A., & Mukhopadhyay, D. (2018). Adversarial Attacks and Defences: A Survey. ArXiv, abs/1810.00069.

2. Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, & Le Song (2018). Adversarial Attack on Graph Structured Data. ICML 2018.

3. Anish Athalye, Nicholas Carlini, & David Wagner (2018). Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML 2018.

4. Zhengli Zhao, Dheeru Dua, & Sameer Singh (2018). Generating Natural Adversarial Examples. ICLR 2018.

5. Nicholas Carlini and David Wagner (2017). Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec '17). Association for Computing Machinery, New York, NY, USA, 3–14.

6. Y. Dong et al. (2020). "Benchmarking Adversarial Robustness on Image Classification," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, pp. 318-328, doi: 10.1109/CVPR42600.2020.00040.

7. Eric Wong & J. Zico Kolter (2018). Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope. ICML 2018.

8. Zhang, H., Yu, Y., Jiao, J., Xing, E., Ghaoui, L.E., & Jordan, M. (2019). Theoretically Principled Trade-off between Robustness and Accuracy. Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7472-7482.

9. Yin, Dong, Lopes, Raphael, Shlens, Jonathon, Cubuk, Ekin, & Gilmer, Justin (2019). A Fourier Perspective on Model Robustness in Computer Vision. NeurIPS 2019.




Writing for Computer Science by Justin Zobel, Springer, 2014