The Computer Vision seminar covers recent topics in computer vision. In HWS2020, the seminar will be on Adversarial Attacks and Robustness.
In this seminar, you will work on topics such as the following:
1. Adversarial Attacks and Defences: A Survey, Chakraborty et al., 2018.
2. Generating Natural Adversarial Examples, Zhao et al., ICLR 2018.
3. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, Carlini and Wagner, AISec 2017.
4. Towards the First Adversarially Robust Neural Network Model on MNIST, Schott et al., ICLR 2019.
5. Disentangling adversarial robustness and generalization, Stutz et al., CVPR 2019.
6. A Fourier Perspective on Model Robustness, Yin et al., NeurIPS 2019.
7. Theoretically Principled Trade-off between Robustness and Accuracy, Zhang et al., ICML 2019.
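For orientation, the core idea behind many of the attacks studied in these papers can be illustrated with the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015): perturb the input in the direction of the sign of the loss gradient. The sketch below applies one FGSM step to a toy logistic-regression model; the weights, bias, and input values are arbitrary illustration values, not taken from any of the papers above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: x' = x + eps * sign(dL/dx) for cross-entropy loss."""
    p = sigmoid(w @ x + b)            # model's predicted probability of class 1
    grad_x = (p - y) * w              # analytic gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)  # move x in the direction that increases the loss

# Toy model and input (illustration values only)
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.1])
y = 1.0                               # true label

x_adv = fgsm(x, y, w, b, eps=0.1)
# The perturbed input lowers the model's confidence in the true class:
assert sigmoid(w @ x_adv + b) < sigmoid(w @ x + b)
```

In deep networks the gradient is obtained by backpropagation rather than analytically, but the perturbation rule is the same one-liner.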
Register via Portal 2 by September 28.
If you are accepted into the seminar, provide at least three topics of your preference (your own and/or from the list below).
Each student works on a topic within the area of the seminar, along with an accompanying reference paper. Your presentation and report should explore the topic with an emphasis on the reference paper, but should not be limited to the reference paper alone.
We strongly encourage you to explore the available literature and suggest a topic and reference paper of your own choice. Reference papers should be strong papers from a major venue; contact us if you are unsure.
We provide example topics and reference papers below.
1. Chakraborty, A., Alam, M., Dey, V., Chattopadhyay, A., & Mukhopadhyay, D. (2018). Adversarial Attacks and Defences: A Survey. arXiv:1810.00069.
2. Dai, H., Li, H., Tian, T., Huang, X., Wang, L., Zhu, J., & Song, L. (2018). Adversarial Attack on Graph Structured Data. ICML 2018.
3. Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML 2018.
4. Zhao, Z., Dua, D., & Singh, S. (2018). Generating Natural Adversarial Examples. ICLR 2018.
5. Carlini, N., & Wagner, D. (2017). Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec '17), ACM, 3–14.
6. Dong, Y., et al. (2020). Benchmarking Adversarial Robustness on Image Classification. CVPR 2020.
7. Wong, E., & Kolter, J.Z. (2018). Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope. ICML 2018.
8. Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., & Jordan, M. (2019). Theoretically Principled Trade-off between Robustness and Accuracy. ICML 2019, PMLR 97:7472–7482.
9. Yin, D., Lopes, R.G., Shlens, J., Cubuk, E.D., & Gilmer, J. (2019). A Fourier Perspective on Model Robustness in Computer Vision. NeurIPS 2019.