Seminar CS717: Seminar on Computer Vision (HWS 2024)
The Computer Vision seminar covers recent topics in computer vision. In HWS2024, the seminar will focus on “AI Safety & Robustness”, critically exploring the limitations of machine learning models and discussing defences against model vulnerabilities. Topics include model stealing, watermarking, backdoor attacks and detection, and adversarial attacks on vision-language models.
Organization
- This seminar is organized by Prof. Dr.-Ing. Margret Keuper
- Open to Master's students (2 SWS, 4 ECTS)
- Prerequisites: solid background in machine learning
- Maximum number of participants: 12 students
Goals
In this seminar, you will
- Read, understand, and explore scientific literature
- Summarize a current research topic in a concise report (10 single-column pages + references)
- Give two presentations about your topic (a 3-minute flash presentation and a 15-minute final presentation)
- Moderate a scientific discussion of a fellow student's topic
- Review a (draft of a) report by a fellow student
Schedule
- Register as described below.
- Attend the kickoff meeting on 17th of September. [Meeting Slides]
- Work individually throughout the semester according to the seminar schedule. [Time Schedule]
- Meet your advisor for guidance and feedback.
- Flash Presentations: 29th of October (tentative)
- Final Presentations: 3rd of December (tentative)
Topics
Each student works on a topic within the area of the seminar along with an accompanying reference paper. Your presentation and report should explore the topic with an emphasis on the reference paper, but not just the reference paper.
We strongly encourage you to explore the available literature and suggest a topic and reference paper of your own choice. Reference papers should be strong papers from a major venue; contact us if you are unsure.
We provide example topics and reference papers below.
Topic List:
[1] Ethical-Lens: Curbing Malicious Usages of Open-Source Text-to-Image Models
[2] Coercing LLMs to do and reveal (almost) anything
[3] Are aligned neural networks adversarially aligned?
[5] Red-Teaming Segment Anything Model
[6] SECURITYNET: Assessing Machine Learning Vulnerabilities on Public Models
[7] BackdoorBench: A Comprehensive Benchmark of Backdoor Learning
[8] Finding Naturally Occurring Physical Backdoors in Image Datasets
[10] Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models
Getting started
The following survey articles are good starting points for getting an overview of the topics of the seminar:
- Concrete Problems in AI Safety, Amodei et al. 2016
- Unsolved Problems in ML Safety, Hendrycks et al. 2022