Neural networks are increasingly trained as black-box predictors and embedded in larger decision systems, where errors in their predictions can pose an immediate threat to downstream tasks. Systematic methods for calibrated uncertainty estimation are therefore needed, especially when these systems are deployed in safety-critical domains or in settings with large dataset imbalances. In such domains, where the ability to infer model uncertainty is crucial for eventual wide-scale adoption, precise and calibrated uncertainty estimates are useful for interpreting confidence, capturing domain shift of out-of-distribution (OOD) test samples, and recognizing when the model is likely to fail.
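As a concrete illustration (not part of the seminar materials), one common way to quantify how well a classifier's confidences are calibrated is the Expected Calibration Error (ECE): predictions are binned by confidence, and the average confidence in each bin is compared to the empirical accuracy there. The sketch below is a minimal NumPy implementation under the usual equal-width-binning convention.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error with equal-width confidence bins.

    confidences: predicted max-class probabilities in [0, 1]
    correct: boolean array, True where the prediction was right
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # gap between mean confidence and empirical accuracy in this bin,
            # weighted by the fraction of samples falling into the bin
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# An overconfident model (high confidence, but not always right) gets a
# nonzero ECE; a model whose confidence matches its accuracy scores near 0.
print(expected_calibration_error([0.95, 0.9, 0.85], [True, True, True]))
```

A well-calibrated model is one for which, among all predictions made with confidence p, roughly a fraction p are correct; ECE summarizes the deviation from that ideal in a single number.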
In this seminar, you will familiarize yourself with recent developments in the field of uncertainty estimation and evidential learning for inferring model uncertainty. You will read research papers, conduct your own experiments, and discuss your insights with the other seminar participants.
Each student works on a topic within the scope of the seminar, together with an accompanying reference paper.