InES Seminar Spring Semester 2022

  • Course ID:
    • B.Sc. Wirtschaftsinformatik: SM 456
    • M.Sc. Wirtschaftsinformatik: IE 704
    • MMDS: IE 704
  • Credit Points:
    • B.Sc. Wirtschaftsinformatik: 5 ECTS
    • M.Sc. Wirtschaftsinformatik: 4 ECTS
    • MMDS: 4 ECTS
  • Supervision: Dr. Christian Bartelt

Schedule

  • Until Tuesday, February 01, 2022 (23:59 CET): Please register for the kick-off meeting by sending a list of your completed courses (Transcript of Records, CV optional) via email to Nils Wilken (nils.wilken@mail-uni-mannheim.de)
  • Wednesday, February 02, 2022: As we can only offer a limited number of places, you will be informed whether you can participate in this seminar.
  • Thursday, February 03, 2022: Latest possible drop-out date without penalty (a drop-out after this date will be graded 5.0)
  • Monday, February 07, 2022 (preliminary): Milestone 1 – Kick-Off Meeting (digital meeting)
  • Sunday, May 08, 2022 (23:59 CEST): Milestone 2 – Submission of final seminar paper
  • Sunday, May 15, 2022 (23:59 CEST): Milestone 3 – Submission of reviews
  • Monday, May 16, 2022 – Friday, May 27, 2022: Milestone 4 – Presentation of your seminar paper
  • Sunday, June 12, 2022 (23:59 CEST): Milestone 5 – Submission of camera-ready seminar paper and document that indicates the differences between the first submitted version and the camera-ready version of the seminar paper

Important Notes

  • Missing a milestone will result in a final grade of 5.0.
  • The four parts (final paper version, camera-ready paper version, feedback consisting of reviews and presentation feedback, and presentation) are graded separately; each part counts for 25% of the final grade.
  • This seminar is open to Bachelor's and Master's students focusing on “Business Informatics” and “Data Science”. Master's students enrolled in the “Mannheim Master in Data Science” are also highly welcome to apply for this seminar.

Suggested Topics

  • TOPIC 1: Variational Autoencoders for heterogeneous data

    Introduction:
    Generative modeling has various practical applications such as data augmentation, handling missing values, detecting outliers, or creating artificial samples to improve data security. Variational autoencoders (VAEs) are deep generative latent variable models that can successfully capture the hidden structure of a dataset, making them useful for various generative modeling applications. However, in their original form, they are not suitable for heterogeneous, mixed-type, tabular data with categorical and continuous features.

    Goal and Objective:
    The goal of this seminar is to become familiar with the variational autoencoder as a generative modeling method and to gain knowledge on how to design the loss function of variational autoencoders to adapt to learning complex heterogeneous distributions. The expected outcome is a written report describing the variational autoencoder along with existing variations for heterogeneous data. In addition, further variations of VAE, e.g. for (disentangled) representation learning, should be evaluated with respect to their applicability to heterogeneous tabular data.
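
    To make the loss design concrete, the following is a minimal, illustrative sketch (assuming PyTorch; all names are hypothetical, and for brevity it handles one block of continuous features and a single categorical feature) of how a VAE objective can combine feature-type-specific reconstruction terms with the usual KL regularizer for mixed-type data:

      import torch
      import torch.nn.functional as F

      def heterogeneous_elbo(x_cont, x_cat, recon_mean, recon_logits, mu, logvar):
          # x_cont: (B, D) continuous features; x_cat: (B,) integer labels of one categorical feature
          # Gaussian reconstruction term (unit variance) for the continuous features
          rec_cont = F.mse_loss(recon_mean, x_cont, reduction="sum")
          # Categorical (cross-entropy) reconstruction term for the discrete feature
          rec_cat = F.cross_entropy(recon_logits, x_cat, reduction="sum")
          # Closed-form KL( q(z|x) || N(0, I) ) for a diagonal Gaussian posterior
          kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
          return rec_cont + rec_cat + kl

    The starting papers below explore more principled ways of choosing, weighting, and combining such per-feature likelihood terms.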

    Starting Papers

    • Nazabal, A., Olmos, P. M., Ghahramani, Z., & Valera, I. (2020). Handling incomplete heterogeneous data using VAEs. Pattern Recognition, 107, 107501.
    • Ma, C., Tschiatschek, S., Hernández-Lobato, J. M., Turner, R., & Zhang, C. (2020). VAEM: a deep generative model for heterogeneous mixed type data. arXiv preprint arXiv:2006.11941.
    • Antelmi, L., Ayache, N., Robert, P., & Lorenzi, M. (2019, May). Sparse multi-channel variational autoencoder for the joint analysis of heterogeneous data. In International Conference on Machine Learning (pp. 302–311). PMLR.
    • Wei, R., Garcia, C., El-Sayed, A., Peterson, V., & Mahmood, A. (2020). Variations in variational autoencoders - a comparative evaluation. IEEE Access, 8, 153651-153670.
  • TOPIC 2: Multimodal Learning with Images and Tabular Data

    Introduction:
    Classical Deep Learning (DL) problems, such as object recognition or document classification, usually involve a single modality, e.g., images or text. Multimodal learning is a subfield of AI that deals with solving problems that involve multiple modalities. Real-world data is often organized in relational databases and used for ML in the form of tabular datasets. In some domains, such as healthcare, it is useful to integrate image data with tabular data in a multimodal approach to improve the performance of (supervised) models.

    Goal and Objective:
    While multimodal learning with modalities such as text and speech has been widely explored, methods that integrate unstructured image data and structured tabular data are still under-researched in the ML community. The goal of this seminar is to provide an overview of the state of the art in combining image and tabular data for supervised learning tasks. The expected outcome is a research paper highlighting relevant existing approaches and their respective advantages and disadvantages, as well as your own ideas on how image and tabular data can be used together.
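
    As a point of reference, a common baseline is late fusion: image features from a convolutional encoder are concatenated with an embedding of the tabular features before a shared prediction head. The following is a minimal, illustrative sketch (assuming PyTorch; layer sizes and names are placeholders, not a description of any specific approach from the papers below):

      import torch
      import torch.nn as nn

      class ImageTabularFusion(nn.Module):
          def __init__(self, num_tabular_features, num_classes):
              super().__init__()
              self.image_encoder = nn.Sequential(
                  nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),           # -> (B, 16)
              )
              self.tabular_encoder = nn.Sequential(
                  nn.Linear(num_tabular_features, 16), nn.ReLU(),  # -> (B, 16)
              )
              self.head = nn.Linear(32, num_classes)

          def forward(self, image, tabular):
              # concatenate the two modality embeddings before classification
              fused = torch.cat([self.image_encoder(image),
                                 self.tabular_encoder(tabular)], dim=1)
              return self.head(fused)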

    Starting Papers

    • Gessert, N., Nielsen, M., Shaikh, M., Werner, R., & Schlaefer, A. (2020). Skin lesion classification using ensembles of multi-resolution EfficientNets with meta data. MethodsX, 7, 100864.
    • Silva, L. A. V., & Rohr, K. (2020, April). Pan-cancer prognosis prediction using multimodal deep learning. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI) (pp. 568–571). IEEE.
    • Sotiroudis, S. P., Sarigiannidis, P., Goudos, S. K., & Siakavara, K. (2021). Fusing diverse input modalities for path loss prediction: A deep learning approach. IEEE Access, 9, 30441-30451.
  • TOPIC 3: Machine learning as a decision support tool for acute stroke treatment

    Introduction:
    Stroke is one of the leading causes of death worldwide, and with demographic changes, the number of strokes is expected to increase, creating new challenges for healthcare. Since time is the most important factor for successful stroke treatment, quick decisions are critical for survival and a good prognosis for recovery. Machine learning, especially Deep Learning, has shown promise in automating and thus accelerating decisions in the treatment process of stroke patients.

    Goal and Objective:
    The goal of this seminar is to evaluate the state of the art in decision support for the diagnosis and acute treatment of stroke from the perspective of an ML practitioner. On the medical domain side, the goal is to become familiar with different stroke types, treatment options, and decisions in the stroke management process. On the technical side, which is the focus of the seminar, the aim is to become familiar with the data (e.g., CT, MRI, or clinical data), algorithms (shallow ML and DL), and assessment metrics (standard ML metrics and clinical metrics).

    Starting Papers

    • Kamal, H., Lopez, V., & Sheth, S. A. (2018). Machine learning in acute ischemic stroke neuroimaging. Frontiers in Neurology, 9, 945.
    • Sirsat, M. S., Fermé, E., & Câmara, J. (2020). Machine learning for brain stroke: a review. Journal of Stroke and Cerebrovascular Diseases, 29(10), 105162.
    • Nielsen, A., Hansen, M. B., Tietze, A., & Mouridsen, K. (2018). Prediction of tissue outcome and assessment of treatment effect in acute ischemic stroke using deep learning. Stroke, 49(6), 1394-1401.
  • TOPIC 4: External Memory in Neural Network Architecture

    Introduction:
    Artificial neural network architectures are remarkably adept at pattern recognition and quick, reactive decision making, but they are limited in their ability to represent and store data over long periods of time and to reason over knowledge. Therefore, the use of external memory components, analogous to the random-access memory in a conventional computer, is a promising research direction to allow artificial neural networks to emulate reasoning and solve inference problems.

    Goal and Objective:
    In this seminar, we will evaluate the state of the art of artificial neural network architectures that incorporate external memory components and their applications in different machine learning domains. In this context, you will review and summarize current research papers on applications of different memory architectures (e.g., sequential, random access) in artificial neural networks and their learning algorithms.
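
    As a starting point, the core operation shared by many such architectures is a differentiable, content-based read from an external memory matrix. The sketch below (assuming PyTorch; names are illustrative) shows this addressing step in isolation, roughly in the spirit of Neural Turing Machine / DNC-style memories:

      import torch
      import torch.nn.functional as F

      def content_based_read(memory, key, beta=1.0):
          # memory: (N, W) matrix of N slots; key: (W,) query emitted by the controller
          similarity = F.cosine_similarity(memory, key.unsqueeze(0), dim=1)  # (N,)
          weights = torch.softmax(beta * similarity, dim=0)   # soft read weighting over slots
          return weights @ memory                             # (W,) read vector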

    Starting Papers

    • Graves, A., et al. (2016). Hybrid computing using a neural network with dynamic external memory. Nature.
  • TOPIC 5: Data-Driven Approaches for GUI Prototyping

    Introduction:
    Data-driven approaches for reducing manual effort and increasing automation have gained popularity in various application areas over the past years. Current research has started to adopt such approaches to support different tasks in software engineering, e.g., automatic method and commit message generation or semantic code retrieval, among many others. Recently, researchers have started to propose data-driven approaches that support users during GUI prototyping and thereby reduce the time, effort, and skills required to create GUI prototypes. The goal of this seminar work is to provide a clear overview and discussion of data-driven approaches that assist users with GUI prototyping in various ways. In particular, the seminar paper should distinguish between approaches that support GUI prototyping in the requirements elicitation phase and approaches that support the final GUI design.

    Goal and Objective:
    Overview and discussion of different data-driven state-of-the-art approaches that provide assistance to users for GUI prototyping in various ways.

    Starting Papers

    • Lee, C., Kim, S., Han, D., Yang, H., Park, Y.W., Kwon, B.C., & Ko, S. (2020, April). GUIComp: A GUI Design Assistant with Real-Time, Multi-Faceted Feedback. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–13).
    • Kolthoff, K., Bartelt, C., & Ponzetto, S. P. (2020, September). GUI2WiRe: Rapid wireframing with a mined and large-scale GUI repository using natural language requirements. In 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE) (pp. 1297-1301). IEEE.
  • TOPIC 6: Explainable Outlier Detection

    Introduction:

    Outlier detection (also called anomaly detection or novelty detection) is the task of identifying data instances (samples) that deviate substantially from the majority of the data. Outlier detection is relevant for tasks like network security or the analysis of medical health records. Classical methods include Isolation Forests or One-Class Support Vector Machines, but recent research has focused on methods based on deep neural networks.

    In many domains, it is not sufficient to only identify outliers; it is also important to give users an understanding of why a sample is considered an outlier. For example, consider the case of medical health records: when a sample is classified as an outlier, we also want to provide the physician with information about what makes the sample an outlier, indicating possible medical conditions that require treatment.

    The area of explainable outlier detection is relatively new, and there does not seem to be a consensus about what constitutes a good explanation for this domain.

    The goal of this seminar is to explore the area of explainable outlier detection. Specifically, the seminar should provide the following outcomes:

    • Provide a brief introduction to outlier detection
    • Give a brief overview of classical and deep learning-based outlier detection methods
    • Categorize types of explanations for outliers, and corresponding methods for generating such explanations
    • Discuss methods for evaluating the quality of outlier explanations
    • Present and discuss at least one method for generating explanations for outliers in detail

    The seminar consists of a written report as well as a presentation that should both cover the aspects mentioned above.
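
    To make the two ingredients (detection and explanation) concrete, the following is a minimal, illustrative sketch assuming scikit-learn and NumPy: an Isolation Forest flags a sample as an outlier, and a naive per-feature z-score "explanation" indicates which feature deviates most from the training data. The data and the attribution are purely illustrative and far simpler than the methods discussed in the papers below.

      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(500, 4))
      x_query = np.array([[0.1, 5.0, -0.2, 0.3]])   # unusually large second feature

      detector = IsolationForest(random_state=0).fit(X_train)
      is_outlier = detector.predict(x_query)[0] == -1          # -1 marks outliers

      # naive "explanation": per-feature deviation in units of standard deviations
      z_scores = (x_query - X_train.mean(axis=0)) / X_train.std(axis=0)
      print(is_outlier, np.round(z_scores, 2))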

    Starting Papers

    • Pang, Guansong, et al. “Deep learning for anomaly detection: A review.” ACM Computing Surveys (CSUR) 54.2 (2021): 1–38.
    • Pimentel, Marco AF, et al. “A review of novelty detection.” Signal Processing 99 (2014): 215–249.
    • Amarasinghe, Kasun, Kevin Kenney, and Milos Manic. “Toward explainable deep neural network based anomaly detection.” 11th International Conference on Human System Interaction (HSI). IEEE, 2018.
    • Hongzuo Xu, Yijie Wang, Songlei Jian, Zhenyu Huang, Yongjun Wang, Ning Liu, and Fei Li. Beyond outlier detection: Outlier interpretation by attention-guided triplet deviation network. In Proceedings of the Web Conference 2021, pages 1328–1339, 2021.
  • TOPIC 7: Curriculum Learning for Neural Networks

    Introduction:

    The usual training process of neural networks involves a sequence of uniform mini-batches sampled at random from the entire training data set. As a consequence, the random selection of training examples affects the speed of convergence of the training process and, in the case of non-convex criteria, the quality of the local minima obtained. The field of curriculum learning, which is inspired by the human learning process, aims to organize the training examples in a meaningful order that gradually introduces more complex concepts, in order to improve on the traditional random scheme.

    In this seminar, we will evaluate curriculum learning strategies for neural networks and their applications in different machine learning domains. In this context, the students will review and summarize current research papers about applications of curriculum learning, explore whether artificial neural networks can benefit from a curriculum learning strategy, and discuss the general principles that make some curriculum strategies work better than others.
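
    As an illustration of the basic idea, the sketch below (plain NumPy; the difficulty scores and training routine are hypothetical placeholders, not part of any specific paper) orders samples by an assumed difficulty score and lets each epoch train on a growing easy-to-hard prefix of the data:

      import numpy as np

      def curriculum_order(difficulty_scores):
          # indices of the training samples, from easiest to hardest
          return np.argsort(difficulty_scores)

      def curriculum_subset(order, epoch, total_epochs):
          # grow the training pool linearly from 25% of the data to the full set
          fraction = 0.25 + 0.75 * min(1.0, epoch / max(1, total_epochs - 1))
          return order[: int(len(order) * fraction)]

      # usage sketch (train_one_epoch and the difficulty scores are hypothetical):
      # for epoch in range(total_epochs):
      #     idx = curriculum_subset(curriculum_order(scores), epoch, total_epochs)
      #     train_one_epoch(model, subset_of(dataset, idx))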

    Starting Papers

    • Elman, Jeffrey. (1993). Learning and Development in Neural Networks: The Importance of Starting Small.
    • Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. (2009). Curriculum Learning.
    • Graves, A., Bellemare, M. G., Menick, J., Munos, R., & Kavukcuoglu, K. (2017). Automated Curriculum Learning for Neural Networks.

Contact

Nils Wilken

Research Assistant
University of Mannheim
Institute for Enterprise Systems
L15, 1–6 – Room 416
68161 Mannheim