Large Language Models (LLMs) like ChatGPT have been among the most prominent advances in the field of Artificial Intelligence in recent years. Despite their potential and versatility, they have also exposed drawbacks such as hallucinations, lack of transparency, and non-factuality. Knowledge Graphs (KGs), on the other hand, are a main driver of white-box AI and are known to be transparent and factual.
In this seminar, we will look at works at the crossroads of knowledge graphs and large language models, which aim at combining the best of both worlds. Areas of interest include, but are not limited to:
In this seminar, you will familiarize yourself with approaches that combine knowledge graphs and large language models. You will read research papers, specifications, and tool descriptions, conduct your own experiments where applicable, and discuss your insights with the other participants of the seminar.
As a participant, you will pick a particular approach at the intersection of knowledge graphs and LLMs and present it to the other seminar participants. Each seminar paper undergoes a peer review process. Presentations should be about 25 minutes long.
This seminar is organized by Prof. Dr. Heiko Paulheim
Available for Master students (2 SWS, 4 ECTS)
Prerequisites: Basic prior knowledge of knowledge graphs (e.g., from attending IE650) and LLMs
Additional resources:
The following papers may serve as entry points to the seminar topic:
Note that these are two different articles published in the same year by different groups of authors; in both cases, the respective first author has the last name “Pan”, but they are two different persons.
To shape your topic, you may use the following list of papers: