Event

Seminar Representation Learning on Knowledge Graphs (Master) [WS212513605]

Type
seminar (S)
Mixed on-site/online
Term
WS 21/22
SWS
2
Language
English
Appointments
15
Links
ILIAS

Lecturers

Organisation

  • Information Service Engineering

Part of

Appointments

  • 20.10.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 27.10.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 03.11.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 10.11.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 17.11.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 24.11.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 01.12.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 08.12.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 15.12.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 22.12.2021 12:00 - 13:30 - Room: 05.20 5A-09
  • 12.01.2022 12:00 - 13:30 - Room: 05.20 5A-09
  • 19.01.2022 12:00 - 13:30 - Room: 05.20 5A-09
  • 26.01.2022 12:00 - 13:30 - Room: 05.20 5A-09
  • 02.02.2022 12:00 - 13:30 - Room: 05.20 5A-09
  • 09.02.2022 12:00 - 13:30 - Room: 05.20 5A-09

Note

Data representation, or feature representation, plays a key role in the performance of machine learning algorithms. In recent years, rapid growth has been observed in Representation Learning (RL) of words and Knowledge Graphs (KG) into low-dimensional vector spaces and in its applications to many real-world scenarios. Word embeddings are low-dimensional vector representations of words that capture a word's context in a document, its semantic similarity to other words, and its relations with them. Similarly, KG embeddings are low-dimensional vector representations of the entities and relations of a KG that preserve its inherent structure and capture the semantic similarity between entities.
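As a concrete illustration only (not part of the seminar material), the following minimal Python sketch shows the idea behind a translational KG embedding model in the style of TransE: entities and relations become low-dimensional vectors, and a triple (head, relation, tail) is scored by how close head + relation lies to tail. The entity names and the tiny random vectors are purely hypothetical.

import numpy as np

rng = np.random.default_rng(0)
dim = 8  # embedding dimensionality, kept tiny for illustration

# Hypothetical toy vocabulary; a real model learns these vectors from data.
entities = {"Berlin": rng.normal(size=dim), "Germany": rng.normal(size=dim)}
relations = {"capitalOf": rng.normal(size=dim)}

def score(head: str, relation: str, tail: str) -> float:
    """TransE-style plausibility: higher (less negative) is more plausible."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return -float(np.linalg.norm(h + r - t))

print(score("Berlin", "capitalOf", "Germany"))

In a trained model, the vectors are optimised so that observed triples score higher than corrupted ones, which is what allows the embeddings to preserve the graph's structure.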

KG representation learning algorithms (a.k.a. KG embedding models) can be either unimodal, using a single source of information, or multimodal, exploiting multiple sources. These sources include relations between entities, text literals, numeric literals, images, etc. Capturing the information present in each of these sources is important for learning semantically rich representations. Multimodal KG embeddings either learn multiple representations simultaneously, one per source of information, in a non-unified space, or learn a single representation for each element of the KG in a unified space. The representations of entities and relations learnt with both unimodal and multimodal KG embedding models can be used in various downstream applications such as clustering, classification, and so on. Language models such as BERT, ELMo, and GPT, on the other hand, learn the probability of word occurrence from a text corpus and thereby learn representations of words in a low-dimensional embedding space. The word representations produced by such language models are often used for KG completion tasks such as link prediction, entity classification, and so on.
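To make the language-model side concrete, here is a small sketch, assuming the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint: it embeds two textual entity descriptions with a pre-trained BERT model and compares them by cosine similarity, the kind of building block that embedding-based KG completion methods rely on. The example sentences are hypothetical.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Return the [CLS] token embedding of a text as a 1-D tensor."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        output = model(**inputs)
    return output.last_hidden_state[0, 0]

a = embed("Berlin is the capital of Germany.")
b = embed("Paris is the capital of France.")
print(f"cosine similarity: {torch.cosine_similarity(a, b, dim=0).item():.3f}")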

In this seminar, we would like to study different state-of-the-art algorithms for multimodal embeddings, applications of KG embeddings, or the use of language models for KG representation.

Contributions of the students:

Each student will be assigned one paper on the topic. The student will have to

  1. give a seminar presentation,
  2. write a 15-page seminar report explaining, in their own words, the method from the assigned paper, and
  3. provide an implementation: if code is available from the authors, re-implement it for small-scale experiments using Google Colab or make it available via GitHub.