Two Posters about Product Matching using Deep Learning accepted at WWW2022 Conference

We are happy to announce that two posters about product matching using deep learning have been accepted at ACM TheWebConf 2022 (WWW2022). The conference takes place 25–29 April 2022. Below, you will find the titles and abstracts of both posters as well as links to the arXiv pre-prints.


1. Supervised Contrastive Learning for Product Matching 

Contrastive learning has moved the state of the art for many tasks in computer vision and information retrieval in recent years. This poster is the first work that applies supervised contrastive learning to the task of product matching in e-commerce using product offers from different e-shops. More specifically, we employ a supervised contrastive learning technique to pre-train a Transformer encoder, which is afterwards fine-tuned for the matching task using pair-wise training data. We further propose a source-aware sampling strategy that enables contrastive learning to be applied in use cases where the training data does not contain product identifiers. We show that applying supervised contrastive pre-training in combination with source-aware sampling significantly improves the state-of-the-art performance on several widely used benchmarks: for Abt-Buy, we reach an F1-score of 94.29 (+3.24 compared to the previous state of the art); for Amazon-Google, 79.28 (+3.7). For the WDC Computers datasets, we reach improvements between +0.8 and +8.84 in F1-score depending on the training set size. Further experiments with data augmentation and self-supervised contrastive pre-training show that the former can be helpful for smaller training sets, while the latter leads to a significant decline in performance due to inherent label noise. We thus conclude that contrastive pre-training has high potential for product matching use cases in which explicit supervision is available.

Link to arXiv pre-print
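To give a feel for the pre-training objective mentioned in the abstract, here is a minimal NumPy sketch of a supervised contrastive loss over a batch of offer embeddings. It is an illustration of the general technique, not the poster's exact implementation: the function name, the temperature value, and the toy embeddings are assumptions; offers sharing a label (e.g. a product identifier) are treated as positives and pulled together, all other offers in the batch are pushed apart.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.07):
    """Minimal sketch of a supervised contrastive loss.

    For each anchor, the positives are all other batch items with the
    same label; the loss is the negative mean log-softmax score of the
    positives against every other item in the batch.
    """
    # L2-normalize so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # exclude each anchor from its own softmax denominator
    sim = np.where(self_mask, -np.inf, sim)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    labels = np.asarray(labels)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    has_pos = pos.sum(axis=1) > 0  # anchors without positives are skipped

    # average log-probability of the positives, per anchor
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1)[has_pos]
    return float((-per_anchor / pos.sum(axis=1)[has_pos]).mean())

# Toy batch: offers 0/1 describe one product, offers 2/3 another
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
loss_matched = supcon_loss(z, [0, 0, 1, 1])   # positives are close: low loss
loss_mismatched = supcon_loss(z, [0, 1, 0, 1])  # positives are far: high loss
```

In the poster's setting, minimizing such a loss pre-trains the Transformer encoder before pair-wise fine-tuning; the source-aware sampling strategy then governs which offers may appear together in a batch when no product identifiers are available.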

2. Cross-Language Learning for Product Matching

Transformer-based matching methods have significantly moved the state of the art for less-structured matching tasks such as matching product offers in e-commerce. In order to excel at these tasks, Transformer-based matching methods require a decent number of training pairs. Providing enough training data can be challenging, especially if a matcher for non-English product descriptions needs to be learned. Using the use case of matching product offers from different e-shops, this poster explores to what extent it is possible to improve the performance of Transformer-based entity matchers by complementing a small set of training pairs in the target language, German in our case, with a larger set of English-language training pairs. Our experiments using different Transformers show that extending the German set with English pairs improves the matching performance in all cases. The impact of adding the English pairs is especially high in low-resource settings in which only a rather small number of non-English pairs is available. As it is often possible to automatically gather English training pairs from the Web by using annotations, our results are relevant for many product matching scenarios targeting low-resource languages.

Link to arXiv pre-print
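The data setup described in this abstract can be sketched in a few lines: a small set of target-language pairs is complemented with a larger English set, and the union is shuffled so that fine-tuning batches mix both languages. This is a hedged illustration only; the function name and the (left offer, right offer, match label) pair format are assumptions, and the toy pairs are not data from the poster.

```python
import random

def build_mixed_training_set(target_pairs, english_pairs, seed=42):
    """Sketch: complement a small target-language training set (e.g.
    German) with a larger English one and shuffle the union, so that
    each fine-tuning batch contains pairs from both languages."""
    combined = list(target_pairs) + list(english_pairs)
    random.Random(seed).shuffle(combined)  # deterministic for reproducibility
    return combined

# Illustrative toy pairs: (left offer, right offer, match label)
german = [("Laufschuh Gr. 42", "Laufschuhe Größe 42", 1)]
english = [
    ("running shoe size 8", "running shoes sz 8", 1),
    ("running shoe size 8", "coffee maker", 0),
]
train = build_mixed_training_set(german, english)
```

A multilingual Transformer fine-tuned on such a mixed set is one natural way to realize the cross-language transfer the poster reports, with the English pairs carrying most of the supervision signal in low-resource settings.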