Multimodal Metadata Assignment for Cultural Heritage Artifacts

Rei, Luis, Mladenić, Dunja, Dorozynski, Mareike, Rottensteiner, Franz, Schleider, Thomas, Troncy, Raphaël, Lozano, Jorge Sebastián, Salvatella, Mar Gaitán

arXiv.org Artificial Intelligence 

We develop a multimodal classifier for the cultural heritage domain using a late fusion approach and introduce a novel dataset. The three modalities are image, text, and tabular data. We based the image classifier on a ResNet convolutional neural network architecture and the text classifier on a multilingual transformer architecture (XLM-RoBERTa). Both are trained as multitask classifiers and use focal loss to handle class imbalance. Tabular data and late fusion are handled by gradient tree boosting. We also show how we leveraged specific data models and a taxonomy in a knowledge graph to create the dataset and to store classification results. All individual classifiers accurately predict missing properties of the digitized silk artifacts, with the multimodal approach providing the best results.
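The pipeline described above can be sketched in outline: each unimodal branch outputs class probabilities (trained with focal loss to down-weight easy, majority-class examples), and late fusion concatenates those probabilities with the tabular features and fits a gradient-boosted tree classifier on top. The sketch below is a minimal illustration under assumed shapes and random stand-in data, not the paper's implementation; the focal loss follows the standard Lin et al. formulation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def focal_loss(probs, targets, gamma=2.0):
    """Multiclass focal loss: mean of -(1 - p_t)^gamma * log(p_t).

    probs:   (N, C) predicted class probabilities.
    targets: (N,) integer class labels.
    gamma down-weights well-classified examples, which helps with
    the class imbalance mentioned in the abstract.
    """
    pt = probs[np.arange(len(targets)), targets]
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt + 1e-12)))

# Late fusion sketch: random stand-ins for the image- and text-branch
# softmax outputs and for the tabular features (shapes are assumptions).
rng = np.random.default_rng(0)
n, n_classes = 200, 3
y = rng.integers(0, n_classes, size=n)
img_probs = rng.dirichlet(np.ones(n_classes), size=n)  # image-branch softmax
txt_probs = rng.dirichlet(np.ones(n_classes), size=n)  # text-branch softmax
tabular = rng.normal(size=(n, 5))                      # tabular features

# Concatenate per-modality probabilities with tabular features and
# fit a gradient-boosted tree classifier as the fusion model.
X = np.hstack([img_probs, txt_probs, tabular])
fusion = GradientBoostingClassifier(n_estimators=50).fit(X, y)
preds = fusion.predict(X)
```

In practice the fusion model would be trained on held-out branch predictions rather than the same data the branches were fit on, to avoid leaking overconfident training-set probabilities into the fusion stage.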
