LANISTR: Multimodal Learning from Structured and Unstructured Data
Ebrahimi, Sayna, Arik, Sercan O., Dong, Yihe, Pfister, Tomas
–arXiv.org Artificial Intelligence
Multimodal large-scale pretraining has shown impressive performance for unstructured data, including language, image, audio, and video. However, a prevalent real-world scenario, the combination of structured data types (tabular, time-series) with unstructured data, has so far been understudied. To bridge this gap, we propose LANISTR, an attention-based framework to learn from LANguage, Image, and STRuctured data. The core of LANISTR's methodology is masking-based training applied at both the unimodal and multimodal levels. In particular, we introduce a new similarity-based multimodal masking loss that enables the model to learn cross-modal relations from large-scale multimodal data with missing modalities. On two real-world datasets, MIMIC-IV (healthcare) and Amazon Product Review (retail), LANISTR achieves remarkable absolute improvements of 6.6% (AUROC) and up to 14% (accuracy) over state-of-the-art alternatives when fine-tuned on 0.1% and 0.01% of labeled data, respectively. Notably, these improvements hold even under considerable missingness ratios of 35.7% and 99.8% in the respective datasets.
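The similarity-based multimodal masking objective described in the abstract can be pictured as pulling the fused embedding of an input with a masked-out modality toward the fused embedding of the fully observed input. Below is a minimal PyTorch sketch of that idea; the `ToyFusionEncoder`, the zeroing-based modality masking, and all dimensions are hypothetical stand-ins, not LANISTR's actual architecture or loss.

```python
# Hedged sketch, not the authors' implementation: a similarity-based
# multimodal masking loss. One modality is masked (zeroed here for
# simplicity) and the fused embedding of the masked input is encouraged
# to match the fused embedding of the unmasked input.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFusionEncoder(nn.Module):
    """Hypothetical fusion encoder: projects each modality and averages."""
    def __init__(self, text_dim, image_dim, struct_dim, embed_dim=128):
        super().__init__()
        self.proj = nn.ModuleList([
            nn.Linear(text_dim, embed_dim),
            nn.Linear(image_dim, embed_dim),
            nn.Linear(struct_dim, embed_dim),
        ])

    def forward(self, modalities):
        # Average the projected embeddings of the three modalities.
        return torch.stack([p(m) for p, m in zip(self.proj, modalities)]).mean(0)

def similarity_masking_loss(encoder, modalities, mask_idx):
    """Negative cosine similarity between the fused embeddings of the
    unmasked input and the input with one modality masked out."""
    target = encoder(modalities).detach()                   # unmasked view, no gradient
    masked = [m.clone() for m in modalities]
    masked[mask_idx] = torch.zeros_like(masked[mask_idx])   # mask one modality
    pred = encoder(masked)
    return -F.cosine_similarity(pred, target, dim=-1).mean()

# Usage: a random batch with toy text/image/structured feature sizes.
enc = ToyFusionEncoder(text_dim=32, image_dim=64, struct_dim=16)
batch = [torch.randn(8, d) for d in (32, 64, 16)]
loss = similarity_masking_loss(enc, batch, mask_idx=random.randrange(3))
loss.backward()
```

Masking by zeroing also suggests how the objective tolerates missing modalities at pretraining time: an absent modality can be treated the same way as a masked one.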
Aug-23-2023
- Genre:
- Research Report (0.64)
- Industry:
- Health & Medicine > Therapeutic Area (0.46)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning > Neural Networks > Deep Learning (1.00)
      - Natural Language > Large Language Model (0.93)
      - Representation & Reasoning (0.93)
      - Vision (0.94)
    - Data Science > Data Mining (0.93)
    - Information Management (1.00)