SPADE: Spatial Transcriptomics and Pathology Alignment Using a Mixture of Data Experts for an Expressive Latent Space

Redekop, Ekaterina, Pleasure, Mara, Wang, Zichen, Flores, Kimberly, Sisk, Anthony, Speier, William, Arnold, Corey W.

arXiv.org Artificial Intelligence 

The rapid growth of digital pathology and advances in self-supervised deep learning have enabled the development of foundational models for various pathology tasks across diverse diseases. While multimodal approaches integrating diverse data sources have emerged, a critical gap remains in the comprehensive integration of whole-slide images (WSIs) with spatial transcriptomics (ST), which is crucial for capturing molecular heterogeneity beyond standard hematoxylin & eosin (H&E) staining. We introduce SPADE, a foundation model that integrates histopathology with ST data to guide image representation learning within a unified framework, in effect creating an ST-informed latent space. Pre-trained on the comprehensive HEST-1k dataset, SPADE is evaluated on 20 downstream tasks, demonstrating significantly superior few-shot performance compared to baseline models and highlighting the benefits of integrating morphological and molecular information into one latent space.

These authors contributed equally to this work.

Introduction

High-resolution whole slide images (WSIs) have propelled the development of powerful deep learning foundation models in computational pathology, demonstrating robust performance across diverse tissue types and tasks [1, 2, 3, 4]. These models are typically trained using self-supervision, enabling learning from large unlabeled datasets and producing embeddings robust to institutional variations, including differences in staining procedures and other image-quality factors [5, 6, 7, 8]. By visually capturing cellular arrangement, WSIs enable the study of spatial organization and disorganization of cells in tissues, characterizations that are especially relevant in cancer research [9, 10]. In clinical settings, WSIs are commonly stained with hematoxylin & eosin (H&E), a two-color stain that highlights nuclei and cytoplasm but offers a limited view of molecular-level heterogeneity [11].
As tumor tissues are known to exhibit high variability within and across patients, deciphering heterogeneity at the molecular level is critical for improving deep learning applications that can more precisely inform diagnosis, treatment, and prognosis [12, 13]. While H&E provides crucial morphological insights, its inability to capture molecular heterogeneity limits its utility in fully characterizing tissue complexity. Spatial transcriptomics addresses this gap by providing spatially resolved gene expression data, offering additional molecular context for a given tissue specimen. Although both ST and H&E data have independently proven useful in various applications, their combined potential for creating a more comprehensive representation learning framework remains unexplored. To this end, we introduce SPADE, a vision-ST foundation model that uses a mixture of experts, each trained via contrastive learning, to unify ST data and H&E images and produce slide representations that encompass both modalities.
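The contrastive alignment each expert performs can be illustrated with a minimal sketch. The code below is not SPADE's implementation; it is a generic CLIP-style symmetric InfoNCE loss over paired H&E patch embeddings and ST spot embeddings, written in NumPy for clarity. The function name, temperature value, and embedding dimensions are illustrative assumptions, and the mixture-of-experts routing described in the paper is omitted.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def symmetric_contrastive_loss(img_emb, st_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE over paired patch/spot embeddings.

    Row i of img_emb and row i of st_emb are assumed to come from the same
    tissue location; matched pairs sit on the diagonal of the logit matrix.
    """
    img = l2_normalize(img_emb)
    st = l2_normalize(st_emb)
    logits = img @ st.T / temperature          # (N, N) cosine similarities
    idx = np.arange(len(img))                  # positives on the diagonal

    def xent_diag(lg):
        # numerically stable log-softmax per row, scored at the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the image->ST and ST->image directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

# Toy check: well-aligned pairs should incur a lower loss than random pairs.
rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 32))
loss_aligned = symmetric_contrastive_loss(
    shared + 0.01 * rng.normal(size=(8, 32)), shared)
loss_random = symmetric_contrastive_loss(
    rng.normal(size=(8, 32)), rng.normal(size=(8, 32)))
```

Minimizing this objective pulls each patch embedding toward the expression profile measured at the same spot while pushing it away from the other spots in the batch, which is one standard way such a shared morphology–transcriptomics latent space can be induced.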
