
Collaborating Authors

Blilie, Anders


Finding Holes: Pathologist Level Performance Using AI for Cribriform Morphology Detection in Prostate Cancer

Szolnoky, Kelvin, Blilie, Anders, Mulliqi, Nita, Tsuzuki, Toyonori, Samaratunga, Hemamali, Titus, Matteo, Ji, Xiaoyi, Boman, Sol Erika, Gudlaugsson, Einar, Kjosavik, Svein Reidar, Asenjo, José, Gambacorta, Marcello, Libretti, Paolo, Braun, Marcin, Kordek, Radzisław, Łowicki, Roman, Delahunt, Brett, Iczkowski, Kenneth A., van der Kwast, Theo, van Leenders, Geert J. L. H., Leite, Katia R. M., Pan, Chin-Chen, Janssen, Emiel Adrianus Maria, Eklund, Martin, Egevad, Lars, Kartasalo, Kimmo

arXiv.org Artificial Intelligence

Background: Cribriform morphology in prostate cancer is a histological feature that indicates poor prognosis and contraindicates active surveillance. However, it remains underreported and subject to significant interobserver variability amongst pathologists. We aimed to develop and validate an AI-based system to improve cribriform pattern detection. Methods: We created a deep learning model using an EfficientNetV2-S encoder with multiple instance learning for end-to-end whole-slide classification. The model was trained on 640 digitised prostate core needle biopsies from 430 patients, collected across three cohorts. It was validated internally (261 slides from 171 patients) and externally (266 slides, 104 patients from three independent cohorts). Internal validation cohorts included laboratories or scanners from the development set, while external cohorts used completely independent instruments and laboratories. Annotations were provided by three expert uropathologists with known high concordance. Additionally, we conducted an inter-rater analysis and compared the model's performance against nine expert uropathologists on 88 slides from the internal validation cohort. Results: The model showed strong internal validation performance (AUC: 0.97, 95% CI: 0.95-0.99; Cohen's kappa: 0.81, 95% CI: 0.72-0.89) and robust external validation (AUC: 0.90, 95% CI: 0.86-0.93; Cohen's kappa: 0.55, 95% CI: 0.45-0.64). In our inter-rater analysis, the model achieved the highest average agreement (Cohen's kappa: 0.66, 95% CI: 0.57-0.74), outperforming all nine pathologists whose Cohen's kappas ranged from 0.35 to 0.62. Conclusion: Our AI model demonstrates pathologist-level performance for cribriform morphology detection in prostate cancer. This approach could enhance diagnostic reliability, standardise reporting, and improve treatment decisions for prostate cancer patients.
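The inter-rater analysis above hinges on Cohen's kappa, which corrects raw agreement between two raters for the agreement expected by chance. A minimal sketch of the metric, using hypothetical slide-level labels (not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same set of items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the chance agreement implied by each rater's marginal
    label frequencies. Undefined (division by zero) when p_e == 1.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from the marginal label counts.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary cribriform calls (1 = present) from two raters:
kappa = cohens_kappa([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

With these toy labels the raters agree on 4 of 6 slides (p_o = 2/3) against a chance agreement of 0.5, giving kappa = 1/3; a kappa near 0.66, as reported for the model, indicates substantially better-than-chance concordance.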


Foundation Models -- A Panacea for Artificial Intelligence in Pathology?

Mulliqi, Nita, Blilie, Anders, Ji, Xiaoyi, Szolnoky, Kelvin, Olsson, Henrik, Boman, Sol Erika, Titus, Matteo, Gonzalez, Geraldine Martinez, Mielcarz, Julia Anna, Valkonen, Masi, Gudlaugsson, Einar, Kjosavik, Svein R., Asenjo, José, Gambacorta, Marcello, Libretti, Paolo, Braun, Marcin, Kordek, Radzisław, Łowicki, Roman, Hotakainen, Kristina, Väre, Päivi, Pedersen, Bodil Ginnerup, Sørensen, Karina Dalsgaard, Ulhøi, Benedicte Parm, Ruusuvuori, Pekka, Delahunt, Brett, Samaratunga, Hemamali, Tsuzuki, Toyonori, Janssen, Emilius A. M., Egevad, Lars, Eklund, Martin, Kartasalo, Kimmo

arXiv.org Artificial Intelligence

The role of artificial intelligence (AI) in pathology has evolved from aiding diagnostics to uncovering predictive morphological patterns in whole slide images (WSIs). Recently, foundation models (FMs) leveraging self-supervised pre-training have been widely advocated as a universal solution for diverse downstream tasks. However, open questions remain about their clinical applicability and generalization advantages over end-to-end learning using task-specific (TS) models. Here, we focused on AI with clinical-grade performance for prostate cancer diagnosis and Gleason grading. We present the largest validation of AI for this task, using over 100,000 core needle biopsies from 7,342 patients across 15 sites in 11 countries. We compared two FMs with a fully end-to-end TS model in a multiple instance learning framework. Our findings challenge assumptions that FMs universally outperform TS models. While FMs demonstrated utility in data-scarce scenarios, their performance converged with, and was in some cases surpassed by, TS models when sufficient labeled training data were available. Notably, extensive task-specific training markedly reduced clinically significant misgrading, misdiagnosis of challenging morphologies, and variability across different WSI scanners. Additionally, FMs used up to 35 times more energy than the TS model, raising concerns about their sustainability. Our results underscore that while FMs offer clear advantages for rapid prototyping and research, their role as a universal solution for clinically applicable medical AI remains uncertain. For high-stakes clinical applications, rigorous validation and consideration of task-specific training remain critically important. We advocate for integrating the strengths of FMs and end-to-end learning to achieve robust and resource-efficient AI pathology solutions fit for clinical use.
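In a multiple instance learning (MIL) framework like the one compared here, a slide is a bag of tile embeddings (from an FM or a TS encoder) that must be pooled into one slide-level representation. A minimal numpy sketch of attention-based MIL pooling in the style of Ilse et al., which is one common choice; the specific head used in the study is not specified here, and all array names and shapes are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, V, w):
    """Pool per-tile embeddings into one slide embedding.

    instances : (n_tiles, d) tile embeddings forming the bag.
    V         : (d, h) projection to a hidden attention space.
    w         : (h,)  attention scoring vector.
    Returns the attention-weighted slide embedding and the weights,
    so the weights can double as a heatmap over tiles.
    """
    scores = np.tanh(instances @ V) @ w      # one score per tile
    attn = softmax(scores)                   # weights sum to 1
    slide_embedding = attn @ instances       # weighted average of tiles
    return slide_embedding, attn

# Illustrative bag: 8 tiles with 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
V = rng.normal(size=(4, 3))
w = rng.normal(size=3)
slide_emb, attn = attention_mil_pool(X, V, w)
```

The slide embedding would then feed a small classifier head; swapping the encoder that produces `X` (FM features vs end-to-end TS features) while keeping this pooling fixed is the kind of controlled comparison the abstract describes.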


Physical Color Calibration of Digital Pathology Scanners for Robust Artificial Intelligence Assisted Cancer Diagnosis

Ji, Xiaoyi, Salmon, Richard, Mulliqi, Nita, Khan, Umair, Wang, Yinxi, Blilie, Anders, Olsson, Henrik, Pedersen, Bodil Ginnerup, Sørensen, Karina Dalsgaard, Ulhøi, Benedicte Parm, Kjosavik, Svein R., Janssen, Emilius A. M., Rantalainen, Mattias, Egevad, Lars, Ruusuvuori, Pekka, Eklund, Martin, Kartasalo, Kimmo

arXiv.org Artificial Intelligence

The potential of artificial intelligence (AI) in digital pathology is limited by technical inconsistencies in the production of whole slide images (WSIs), leading to degraded AI performance and posing a challenge for widespread clinical application as fine-tuning algorithms for each new site is impractical. Changes in the imaging workflow can also lead to compromised diagnoses and patient safety risks. We evaluated whether physical color calibration of scanners can standardize WSI appearance and enable robust AI performance. We employed a color calibration slide in four different laboratories and evaluated its impact on the performance of an AI system for prostate cancer diagnosis on 1,161 WSIs. Color standardization resulted in consistently improved AI model calibration and significant improvements in Gleason grading performance. The study demonstrates that physical color calibration provides a potential solution to the variation introduced by different scanners, making AI-based cancer diagnostics more reliable and applicable in clinical settings.
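The study's calibration is physical, using a reference slide in the scanner; a rough software analogue of the underlying idea (mapping each scanner's color statistics onto a shared reference) is Reinhard-style channel statistics matching. The sketch below is illustrative only and is not the paper's method; the reference means and standard deviations are assumed values:

```python
import numpy as np

def match_channel_stats(image, ref_mean, ref_std):
    """Shift and scale each color channel to match reference statistics.

    A simplified Reinhard-style normalization: for every channel,
    standardize to zero mean / unit variance, then rescale to the
    reference per-channel mean and standard deviation.
    """
    img = image.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        mu, sigma = img[..., c].mean(), img[..., c].std()
        out[..., c] = (img[..., c] - mu) / (sigma + 1e-8) * ref_std[c] + ref_mean[c]
    # Clamp back to valid 8-bit range before casting.
    return np.clip(out, 0, 255).astype(np.uint8)

# Illustrative tile with an assumed shared reference appearance.
tile = np.random.default_rng(1).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
normalized = match_channel_stats(tile, ref_mean=[128, 128, 128], ref_std=[20, 20, 20])
```

Physical calibration at acquisition time avoids a key weakness of such post-hoc normalization: software matching of global statistics cannot distinguish scanner-induced color shifts from genuine stain differences in the tissue.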