Ghelichkhan, Elham
A Comparison of Object Detection and Phrase Grounding Models in Chest X-ray Abnormality Localization using Eye-tracking Data
Ghelichkhan, Elham, Tasdizen, Tolga
ABSTRACT: Chest diseases rank among the most prevalent and dangerous global health issues. Object detection and phrase grounding deep learning models interpret complex radiology data to assist healthcare professionals in diagnosis. Object detection locates abnormalities for a fixed set of classes, while phrase grounding locates abnormalities described by free-text phrases. This paper investigates how text enhances abnormality localization in chest X-rays by comparing the performance and explainability of these two tasks. To establish an explainability benchmark, we propose an automatic pipeline that generates image regions for report sentences using radiologists' eye-tracking data.
Index Terms: Multi-Modal Learning, Localization, Eye-tracking Data, Data Generation, XAI
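The core step of such a pipeline is turning gaze fixations recorded while a radiologist dictates a sentence into an image region for that sentence. The sketch below is an illustrative assumption, not the paper's actual method: it builds a duration-weighted Gaussian gaze-density map, thresholds it at a fraction of its peak, and returns the bounding box of the retained pixels. The function name, `sigma`, and `thresh` values are all hypothetical choices.

```python
import numpy as np

def fixations_to_region(fixations, image_shape, sigma=30.0, thresh=0.5):
    """Convert gaze fixations into a bounding box (illustrative sketch).

    fixations  : iterable of (x, y, duration) tuples in pixel coordinates
    image_shape: (height, width) of the chest X-ray
    sigma      : spread of each fixation's Gaussian footprint (assumed)
    thresh     : fraction of the peak density kept in the region (assumed)
    Returns (x0, y0, x1, y1), the inclusive bounding box of the region.
    """
    h, w = image_shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros((h, w), dtype=float)
    # Accumulate a duration-weighted Gaussian for each fixation.
    for x, y, dur in fixations:
        density += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma**2))
    # Keep pixels whose density exceeds a fraction of the global maximum.
    mask = density >= thresh * density.max()
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    y0, y1 = np.where(rows)[0][[0, -1]]
    x0, x1 = np.where(cols)[0][[0, -1]]
    return int(x0), int(y0), int(x1), int(y1)
```

Pairing each sentence with the fixations recorded during its dictation interval, and applying this reduction, would yield sentence-level regions of the kind the benchmark needs.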
DISC: Latent Diffusion Models with Self-Distillation from Separated Conditions for Prostate Cancer Grading
Ho, Man M., Ghelichkhan, Elham, Chong, Yosep, Zhou, Yufei, Knudsen, Beatrice, Tasdizen, Tolga
Latent Diffusion Models (LDMs) can generate high-fidelity images from noise, offering a promising approach for augmenting histopathology images for training cancer grading models. While previous works successfully generated high-fidelity histopathology images using LDMs, the generation of image tiles to improve prostate cancer grading has not yet been explored. Additionally, LDMs face challenges in accurately generating admixtures of multiple cancer grades in a tile when conditioned by a tile mask. In this study, we train specific LDMs to generate synthetic tiles that contain multiple Gleason Grades (GGs) by leveraging pixel-wise annotations in input tiles. We introduce a novel framework named Self-Distillation from Separated Conditions (DISC) that generates GG patterns guided by GG masks. Finally, we deploy a training framework for pixel-level and slide-level prostate cancer grading, where synthetic tiles are effectively utilized to improve the cancer grading performance of existing models. As a result, this work surpasses previous works in two respects: 1) our LDMs enhanced with DISC produce tiles with more accurate GG patterns, and 2) our training scheme, incorporating synthetic data, significantly improves the generalization of the baseline model for prostate cancer grading, particularly in the challenging case of rare GG5, demonstrating the potential of generative models to enhance cancer grading when data is limited.
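The "separated conditions" idea starts from a simple preprocessing step: a tile mask containing an admixture of grades is split into one binary mask per grade, so each grade's region can condition generation on its own. The helper below illustrates only this mask-separation step under assumed conventions (integer grade labels 3–5 in a 2-D mask); the diffusion model and the distillation from the separated-condition outputs back to the multi-grade-conditioned model are not shown.

```python
import numpy as np

def separate_conditions(mask, grades=(3, 4, 5)):
    """Split a multi-grade tile mask into per-grade binary masks.

    mask   : 2-D integer array where each pixel holds a Gleason Grade label
             (label convention assumed for illustration)
    grades : grade labels to separate out
    Returns a dict mapping each grade to a uint8 binary mask of its region.
    """
    return {g: (mask == g).astype(np.uint8) for g in grades}
```

In a DISC-style setup, each binary mask would condition a separate generation pass, and the resulting outputs would serve as distillation targets for the model conditioned on the full multi-grade mask.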