AI-assisted mammograms cut risk of developing aggressive breast cancer

New Scientist

People who are screened for breast cancer by AI-supported radiologists are less likely to develop aggressive cancers before their next screening round than those who are screened by radiologists alone, raising hopes that AI-assisted screening could save lives. "This is the first randomised controlled trial on the use of AI in mammography screening," says Kristina Lång at Lund University in Sweden. The AI-supported approach involves using the software - which has been trained on more than 200,000 mammography scans from 10 countries - to rank the likelihood of cancer being present in mammograms on a scale of 1 to 10, based on visual patterns in the scans. The scans receiving a score of 1 to 9 are then assessed by one experienced radiologist, while scans receiving a score of 10 - indicating cancer is most likely to be present - are assessed by two experienced radiologists. An earlier study found that this approach could detect 29 per cent more cancers than standard screening, where each mammogram is assessed by two radiologists, without increasing the rate of false detections - where a growth is flagged but follow-up tests reveal it isn't actually there or wouldn't go on to cause problems.
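
The routing rule described in the trial summary can be expressed in a few lines; the sketch below is a hypothetical illustration of that triage logic (the score cut-off of 10 and the reader counts come from the description above, everything else is assumed):

```python
def assign_readers(ai_score: int) -> int:
    """Route a mammogram to readers based on a hypothetical 1-10 AI risk score.

    Scores 1-9 go to a single experienced radiologist; a score of 10
    (cancer most likely) goes to double reading, mirroring the protocol
    described in the trial summary above.
    """
    if not 1 <= ai_score <= 10:
        raise ValueError("AI risk score must be between 1 and 10")
    return 2 if ai_score == 10 else 1

# Example: a batch of screening exams with hypothetical AI scores
for score in [3, 7, 10]:
    print(f"score={score} -> {assign_readers(score)} reader(s)")
```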


Analysis of Incursive Breast Cancer in Mammograms Using YOLO, Explainability, and Domain Adaptation

Adhikari, Jayan, Joshi, Prativa, Baral, Susish

arXiv.org Artificial Intelligence

Abstract--Deep learning models for breast cancer detection from mammographic images have significant reliability problems when presented with out-of-distribution (OOD) inputs such as other imaging modalities (CT, MRI, X-ray) or equipment variations, leading to unreliable detection and misdiagnosis. Our strategy establishes an in-domain gallery and uses cosine similarity to strictly reject non-mammographic inputs before processing, ensuring that only domain-associated images feed the detection pipeline. The OOD detection component achieves 99.77% overall accuracy, with 100% accuracy on OOD test sets, effectively eliminating irrelevant imaging modalities. ResNet50 was selected as the optimal backbone after a search over 12 CNN architectures. The joint framework unites OOD robustness with high detection performance (mAP@0.5: …). Experimental validation establishes that OOD filtering significantly improves system reliability by preventing false alarms on out-of-distribution inputs while maintaining high detection accuracy on mammographic data. The present study offers a foundation for deploying reliable AI-based breast cancer detection systems in diverse clinical environments with inherent data heterogeneity. A global health concern, breast cancer is the second leading cause of cancer-related mortality in women and was recorded as the most frequently diagnosed cancer in the world in 2020 [1]. According to the World Health Organization, cancers account for 626,700 deaths among women globally, with breast cancer among the leading causes [2]. If diagnosed at an early stage, the survival rate is likely to be high and the treatment cost is reduced [3]. Studies have found that 30% of breast cancers are diagnosed when the size of the mass is 30 mm.
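
The gallery-based OOD filter described in the abstract amounts to a nearest-neighbour cosine-similarity check against known mammogram embeddings; a minimal sketch, assuming a generic feature extractor and an illustrative threshold (not the authors' released code):

```python
import numpy as np

def cosine_similarity(query: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a gallery matrix."""
    q = query / (np.linalg.norm(query) + 1e-12)
    g = gallery / (np.linalg.norm(gallery, axis=1, keepdims=True) + 1e-12)
    return g @ q

def is_in_domain(query_feat: np.ndarray,
                 gallery_feats: np.ndarray,
                 threshold: float = 0.85) -> bool:
    """Accept an image only if its embedding is close enough to the in-domain
    (mammography) gallery; otherwise reject it as OOD.
    The 0.85 threshold is a placeholder to be tuned on validation data."""
    return float(cosine_similarity(query_feat, gallery_feats).max()) >= threshold

# Toy example with random vectors standing in for backbone embeddings
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 512))   # embeddings of known mammograms
query = rng.normal(size=512)            # embedding of an incoming image
print("in-domain" if is_in_domain(query, gallery) else "rejected as OOD")
```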


MammoRGB: Dual-View Mammogram Synthesis Using Denoising Diffusion Probabilistic Models

Garza-Abdala, Jorge Alberto, Fumagal-González, Gerardo A., Avendano, Daly, Cardona, Servando, Hussain, Sadam, de Avila-Armenta, Eduardo, Toscano-Martínez, Jasiel H., Gurmendi, Diana S. M. Rosales, Pedro-Pérez, Alma A., Tamez-Pena, Jose Gerardo

arXiv.org Artificial Intelligence

Purpose: This study aims to develop and evaluate a three-channel denoising diffusion probabilistic model (DDPM) for synthesizing dual-view mammograms of a single breast and to assess the impact of channel representations on image fidelity and cross-view consistency. Materials and Methods: A pretrained three-channel DDPM, sourced from Hugging Face, was fine-tuned on a private dataset of 11,020 screening mammograms to generate paired craniocaudal (CC) and mediolateral oblique (MLO) views. Three third-channel encodings of the CC and MLO views were evaluated: sum, absolute difference, and zero channel. Each model produced 500 synthetic image pairs. Quantitative assessment involved breast mask segmentation using Intersection over Union (IoU) and Dice Similarity Coefficient (DSC), with distributional comparisons against 2,500 real pairs using Earth Mover's Distance (EMD) and Kolmogorov-Smirnov (KS) tests. Qualitative evaluation included a visual Turing test by a non-expert radiologist to assess cross-view consistency and artifacts. Results: Synthetic mammograms showed IoU and DSC distributions comparable to real images, with EMD and KS values of 0.020 and 0.077, respectively. Models using the sum or absolute-difference encodings outperformed the zero-channel variant in IoU and DSC (p < 0.001), though distributions remained broadly similar. Generated CC and MLO views maintained cross-view consistency, with 6 to 8 percent of synthetic images exhibiting artifacts consistent with those in the training data. Conclusion: Three-channel DDPMs can generate realistic and anatomically consistent dual-view mammograms with promising applications in dataset augmentation.
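
A minimal sketch of the three third-channel encodings compared in the study (sum, absolute difference, zero), assuming the CC and MLO views are already resized to a common shape and scaled to [0, 1]; this illustrates the channel layout only, not the authors' DDPM pipeline:

```python
import numpy as np

def make_three_channel(cc: np.ndarray, mlo: np.ndarray, mode: str) -> np.ndarray:
    """Stack a CC/MLO pair into a 3-channel image for the diffusion model.

    mode selects the third channel: 'sum', 'absdiff', or 'zero'.
    Inputs are float arrays scaled to [0, 1] with identical shapes.
    """
    if cc.shape != mlo.shape:
        raise ValueError("CC and MLO views must share the same shape")
    third = {
        "sum": np.clip(cc + mlo, 0.0, 1.0),
        "absdiff": np.abs(cc - mlo),
        "zero": np.zeros_like(cc),
    }[mode]
    return np.stack([cc, mlo, third], axis=-1)  # (H, W, 3)

cc = np.random.rand(256, 256)
mlo = np.random.rand(256, 256)
print(make_three_channel(cc, mlo, "absdiff").shape)  # (256, 256, 3)
```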


Breast density in MRI: an AI-based quantification and relationship to assessment in mammography

Chen, Yaqian, Li, Lin, Gu, Hanxue, Dong, Haoyu, Nguyen, Derek L., Kirk, Allan D., Mazurowski, Maciej A., Hwang, E. Shelley

arXiv.org Artificial Intelligence

Mammographic breast density is a well-established risk factor for breast cancer. Recently there has been interest in breast MRI as an adjunct to mammography, as this modality provides an orthogonal and highly quantitative assessment of breast tissue. However, its 3D nature poses analytic challenges related to delineating and aggregating complex structures across slices. Here, we applied an in-house machine-learning algorithm to assess breast density on normal breasts in three MRI datasets. Breast density was consistent across different datasets (0.104 - 0.114). Analysis across different age groups also demonstrated strong consistency across datasets and confirmed a trend of decreasing density with age as reported in previous studies. MR breast density was correlated with mammographic breast density, although some notable differences suggest that certain breast density components are captured only on MRI. Future work will determine how to integrate MR breast density with current tools to improve future breast cancer risk prediction.
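
Volumetric MR breast density is commonly summarised as the fraction of breast voxels labelled as fibroglandular tissue; a minimal sketch under that assumption, using hypothetical binary masks rather than the in-house segmentation algorithm:

```python
import numpy as np

def mr_breast_density(breast_mask: np.ndarray, fgt_mask: np.ndarray) -> float:
    """Fraction of fibroglandular-tissue voxels within the breast mask.

    Both masks are boolean 3D arrays (slices x height x width); aggregation
    across slices happens implicitly by counting voxels over the volume.
    """
    breast_voxels = int(breast_mask.sum())
    if breast_voxels == 0:
        raise ValueError("Breast mask is empty")
    fgt_voxels = int((fgt_mask & breast_mask).sum())
    return fgt_voxels / breast_voxels

# Toy example: roughly 10% of breast voxels labelled as dense tissue
breast = np.ones((32, 64, 64), dtype=bool)
fgt = np.zeros_like(breast)
fgt[:, :6, :] = True
print(round(mr_breast_density(breast, fgt), 3))
```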


Evaluating Generative AI as an Educational Tool for Radiology Resident Report Drafting

Verdone, Antonio, Cardall, Aidan, Siddiqui, Fardeen, Nashawaty, Motaz, Rigau, Danielle, Kwon, Youngjoon, Yousef, Mira, Patel, Shalin, Kieturakis, Alex, Kim, Eric, Heacock, Laura, Reig, Beatriu, Shen, Yiqiu

arXiv.org Artificial Intelligence

Objective: Radiology residents require timely, personalized feedback to develop accurate image analysis and reporting skills. Increasing clinical workload often limits attendings' ability to provide guidance. This study evaluates a HIPAA-compliant GPT-4o system that delivers automated feedback on breast imaging reports drafted by residents in real clinical settings. Methods: We analyzed 5,000 resident-attending report pairs from routine practice at a multi-site U.S. health system. GPT-4o was prompted with clinical instructions to identify common errors and provide feedback. A reader study using 100 report pairs was conducted. Four attending radiologists and four residents independently reviewed each pair, determined whether predefined error types were present, and rated GPT-4o's feedback as helpful or not. Agreement between GPT and readers was assessed using percent match. Inter-reader reliability was measured with Krippendorff's alpha. Educational value was measured as the proportion of cases rated helpful. Results: Three common error types were identified: (1) omission or addition of key findings, (2) incorrect use or omission of technical descriptors, and (3) final assessment inconsistent with findings. GPT-4o showed strong agreement with attending consensus: 90.5%, 78.3%, and 90.4% across error types. Inter-reader reliability showed moderate variability (α = 0.767, 0.595, 0.567), and replacing a human reader with GPT-4o did not significantly affect agreement (Δ = -0.004 to 0.002). GPT's feedback was rated helpful in most cases: 89.8%, 83.0%, and 92.0%. Discussion: ChatGPT-4o can reliably identify key educational errors. It may serve as a scalable tool to support radiology education.
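
The percent-match agreement statistic is simple to reproduce; a minimal sketch over hypothetical binary error flags for a single error type (Krippendorff's alpha, also reported, would come from a dedicated reliability library):

```python
def percent_match(gpt_labels: list[bool], consensus_labels: list[bool]) -> float:
    """Share of report pairs where GPT-4o's error flag matches the
    attending consensus for one error type."""
    if len(gpt_labels) != len(consensus_labels):
        raise ValueError("Label lists must be the same length")
    matches = sum(g == c for g, c in zip(gpt_labels, consensus_labels))
    return 100.0 * matches / len(gpt_labels)

# Hypothetical flags for "omission or addition of key findings"
gpt = [True, False, False, True, False]
consensus = [True, False, True, True, False]
print(f"{percent_match(gpt, consensus):.1f}% agreement")
```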


Breast Cancer VLMs: Clinically Practical Vision-Language Train-Inference Models

Zheng, Shunjie-Fabian, Lee, Hyeonjun, Kooi, Thijs, Diba, Ali

arXiv.org Artificial Intelligence

Breast cancer remains the most commonly diagnosed malignancy among women in the developed world. Early detection through mammography screening plays a pivotal role in reducing mortality rates. While computer-aided diagnosis (CAD) systems have shown promise in assisting radiologists, existing approaches face critical limitations in clinical deployment - particularly in handling the nuanced interpretation of multi-modal data and in practical feasibility, due to their requirement for prior clinical history. This study introduces a novel framework that synergistically combines visual features from 2D mammograms with structured textual descriptors derived from easily accessible clinical metadata and synthesized radiological reports through innovative tokenization modules. Our proposed methods demonstrate that strategic integration of convolutional neural networks (ConvNets) with language representations achieves superior performance to vision-transformer-based models while handling high-resolution images and enabling practical deployment across diverse populations. Evaluated on multi-national cohort screening mammograms, our multi-modal approach achieves superior performance in cancer detection and calcification identification compared to unimodal baselines, with particular improvements. The proposed method establishes a new paradigm for developing clinically viable VLM-based CAD systems that effectively leverage imaging data and contextual patient information through effective fusion mechanisms.
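
A minimal PyTorch sketch of the kind of fusion the abstract describes - ConvNet image features combined with an embedding of tokenized clinical descriptors; all module choices and sizes here are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SimpleMammoVLM(nn.Module):
    """Toy fusion model: ConvNet image features + embedded text descriptors."""
    def __init__(self, vocab_size: int = 1000, text_dim: int = 128, n_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                                # 512-d image features
        self.backbone = backbone
        self.text_embed = nn.EmbeddingBag(vocab_size, text_dim)   # bag of clinical tokens
        self.classifier = nn.Linear(512 + text_dim, n_classes)

    def forward(self, image: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        img_feat = self.backbone(image)        # (B, 512)
        txt_feat = self.text_embed(token_ids)  # (B, text_dim)
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))

model = SimpleMammoVLM()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2, 16)))
print(logits.shape)  # torch.Size([2, 2])
```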


MammoDINO: Anatomically Aware Self-Supervision for Mammographic Images

Zhou, Sicheng, Wu, Lei, Xiao, Cao, Bhatia, Parminder, Kass-Hout, Taha

arXiv.org Artificial Intelligence

Self-supervised learning (SSL) has transformed vision encoder training in general domains but remains underutilized in medical imaging due to limited data and domain-specific biases. We present MammoDINO, a novel SSL framework for mammography, pretrained on 1.4 million mammographic images. To capture clinically meaningful features, we introduce a breast-tissue-aware data augmentation sampler for both image-level and patch-level supervision and a cross-slice contrastive learning objective that brings 3D digital breast tomosynthesis (DBT) structure into 2D pretraining. MammoDINO achieves state-of-the-art performance on multiple breast cancer screening tasks and generalizes well across five benchmark datasets. It offers a scalable, annotation-free foundation for multipurpose computer-aided diagnosis (CAD) tools for mammography, helping reduce radiologists' workload and improve diagnostic efficiency in breast cancer screening.
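
The breast-tissue-aware augmentation sampler can be illustrated as rejection sampling of crops that contain enough tissue; a generic sketch under that assumption (the mask source, crop size, and tissue threshold are hypothetical, not MammoDINO's implementation):

```python
import numpy as np

def tissue_aware_crop(image: np.ndarray, breast_mask: np.ndarray,
                      size: int = 224, min_tissue: float = 0.3,
                      max_tries: int = 50, rng=None) -> np.ndarray:
    """Sample a random crop containing at least `min_tissue` fraction of
    breast tissue, so SSL views are not dominated by empty background."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    for _ in range(max_tries):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        if breast_mask[y:y + size, x:x + size].mean() >= min_tissue:
            return image[y:y + size, x:x + size]
    return image[:size, :size]  # fallback if no tissue-rich crop was found

img = np.random.rand(1024, 832)
mask = np.zeros_like(img, dtype=bool)
mask[:, :400] = True                    # breast occupies the left part of the image
print(tissue_aware_crop(img, mask).shape)
```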


The LongiMam model for improved breast cancer risk prediction using longitudinal mammograms

Rakez, Manel, Louis, Thomas, Guillaumin, Julien, Chamming's, Foucauld, Fillard, Pierre, Amadeo, Brice, Rondeau, Virginie

arXiv.org Artificial Intelligence

Risk-adapted breast cancer screening requires robust models that leverage longitudinal imaging data. Most current deep learning models use single or limited prior mammograms and lack adaptation for real-world settings marked by imbalanced outcome distributions and heterogeneous follow-up. We developed LongiMam, an end-to-end deep learning model that integrates the current mammogram and up to four priors. LongiMam combines a convolutional and a recurrent neural network to capture spatial and temporal patterns predictive of breast cancer. The model was trained and evaluated using a large, population-based screening dataset with a disproportionate case-to-control ratio typical of clinical screening. Across several scenarios that varied the number and composition of prior exams, LongiMam consistently improved prediction when prior mammograms were included. Combining prior and current visits outperformed single-visit models, while priors alone performed less well, highlighting the importance of combining historical and recent information. Subgroup analyses confirmed the model's efficacy across key risk groups, including women with dense breasts and those aged 55 years or older. Moreover, the model performed best in women with observed changes in mammographic density over time. These findings demonstrate that longitudinal modeling enhances breast cancer prediction and support the use of repeated mammograms to refine risk stratification in screening programs. LongiMam is publicly available as open-source software.
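
A minimal PyTorch sketch of the CNN-plus-RNN design described above - a shared ConvNet encodes each visit and a GRU aggregates the visit sequence; the backbone, layer sizes, and pooling are illustrative assumptions, not LongiMam's published configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LongitudinalRiskModel(nn.Module):
    """Toy longitudinal model: shared CNN per visit, GRU across visits."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                    # 512-d per-visit features
        self.backbone = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)               # cancer-risk logit

    def forward(self, visits: torch.Tensor) -> torch.Tensor:
        # visits: (B, T, 3, H, W) with up to 5 exams (current + 4 priors)
        b, t = visits.shape[:2]
        feats = self.backbone(visits.flatten(0, 1)).view(b, t, -1)
        _, last_hidden = self.gru(feats)
        return self.head(last_hidden[-1])              # (B, 1)

model = LongitudinalRiskModel()
print(model(torch.randn(2, 5, 3, 224, 224)).shape)     # torch.Size([2, 1])
```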


GLAM: Geometry-Guided Local Alignment for Multi-View VLP in Mammography

Du, Yuexi, Chen, Lihui, Dvornek, Nicha C.

arXiv.org Artificial Intelligence

Mammography screening is an essential tool for early detection of breast cancer. The speed and accuracy of mammography interpretation have the potential to be improved with deep learning methods. However, the development of a foundation vision-language model (VLM) is hindered by limited data and domain differences between natural and medical images. Existing mammography VLMs, adapted from natural images, often ignore domain-specific characteristics, such as multi-view relationships in mammography. Unlike radiologists, who analyze both views together to exploit ipsilateral correspondence, current methods treat them as independent images or do not properly model multi-view correspondence, losing critical geometric context and resulting in suboptimal predictions. We propose GLAM: Global and Local Alignment for Multi-view mammography, a VLM pretraining approach that uses geometry guidance. By leveraging prior knowledge about the multi-view imaging process of mammograms, our model learns local cross-view alignments and fine-grained local features through joint global and local, visual-visual, and visual-language contrastive learning. Pretrained on EMBED [14], one of the largest open mammography datasets, our model outperforms baselines across multiple datasets under different settings.
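
The global visual-language alignment component can be illustrated with a symmetric contrastive loss over paired mammogram and report embeddings; this generic CLIP-style sketch stands in for only one term of GLAM's joint objective, which also includes local and visual-visual alignment:

```python
import torch
import torch.nn.functional as F

def clip_style_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric image-text contrastive loss: matched pairs sit on the diagonal
    of the similarity matrix and are pulled together in both the image-to-text
    and text-to-image directions."""
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    logits = img_emb @ txt_emb.t() / tau
    targets = torch.arange(img_emb.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy batch of 8 paired image/text embeddings
print(float(clip_style_loss(torch.randn(8, 256), torch.randn(8, 256))))
```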


Cross-Attention Multimodal Fusion for Breast Cancer Diagnosis: Integrating Mammography and Clinical Data with Explainability

Nantogmah, Muhaisin Tiyumba, Alhassan, Abdul-Barik, Alhassan, Salamudeen

arXiv.org Artificial Intelligence

A precise assessment of the risk posed by breast lesions can greatly reduce that risk and assist physicians in choosing the best course of action. To categorise breast lesions, most current computer-aided systems use only characteristics from mammograms. Although this approach is practical, it does not fully utilise the valuable information in clinical reports to attain the best results. Compared with using mammography alone, do clinical features greatly enhance the categorisation of breast lesions? How can clinical features and mammograms be combined most effectively? In what ways can explainable AI approaches improve the interpretability and reliability of models used to diagnose breast cancer? To answer these fundamental questions, a comprehensive investigation is needed. To integrate mammography and categorical clinical characteristics, this study examines a number of multimodal deep networks based on feature concatenation, co-attention, and cross-attention. The model achieved an AUC-ROC of 0.98, accuracy of 0.96, F1-score of 0.94, precision of 0.92, and recall of 0.95 when tested on publicly accessible datasets (TCGA and CBIS-DDSM).
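
A minimal PyTorch sketch of cross-attention fusion in the spirit of the abstract, where mammogram feature tokens attend to embedded clinical attributes; dimensions, token counts, and module choices are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Image tokens query clinical-feature tokens via multi-head cross-attention."""
    def __init__(self, dim: int = 256, heads: int = 4, n_classes: int = 2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, img_tokens: torch.Tensor, clin_tokens: torch.Tensor) -> torch.Tensor:
        # img_tokens: (B, N_img, dim), clin_tokens: (B, N_clin, dim)
        fused, _ = self.attn(query=img_tokens, key=clin_tokens, value=clin_tokens)
        fused = self.norm(img_tokens + fused)      # residual connection
        return self.head(fused.mean(dim=1))        # pooled logits (B, n_classes)

model = CrossAttentionFusion()
print(model(torch.randn(2, 49, 256), torch.randn(2, 8, 256)).shape)  # torch.Size([2, 2])
```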