BUSTR: Breast Ultrasound Text Reporting with a Descriptor-Aware Vision-Language Model
Rawa Mohammed, Mina Attin, Bryar Shareef
arXiv.org Artificial Intelligence
Automated radiology report generation (RRG) for breast ultrasound (BUS) is limited by the lack of paired image-report datasets and the risk of hallucinations from large language models. We propose BUSTR, a multitask vision-language framework that generates BUS reports without requiring paired image-report supervision. BUSTR constructs reports from structured descriptors (e.g., BI-RADS, pathology, histology) and radiomics features, learns descriptor-aware visual representations with a multi-head Swin encoder trained using a multitask loss over dataset-specific descriptor sets, and aligns visual and textual tokens via a dual-level objective that combines token-level cross-entropy with a cosine-similarity alignment loss between input and output representations. We evaluate BUSTR on two public BUS datasets, BrEaST and BUS-BRA, which differ in size and available descriptors. Across both datasets, BUSTR consistently improves standard natural language generation metrics and clinical efficacy metrics, particularly for key targets such as BI-RADS category and pathology. Our results show that this descriptor-aware vision model, trained with a combined token-level and alignment loss, improves both automatic report metrics and clinical efficacy without requiring paired image-report data. The source code can be found at https://github.com/AAR-UNLV/BUSTR.
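The dual-level objective described in the abstract, token-level cross-entropy combined with a cosine-similarity alignment loss between representations, can be sketched in plain Python. This is a minimal illustration only: the function names, the single-vector representations, and the weighting factor `lam` are assumptions for exposition, since the abstract does not give the exact formulation.

```python
import math

def cross_entropy(logits, target_idx):
    # Softmax cross-entropy for a single token position (numerically stabilized).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return -math.log(exps[target_idx] / z)

def cosine_alignment_loss(u, v):
    # 1 - cosine similarity between two representation vectors:
    # zero when the vectors point the same way, larger as they diverge.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def dual_level_loss(logits_seq, targets, vis_repr, txt_repr, lam=0.5):
    # Token-level CE averaged over the sequence, plus a weighted alignment
    # term between the visual and textual representations.
    # `lam` is a hypothetical trade-off weight, not a value from the paper.
    ce = sum(cross_entropy(l, t) for l, t in zip(logits_seq, targets)) / len(targets)
    return ce + lam * cosine_alignment_loss(vis_repr, txt_repr)
```

In this sketch, aligned representations contribute nothing to the loss, so gradient pressure comes only from the token-level term; misaligned representations add a penalty scaled by `lam`.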
Nov-27-2025
- Country:
  - Europe
    - Denmark > Central Jutland > Aarhus (0.04)
    - Switzerland > Basel-City > Basel (0.04)
  - North America > United States
    - Nevada (0.04)
    - New Mexico > Bernalillo County > Albuquerque (0.04)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Health & Medicine
- Diagnostic Medicine > Imaging (1.00)
- Therapeutic Area > Oncology (1.00)
- Technology: