Breast density in MRI: an AI-based quantification and relationship to assessment in mammography
Chen, Yaqian, Li, Lin, Gu, Hanxue, Dong, Haoyu, Nguyen, Derek L., Kirk, Allan D., Mazurowski, Maciej A., Hwang, E. Shelley
Mammographic breast density is a well-established risk factor for breast cancer. Recently there has been interest in breast MRI as an adjunct to mammography, as this modality provides an orthogonal and highly quantitative assessment of breast tissue. However, its 3D nature poses analytic challenges related to delineating and aggregating complex structures across slices. Here, we applied an in-house machine-learning algorithm to assess breast density on normal breasts in three MRI datasets. Breast density was consistent across the three datasets (0.104–0.114). Analysis across different age groups also demonstrated strong consistency across datasets and confirmed a trend of decreasing density with age, as reported in previous studies. MR breast density was correlated with mammographic breast density, although some notable differences suggest that certain breast density components are captured only on MRI. Future work will determine how to integrate MR breast density with current tools to improve breast cancer risk prediction.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.68)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
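At its core, the volume-level density measure described in the abstract above reduces to a voxel ratio: fibroglandular-tissue voxels over total breast voxels, aggregated across slices. A minimal sketch under that assumption (the function name, mask layout, and aggregation are ours, not the authors' pipeline):

```python
# Hypothetical sketch of MR breast density as a voxel ratio. Masks are
# nested lists of 0/1 per slice; a real pipeline would use 3D
# segmentation outputs from the ML model.

def mr_breast_density(breast_masks, fgt_masks):
    """Density = fibroglandular voxels / total breast voxels."""
    breast_voxels = sum(row.count(1) for m in breast_masks for row in m)
    fgt_voxels = sum(row.count(1) for m in fgt_masks for row in m)
    if breast_voxels == 0:
        raise ValueError("empty breast mask")
    return fgt_voxels / breast_voxels

# Toy two-slice volume: 8 breast voxels, 1 fibroglandular voxel.
breast = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
fgt = [[[0, 1], [0, 0]], [[0, 0], [0, 0]]]
print(mr_breast_density(breast, fgt))  # 0.125
```

The reported range (0.104–0.114) is consistent with a ratio of this kind falling in [0, 1].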
A Density-Informed Multimodal Artificial Intelligence Framework for Improving Breast Cancer Detection Across All Breast Densities
Kakileti, Siva Teja, Govindaraju, Bharath, Sampangi, Sudhakar, Manjunath, Geetha
Mammography, the current standard for breast cancer screening, has reduced sensitivity in women with dense breast tissue, contributing to missed or delayed diagnoses. Thermalytix, an AI-based thermal imaging modality, captures functional vascular and metabolic cues that may complement mammographic structural data. This study investigates whether a breast density-informed multi-modal AI framework can improve cancer detection by dynamically selecting the appropriate imaging modality based on breast tissue composition. A total of 324 women underwent both mammography and thermal imaging. Mammography images were analyzed using a multi-view deep learning model, while Thermalytix assessed thermal images through vascular and thermal radiomics. The proposed framework utilized mammography AI for fatty breasts and Thermalytix AI for dense breasts, optimizing predictions based on tissue type. This multi-modal AI framework achieved a sensitivity of 94.55% (95% CI: 88.54-100) and specificity of 79.93% (95% CI: 75.14-84.71), outperforming standalone mammography AI (sensitivity 81.82%, specificity 86.25%) and Thermalytix AI (sensitivity 92.73%, specificity 75.46%). Importantly, the sensitivity of the mammography AI dropped markedly in dense breasts (67.86%) versus fatty breasts (96.30%), whereas Thermalytix AI maintained high and consistent sensitivity in both (92.59% and 92.86%, respectively). This demonstrates that a density-informed multi-modal AI framework can overcome key limitations of unimodal screening and deliver high performance across diverse breast compositions. The proposed framework is interpretable, low-cost, and easily deployable, offering a practical path to improving breast cancer screening outcomes in both high-resource and resource-limited settings.
- Asia > India > Karnataka > Bengaluru (0.04)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- Europe > Switzerland (0.04)
- (3 more...)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
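The density-informed routing described above can be illustrated in a few lines. This is a sketch of the control flow only, assuming BI-RADS-style density categories (A/B treated as fatty, C/D as dense); the category mapping and function names are our assumptions, not the study's code:

```python
# Illustrative modality router: fatty breasts go to the mammography AI
# score, dense breasts to the thermal (Thermalytix-style) AI score.

def density_informed_score(density_category, mammo_score, thermal_score):
    """Pick the modality-specific malignancy score by breast density."""
    dense = density_category.upper() in {"C", "D"}
    return thermal_score if dense else mammo_score

print(density_informed_score("B", 0.12, 0.80))  # 0.12 (fatty -> mammography AI)
print(density_informed_score("D", 0.12, 0.80))  # 0.8  (dense -> thermal AI)
```

The reported gains come precisely from this selection: each case is scored by the modality that retains sensitivity for its tissue composition.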
MammoDINO: Anatomically Aware Self-Supervision for Mammographic Images
Zhou, Sicheng, Wu, Lei, Xiao, Cao, Bhatia, Parminder, Kass-Hout, Taha
Self-supervised learning (SSL) has transformed vision encoder training in general domains but remains underutilized in medical imaging due to limited data and domain-specific biases. We present MammoDINO, a novel SSL framework for mammography, pretrained on 1.4 million mammographic images. To capture clinically meaningful features, we introduce a breast tissue-aware data augmentation sampler for both image-level and patch-level supervision, and a cross-slice contrastive learning objective that incorporates 3D digital breast tomosynthesis (DBT) structure into 2D pretraining. MammoDINO achieves state-of-the-art performance on multiple breast cancer screening tasks and generalizes well across five benchmark datasets. It offers a scalable, annotation-free foundation for multipurpose computer-aided diagnosis (CAD) tools for mammography, helping reduce radiologists' workload and improve diagnostic efficiency in breast cancer screening.
- North America > United States (0.14)
- North America > Canada > Ontario > Toronto (0.04)
- Europe (0.04)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (0.80)
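The cross-slice objective above rests on a simple observation: nearby DBT slices depict the same anatomy, so they can serve as positive pairs for 2D contrastive pretraining. A hedged sketch of one possible pairing scheme (the offset-based sampler is our assumption, not MammoDINO's exact implementation):

```python
# Enumerate (anchor, positive) slice-index pairs within a small offset,
# treating neighboring slices of a DBT stack as contrastive positives.

def cross_slice_pairs(num_slices, max_offset=1):
    """Yield index pairs (i, j) with 0 < j - i <= max_offset."""
    pairs = []
    for i in range(num_slices):
        for d in range(1, max_offset + 1):
            if i + d < num_slices:
                pairs.append((i, i + d))
    return pairs

print(cross_slice_pairs(4))  # [(0, 1), (1, 2), (2, 3)]
```

A training loop would pull embeddings of each pair together while pushing apart slices from unrelated volumes.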
Mediator-Guided Multi-Agent Collaboration among Open-Source Models for Medical Decision-Making
Chen, Kaitao, Liu, Mianxin, Zong, Daoming, Ding, Chaoyue, Rui, Shaohao, Jiang, Yankai, Zhou, Mu, Wang, Xiaosong
Complex medical decision-making involves cooperative workflows operated by different clinicians. Designing AI multi-agent systems can expedite and augment human-level clinical decision-making. Existing multi-agent research primarily focuses on language-only tasks, yet its extension to multimodal scenarios remains challenging. A blind combination of diverse vision-language models (VLMs) can amplify erroneous outcome interpretations. Compared to large language models (LLMs) of comparable size, VLMs are generally less capable at instruction following and, importantly, at self-reflection. This disparity largely constrains VLMs' ability to operate in cooperative workflows. In this study, we propose MedOrch, a mediator-guided multi-agent collaboration framework for medical multimodal decision-making. MedOrch employs an LLM-based mediator agent that enables multiple VLM-based expert agents to exchange and reflect on their outputs toward collaboration. We utilize multiple open-source general-purpose and domain-specific VLMs instead of costly GPT-series models, revealing the strength of heterogeneous models. We show that the collaboration among distinct VLM-based agents can surpass the capabilities of any individual agent. We validate our approach on five medical vision question answering benchmarks, demonstrating superior collaboration performance without model training. Our findings underscore the value of mediator-guided multi-agent collaboration in advancing medical multimodal intelligence.
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
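The control flow of mediator-guided aggregation can be sketched minimally. In MedOrch the mediator is itself an LLM that drives discussion and reflection among the expert agents; the confidence-weighted vote below is only an illustrative stand-in for that aggregation step:

```python
# Minimal mediator sketch: each expert agent returns (answer, confidence);
# the mediator picks the answer with the highest total confidence mass.
from collections import defaultdict

def mediate(expert_outputs):
    """expert_outputs: list of (answer, confidence) pairs."""
    scores = defaultdict(float)
    for answer, conf in expert_outputs:
        scores[answer] += conf
    return max(scores, key=scores.get)

# Two weaker agents agreeing can outvote one confident dissenter.
print(mediate([("A", 0.6), ("B", 0.9), ("A", 0.5)]))  # A
```

The paper's point is that routing this exchange through a stronger LLM mediator, rather than naive voting, lets weaker VLM agents reflect on and revise their outputs.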
Breast cancer screenings may decline for women who receive false-positive test results, says study
High rates of false-positive test results may be keeping women from sticking to recommended mammogram screenings for breast cancer, a new study has found. Researchers from UC Davis Comprehensive Cancer Center in Sacramento, California, reviewed more than 3.5 million screening mammograms performed among more than one million women between 2005 and 2017. Women who received a true-negative result were more likely to return for future screenings, with a 77% compliance rate. By comparison, among those who received a false positive, only 61% returned for another mammogram within six months, and 67% returned for a recommended biopsy. The women, who ranged in age from 40 to 73, had not previously received a breast cancer diagnosis.
- North America > United States > California > Sacramento County > Sacramento (0.25)
- North America > United States > California > Orange County > Newport Beach (0.05)
Attention-Guided Erasing: A Novel Augmentation Method for Enhancing Downstream Breast Density Classification
Panambur, Adarsh Bhandary, Yu, Hui, Bhat, Sheethal, Madhu, Prathmesh, Bayer, Siming, Maier, Andreas
The assessment of breast density is crucial in the context of breast cancer screening, especially in populations with a higher percentage of dense breast tissue. This study introduces a novel data augmentation technique termed Attention-Guided Erasing (AGE), devised to enhance the downstream classification of the four distinct breast density categories in mammography, following the BI-RADS recommendation, in a Vietnamese cohort. The proposed method integrates supplementary information during transfer learning, utilizing visual attention maps derived from a vision transformer backbone trained using the self-supervised DINO method. These maps are used to erase background regions in the mammogram images, unveiling only the potential areas of dense breast tissue to the network. By incorporating AGE during transfer learning with varying random probabilities, we consistently surpass classification performance compared to scenarios without AGE and with the traditional random erasing transformation. We validate our methodology using the publicly available VinDr-Mammo dataset. Specifically, we attain a mean F1-score of 0.5910, outperforming values of 0.5594 and 0.5691 corresponding to scenarios without AGE and with random erasing (RE), respectively. This superiority is further substantiated by t-tests, revealing p < 0.0001 and underscoring the statistical significance of our approach.
- Asia > Vietnam > Hanoi > Hanoi (0.05)
- North America > United States > Virginia > Fairfax County > Reston (0.04)
- Europe > Germany > Bavaria > Middle Franconia > Nuremberg (0.04)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (0.59)
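The AGE operation described above can be sketched as a per-pixel mask: with some probability, erase everything the attention map scores below a threshold, so only the attended (likely dense-tissue) regions remain visible. The threshold and application probability here are illustrative; the paper derives its maps from a DINO-trained ViT backbone:

```python
# Hedged sketch of Attention-Guided Erasing: zero out pixels whose
# attention score falls below a threshold, applied with probability p.
import random

def attention_guided_erase(image, attention, threshold=0.5, p=1.0, rng=random):
    """image, attention: same-shape 2D lists; attention values in [0, 1]."""
    if rng.random() > p:
        return [row[:] for row in image]  # skip erasing for this sample
    return [
        [px if att >= threshold else 0 for px, att in zip(img_row, att_row)]
        for img_row, att_row in zip(image, attention)
    ]

img = [[10, 20], [30, 40]]
att = [[0.9, 0.1], [0.6, 0.4]]
print(attention_guided_erase(img, att))  # [[10, 0], [30, 0]]
```

Varying `p` across training epochs mirrors the paper's "varying random probabilities" of applying AGE.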
Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos
Seyfioglu, Mehmet Saygin, Ikezogwo, Wisdom O., Ghezloo, Fatemeh, Krishna, Ranjay, Shapiro, Linda
The gigapixel scale of whole slide images (WSIs) poses a challenge for histopathology multi-modal chatbots, requiring a global WSI analysis for diagnosis that compounds evidence from different WSI patches. Current visual instruction datasets, generated through large language models, focus on creating question/answer pairs for individual image patches, which may lack diagnostic capacity on their own in histopathology; this is further complicated by the absence of spatial grounding in histopathology image captions. To bridge this gap, we introduce Quilt-Instruct, a large-scale dataset of 107,131 histopathology-specific instruction question/answer pairs collected by leveraging educational histopathology videos from YouTube, which provides spatial localization of captions by automatically extracting narrators' cursor movements. In addition, we provide contextual reasoning by extracting diagnosis and supporting facts from the entire video content to guide the extrapolative reasoning of GPT-4. Using Quilt-Instruct, we train Quilt-LLaVA, which can reason beyond the given single image patch, enabling diagnostic reasoning and spatial awareness. To evaluate Quilt-LLaVA, we propose a comprehensive evaluation dataset created from 985 images and 1,283 human-generated question-answer pairs. We also thoroughly evaluate Quilt-LLaVA using public histopathology datasets, where Quilt-LLaVA significantly outperforms SOTA by over 10% on relative GPT-4 score and by 4% and 9% on open- and closed-set VQA. Our code, data, and model are publicly available at quilt-llava.github.io.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.94)
- Health & Medicine > Therapeutic Area > Dermatology (0.68)
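The localized-narrative idea above hinges on converting a narrator's cursor trace into a spatial region that grounds the caption. A minimal sketch of one plausible summarization (the bounding-box heuristic is ours, not necessarily Quilt-Instruct's exact extraction):

```python
# Summarize a cursor trace, a list of (x, y) points recorded while the
# narrator speaks, as the bounding box enclosing all visited points.

def trace_to_bbox(trace):
    """Return (x_min, y_min, x_max, y_max) for a non-empty point list."""
    xs = [p[0] for p in trace]
    ys = [p[1] for p in trace]
    return (min(xs), min(ys), max(xs), max(ys))

print(trace_to_bbox([(10, 40), (25, 35), (18, 60)]))  # (10, 35, 25, 60)
```

The resulting box pairs each spoken caption with the patch region the narrator was indicating.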
Wearable device with AI could allow for at-home breast cancer screenings: 'Accessible and personalized'
To provide women at a high risk of breast cancer with more frequent screenings between mammograms, researchers at the Massachusetts Institute of Technology (MIT) are developing a wearable ultrasound scanner designed to be attached to a bra. The goal is to help women detect breast cancer tumors in the early stages and maximize the survival rate, according to a press release on MIT's website. The researchers' aim was to design a wearable "miniaturized ultrasound device" that allows for "consistent placement and orientation" to take images of breast tissue, according to lead study author Canan Dagdeviren, PhD, associate professor at MIT. The device attaches to the bra like a patch, with a "honeycomb" pattern that has open spaces for the tracker to move through for an optimal field of view, Dagdeviren told Fox News Digital. "The ultrasound generates a wave that penetrates the targeted breast tissue," she said.
- North America > United States > Massachusetts (0.25)
- North America > United States > Texas > Dallas County > Dallas (0.05)
- Health & Medicine > Therapeutic Area > Obstetrics/Gynecology (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (0.90)
- Government > Regional Government > North America Government > United States Government > FDA (0.33)
Investigating Pulse-Echo Sound Speed Estimation in Breast Ultrasound with Deep Learning
Simson, Walter A., Paschali, Magdalini, Sideri-Lampretsa, Vasiliki, Navab, Nassir, Dahl, Jeremy J.
Ultrasound is an adjunct tool to mammography that can quickly and safely aid physicians in diagnosing breast abnormalities. Clinical ultrasound often assumes a constant sound speed to form B-mode images for diagnosis. However, the various types of breast tissue, such as glandular tissue, fat, and lesions, differ in sound speed. These differences can degrade the image reconstruction process. Alternatively, sound speed can be a powerful tool for identifying disease. To this end, we propose a deep-learning approach for sound speed estimation from in-phase and quadrature ultrasound signals. First, we develop a large-scale simulated ultrasound dataset that generates quasi-realistic breast tissue by modeling breast gland, skin, and lesions with varying echogenicity and sound speed. We developed a fully convolutional neural network, trained on the simulated dataset, that produces an estimated sound speed map from three complex-valued in-phase and quadrature ultrasound images formed from plane-wave transmissions at separate angles. Furthermore, thermal noise augmentation is used during model optimization to enhance generalizability to real ultrasound data. We evaluate the model on simulated, phantom, and in-vivo breast ultrasound data, demonstrating its ability to accurately estimate sound speeds consistent with previously reported values in the literature. Our simulated dataset and model will be publicly available to provide a step towards accurate and generalizable sound speed estimation for pulse-echo ultrasound imaging.
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > Virginia > Norfolk City County > Norfolk (0.04)
- (3 more...)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Obstetrics/Gynecology (0.87)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (0.34)
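The thermal noise augmentation mentioned above follows a general recipe: add independent Gaussian noise to the in-phase and quadrature channels of the simulated data so the model generalizes to noisier real acquisitions. A sketch of that recipe with assumed parameters (the noise level and representation are ours, not the paper's exact settings):

```python
# Illustrative thermal-noise augmentation for IQ ultrasound data: add
# independent zero-mean Gaussian noise to the real (I) and imaginary (Q)
# components of each complex sample.
import random

def add_thermal_noise(iq, sigma, rng):
    """iq: list of complex IQ samples; returns a noisy copy."""
    return [s + complex(rng.gauss(0, sigma), rng.gauss(0, sigma)) for s in iq]

rng = random.Random(0)
clean = [1 + 1j, 0.5 - 0.25j]
noisy = add_thermal_noise(clean, sigma=0.05, rng=rng)
print(len(noisy))  # 2
```

During training, `sigma` would be drawn per batch so the network sees a range of noise floors.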
La veille de la cybersécurité
October 07, 2021 – With artificial intelligence technology, medical professionals can quickly and accurately sort through breast MRIs in patients with dense breast tissue to eliminate those without cancer. Mammography has assisted in reducing breast cancer-related deaths by providing early detection when cancer is still treatable. However, it is less sensitive in women with extremely dense breast tissue than in those with fatty breast tissue. Additionally, women with extremely dense breasts are three to six times more likely to develop breast cancer than women with almost entirely fatty breasts, and two times more likely than the average woman.
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (1.00)
- Health & Medicine > Therapeutic Area > Obstetrics/Gynecology (1.00)