radiologist
AI-assisted mammograms cut risk of developing aggressive breast cancer
People who are screened for breast cancer by AI-supported radiologists are less likely to develop aggressive cancers before their next screening round than those who are screened by radiologists alone, raising hopes that AI-assisted screening could save lives. "This is the first randomised controlled trial on the use of AI in mammography screening," says Kristina Lång at Lund University in Sweden.

The AI-supported approach involves using the software - which has been trained on more than 200,000 mammography scans from 10 countries - to rank the likelihood of cancer being present in mammograms on a scale of 1 to 10, based on visual patterns in the scans. Scans receiving a score of 1 to 9 are then assessed by one experienced radiologist, while scans receiving a score of 10 - indicating cancer is most likely to be present - are assessed by two experienced radiologists.

An earlier study found that this approach could detect 29 per cent more cancers than standard screening, in which each mammogram is assessed by two radiologists, without increasing the rate of false detections - where a growth is flagged but follow-up tests reveal it isn't actually there or wouldn't go on to cause problems.
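The triage rule the trial describes reduces to a simple branch on the AI's score; a minimal sketch (function name hypothetical, not from the trial software):

```python
def assign_readers(ai_score: int) -> int:
    """Triage rule described for the trial: the AI ranks each mammogram
    from 1 (least suspicious) to 10 (most suspicious).

    Scores 1-9 -> one experienced radiologist reads the scan.
    Score 10   -> two experienced radiologists read the scan.
    """
    if not 1 <= ai_score <= 10:
        raise ValueError("AI score must be between 1 and 10")
    return 2 if ai_score == 10 else 1
```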
- Europe > Sweden (0.26)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Europe > Netherlands > Gelderland > Nijmegen (0.05)
- Research Report > Strength High (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
AI companies will fail. We can salvage something from the wreckage
Cory Doctorow
AI is asbestos in the walls of our tech society, stuffed there by monopolists run amok.

What I do not do is predict the future. No one can predict the future, which is a good thing, since if the future were predictable, that would mean we couldn't change it. Now, not everyone understands the distinction. They think science-fiction writers are oracles. Even some of my colleagues labor under the delusion that we can "see the future". Then there are science-fiction fans who believe that they are the future. A depressing number of those people appear to have become AI bros.

The fact that these guys can't shut up about the day their spicy autocomplete machine will wake up and turn us all into paperclips has led many confused journalists and conference organizers to try to get me to comment on the future of AI. That's something I used to strenuously resist doing, because I wasted two years of my life explaining patiently and repeatedly why I thought crypto was stupid, and getting relentlessly bollocked by cryptocurrency cultists who at first insisted that I just didn't understand crypto.
- North America > United States (0.68)
- Europe > Ukraine (0.04)
- Oceania > Australia (0.04)
- Media (1.00)
- Law (1.00)
- Information Technology (1.00)
- (3 more...)
Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning
In medical multi-modal frameworks, the alignment of cross-modality features presents a significant challenge. However, existing works learn features that are implicitly aligned from the data, without considering the explicit relationships in the medical context, and this reliance on data alone may limit how well the learned alignment relationships generalize. In this work, we propose the Eye-gaze Guided Multi-modal Alignment (EGMA) framework, which harnesses eye-gaze data for better alignment of medical visual and textual features. We explore the natural auxiliary role of radiologists' eye-gaze data in aligning medical images and text, introducing a novel approach that uses eye-gaze data collected synchronously from radiologists during diagnostic evaluations. We conduct downstream image classification and image-text retrieval tasks on four medical datasets, where EGMA achieves state-of-the-art performance and stronger generalization across different datasets. Additionally, we explore the impact of varying amounts of eye-gaze data on model performance, highlighting the feasibility and utility of integrating this auxiliary data into multi-modal alignment frameworks.
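The core idea - gaze data reweighting the image-text alignment signal - can be illustrated with a toy loss. This is a hedged sketch under simplified assumptions, not the authors' EGMA objective; all names are hypothetical:

```python
import numpy as np

def gaze_weighted_alignment_loss(img_patches, txt_tokens, gaze_weights,
                                 temperature=0.07):
    """Toy gaze-guided alignment loss (illustrative, not the EGMA loss).

    img_patches  : (P, D) L2-normalized patch embeddings
    txt_tokens   : (T, D) L2-normalized token embeddings
    gaze_weights : (P,) per-patch gaze dwell time, summing to 1 --
                   patches the radiologist fixated on contribute more.

    Each patch is soft-matched to its most similar token; the per-patch
    negative log-likelihood is then averaged with gaze weights, so the
    model is pushed hardest to align gaze-attended regions with text.
    """
    sims = img_patches @ txt_tokens.T / temperature        # (P, T)
    sims = sims - sims.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)
    nll = -np.log(probs.max(axis=1) + 1e-9)                # per-patch match loss
    return float((gaze_weights * nll).sum())
```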
Hide-and-Seek Attribution: Weakly Supervised Segmentation of Vertebral Metastases in CT
Atad, Matan, Marka, Alexander W., Steinhelfer, Lisa, Curto-Vilalta, Anna, Leonhardt, Yannik, Foreman, Sarah C., Dietrich, Anna-Sophia Walburga, Graf, Robert, Gersing, Alexandra S., Menze, Bjoern, Rueckert, Daniel, Kirschke, Jan S., Möller, Hendrik
Accurate segmentation of vertebral metastasis in CT is clinically important yet difficult to scale, as voxel-level annotations are scarce and both lytic and blastic lesions often resemble benign degenerative changes. We introduce a weakly supervised method trained solely on vertebra-level healthy/malignant labels, without any lesion masks. The method combines a Diffusion Autoencoder (DAE) that produces a classifier-guided healthy edit of each vertebra with pixel-wise difference maps that propose candidate lesion regions. To determine which regions truly reflect malignancy, we introduce Hide-and-Seek Attribution: each candidate is revealed in turn while all others are hidden, the edited image is projected back to the data manifold by the DAE, and a latent-space classifier quantifies the isolated malignant contribution of that component. High-scoring regions form the final lytic or blastic segmentation. On held-out radiologist annotations, we achieve strong blastic/lytic performance despite no mask supervision (F1: 0.91/0.85; Dice: 0.87/0.78), exceeding baselines (F1: 0.79/0.67; Dice: 0.74/0.55). These results show that vertebra-level labels can be transformed into reliable lesion masks, demonstrating that generative editing combined with selective occlusion supports accurate weakly supervised segmentation in CT.
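The reveal-one-hide-the-rest loop at the heart of the attribution step can be sketched as follows. The real pipeline re-projects each probe image through the DAE back onto the data manifold before classifying; this toy version omits that step, and all names are hypothetical:

```python
import numpy as np

def hide_and_seek_scores(healthy_edit, original, regions, classify_malignant):
    """Sketch of the Hide-and-Seek attribution loop (illustrative only).

    healthy_edit       : 2-D array, classifier-guided "healthy" edit of the vertebra
    original           : 2-D array, the actual image
    regions            : list of boolean masks, candidate lesion components
                         from the pixel-wise difference map
    classify_malignant : callable image -> malignancy score

    Reveals one candidate region at a time on top of the healthy edit,
    hiding all others, and records the classifier's score for that
    isolated component.
    """
    scores = []
    for mask in regions:
        probe = healthy_edit.copy()
        probe[mask] = original[mask]   # reveal only this candidate
        scores.append(float(classify_malignant(probe)))
    return scores

def segment(regions, scores, threshold=0.5):
    """Final mask: union of candidates whose isolated score clears the threshold."""
    keep = [m for m, s in zip(regions, scores) if s >= threshold]
    return np.logical_or.reduce(keep) if keep else None
```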
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- (5 more...)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
Clinical Interpretability of Deep Learning Segmentation Through Shapley-Derived Agreement and Uncertainty Metrics
Ren, Tianyi, Low, Daniel, Jaengprajak, Pittra, Rivera, Juampablo Heras, Ruzevick, Jacob, Kurt, Mehmet
Segmentation, the identification of anatomical regions of interest such as organs, tissues, and lesions, is a fundamental task in computer-aided diagnosis in medical imaging. Although deep learning models have achieved remarkable performance in medical image segmentation, explainability remains critical for their acceptance and integration in clinical practice, despite growing research attention in this area. Our approach explores contrast-level Shapley values, computed by systematically perturbing model inputs to assess feature importance. While other studies have investigated gradient-based techniques that identify influential regions in imaging inputs, Shapley values offer a broader, clinically aligned approach, explaining how model performance is fairly attributed to certain imaging contrasts over others. Using the BraTS 2024 dataset, we generated Shapley-value rankings for four MRI contrasts across four model architectures. Two metrics were proposed from the Shapley ranking: agreement between the model's and the "clinician" imaging ranking, and uncertainty quantified through Shapley ranking variance across cross-validation folds. Higher-performing cases (Dice > 0.6) showed significantly greater agreement with clinical rankings. Increased Shapley ranking variance correlated with decreased performance (U-Net: r = -0.581). These metrics provide clinically interpretable proxies for model reliability, helping clinicians better understand state-of-the-art segmentation models.
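With only four contrasts, Shapley values can be computed exactly by enumerating all 2^4 coalitions rather than sampling. A minimal sketch, assuming a `value` oracle that returns model performance (e.g. Dice) for a given subset of contrasts; this is illustrative, not the authors' code:

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, value):
    """Exact Shapley values by full coalition enumeration.

    players : list of contrast names, e.g. ["T1", "T1ce", "T2", "FLAIR"]
    value   : callable frozenset -> performance of the model when only
              those contrasts are available (hypothetical oracle)

    Feasible here because four contrasts give only 2^4 = 16 subsets.
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coal in combinations(others, k):
                s = frozenset(coal)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi
```

Ranking the contrasts by their Shapley value then gives the per-model ordering that the paper compares against the clinician ranking.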
- North America > United States > Washington > King County > Seattle (0.05)
- Asia > Middle East > Republic of Türkiye > Istanbul Province > Istanbul (0.04)