vein
Closing the Performance Gap Between AI and Radiologists in Chest X-Ray Reporting
Sharma, Harshita, Reynolds, Maxwell C., Salvatelli, Valentina, Sykes, Anne-Marie G., Horst, Kelly K., Schwaighofer, Anton, Ilse, Maximilian, Melnichenko, Olesya, Bond-Taylor, Sam, Pérez-García, Fernando, Mugu, Vamshi K., Chan, Alex, Colak, Ceylan, Swartz, Shelby A., Nashawaty, Motassem B., Gonzalez, Austin J., Ouellette, Heather A., Erdal, Selnur B., Schueler, Beth A., Wetscherek, Maria T., Codella, Noel, Jain, Mohit, Bannur, Shruthi, Bouzid, Kenza, Castro, Daniel C., Hyland, Stephanie, Korfiatis, Panos, Khandelwal, Ashish, Alvarez-Valle, Javier
AI-assisted report generation offers an opportunity to reduce radiologists' workload stemming from expanded screening guidelines, complex cases, and workforce shortages, while maintaining diagnostic accuracy. Beyond describing pathological findings in chest X-ray reports, interpreting lines and tubes (L&T) is demanding and repetitive for radiologists, especially at high patient volumes. We introduce MAIRA-X, a clinically evaluated multimodal AI model for longitudinal chest X-ray (CXR) report generation that encompasses both clinical findings and L&T reporting. Developed using a large-scale, multi-site, longitudinal dataset of 3.1 million studies (comprising 6 million images from 806k patients) from Mayo Clinic, MAIRA-X was evaluated on three holdout datasets and the public MIMIC-CXR dataset, where it significantly improved on state-of-the-art AI-generated reports in lexical quality, clinical correctness, and L&T-related elements. A novel L&T-specific metrics framework was developed to assess accuracy in reporting attributes such as type, longitudinal change, and placement. A first-of-its-kind retrospective user evaluation study was conducted with nine radiologists of varying experience, who blindly reviewed 600 studies from distinct subjects. The user study found comparable rates of critical errors (3.0% for original vs. 4.6% for AI-generated reports) and similar rates of acceptable sentences (97.8% for original vs. 97.4% for AI-generated reports), a marked improvement over prior user studies, which reported larger gaps and higher error rates. Our results suggest that MAIRA-X can effectively assist radiologists, particularly in high-volume clinical settings.
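The abstract's L&T metrics framework scores reported devices on attributes such as type, longitudinal change, and placement. As a rough illustration of what attribute-level scoring could look like, here is a minimal Python sketch; the `LTFinding` fields, the match-by-device-type rule, and the recall-style type score are our own assumptions, not the paper's actual framework.

```python
# Hypothetical sketch of an attribute-level lines-and-tubes (L&T) metric.
# The dataclass fields and the exact matching rule are assumptions for
# illustration; MAIRA-X's actual framework may differ.
from dataclasses import dataclass

@dataclass(frozen=True)
class LTFinding:
    device_type: str   # e.g. "endotracheal tube"
    placement: str     # e.g. "appropriately positioned"
    change: str        # e.g. "unchanged", "new", "removed"

def lt_attribute_accuracy(reference: list[LTFinding],
                          candidate: list[LTFinding]) -> dict[str, float]:
    """Per-attribute accuracy over L&T findings matched by device type."""
    ref_by_type = {f.device_type: f for f in reference}
    matched = [(ref_by_type[c.device_type], c)
               for c in candidate if c.device_type in ref_by_type]
    n = len(reference)
    if n == 0:
        return {"type": 1.0, "placement": 1.0, "change": 1.0}
    m = max(len(matched), 1)
    return {
        "type": len(matched) / n,  # recall of device mentions
        "placement": sum(r.placement == c.placement for r, c in matched) / m,
        "change": sum(r.change == c.change for r, c in matched) / m,
    }
```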
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > North Dakota > Burke County (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.94)
- Information Technology > Sensing and Signal Processing > Image Processing (0.92)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.67)
VoxTell: Free-Text Promptable Universal 3D Medical Image Segmentation
Rokuss, Maximilian, Langenberg, Moritz, Kirchhoff, Yannick, Isensee, Fabian, Hamm, Benjamin, Ulrich, Constantin, Regnery, Sebastian, Bauer, Lukas, Katsigiannopulos, Efthimios, Norajitra, Tobias, Maier-Hein, Klaus
We introduce VoxTell, a vision-language model for text-prompted volumetric medical image segmentation. It maps free-form descriptions, from single words to full clinical sentences, to 3D masks. Trained on 62K+ CT, MRI, and PET volumes spanning over 1K anatomical and pathological classes, VoxTell uses multi-stage vision-language fusion across decoder layers to align textual and visual features at multiple scales. It achieves state-of-the-art zero-shot performance across modalities on unseen datasets, excelling on familiar concepts while generalizing to related unseen classes. Extensive experiments further demonstrate strong cross-modality transfer, robustness to linguistic variations and clinical language, as well as accurate instance-specific segmentation from real-world text. Code is available at: https://www.github.com/MIC-DKFZ/VoxTell
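The abstract describes multi-stage vision-language fusion across decoder layers. The PyTorch sketch below shows one plausible form of that idea, FiLM-style text conditioning at a single 3D decoder stage; the `FusedDecoderStage` module, layer sizes, and modulation scheme are illustrative assumptions rather than VoxTell's released architecture (see the repository linked above for the real code).

```python
# Minimal PyTorch sketch of text-conditioned fusion at one 3D decoder
# stage; one such stage per decoder level yields multi-scale fusion.
import torch
import torch.nn as nn

class FusedDecoderStage(nn.Module):
    def __init__(self, vis_ch: int, txt_dim: int):
        super().__init__()
        self.up = nn.ConvTranspose3d(vis_ch, vis_ch // 2, 2, stride=2)
        self.film = nn.Linear(txt_dim, 2 * (vis_ch // 2))  # scale and shift

    def forward(self, x: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                                   # upsample visual features
        scale, shift = self.film(txt).chunk(2, dim=-1)   # text-conditioned params
        scale = scale[:, :, None, None, None]            # broadcast over D, H, W
        shift = shift[:, :, None, None, None]
        return x * (1 + scale) + shift                   # modulate at this scale

# txt would come from a text encoder over the free-form prompt
stage = FusedDecoderStage(vis_ch=64, txt_dim=512)
out = stage(torch.randn(1, 64, 8, 8, 8), torch.randn(1, 512))
```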
- Europe > Switzerland (0.04)
- Asia > Middle East > Jordan (0.04)
- North America > United States > Pennsylvania (0.04)
- (7 more...)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Vision > Image Understanding (0.87)
The Martian permafrost may be hiding veins of habitable liquid water
Mars may have a network of liquid water flowing through its frozen ground. All buried permafrost, on Earth and beyond, is expected to host narrow veins of liquid, and new calculations show that on Mars these veins could be big enough to support living organisms. "For Mars we always live on the edge of maybe habitable, maybe not, so I set out to do this research thinking maybe I can close this loop and say that it's very unlikely to have enough water and have it be arranged so that it's habitable for microbes," says Hanna Sizemore at the Planetary Science Institute in Arizona. She and her colleagues used measurements of the soil composition on Mars to calculate how much of the icy soil could actually be liquid water, and the size of the channels that water would run through. Keeping water liquid on Mars is tricky, because temperatures on the planet can drop as low as -150°C (-238°F).
- North America > United States > Arizona (0.26)
- North America > United States > Texas (0.05)
- North America > United States > Colorado > Boulder County > Boulder (0.05)
MRI-derived quantification of hepatic vessel-to-volume ratios in chronic liver disease using a deep learning approach
Herold, Alexander, Sobotka, Daniel, Beer, Lucian, Bastati, Nina, Poetter-Lang, Sarah, Weber, Michael, Reiberger, Thomas, Mandorfer, Mattias, Semmler, Georg, Simbrunner, Benedikt, Wichtmann, Barbara D., Ba-Ssalamah, Sami A., Trauner, Michael, Ba-Ssalamah, Ahmed, Langs, Georg
Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria. Background: We aimed to quantify hepatic vessel volumes across chronic liver disease stages and healthy controls using deep learning-based magnetic resonance imaging (MRI) analysis, and to assess correlations with biomarkers of liver (dys)function and fibrosis/portal hypertension. Methods: We retrospectively assessed healthy controls and non-advanced and advanced chronic liver disease (ACLD) patients using a 3D U-Net model for hepatic vessel segmentation on portal venous phase gadoxetic acid-enhanced 3-T MRI. Total (TVVR), hepatic (HVVR), and intrahepatic portal vein-to-volume ratios (PVVR) were compared between groups and correlated with markers of liver function (albumin-bilirubin [ALBI] and model for end-stage liver disease-sodium [MELD-Na] scores) and of fibrosis/portal hypertension (Fibrosis-4 [FIB-4] score, liver stiffness measurement [LSM], hepatic venous pressure gradient [HVPG], platelet count [PLT], and spleen volume). Results: We included 197 subjects, aged 54.9 ± 13.8 years (mean ± standard deviation), 111 (56.3%) male. TVVR and HVVR were highest in controls (3.9; 2.1), intermediate in non-ACLD (2.8; 1.7), and lowest in ACLD patients (2.3; 1.0) (p < 0.001). PVVR was reduced in both non-ACLD and ACLD patients (both 1.2) compared to controls (1.7) (p < 0.001), but showed no difference between CLD groups (p = 0.999). TVVR and PVVR showed similar but weaker correlations. Conclusions: Deep learning-based hepatic vessel volumetry demonstrated differences between healthy liver and chronic liver disease stages and correlates with established markers of disease severity. Relevance statement: Hepatic vessel volumetry demonstrates differences between healthy liver and chronic liver disease stages, potentially serving as a non-invasive imaging biomarker.
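The reported ratios (TVVR, HVVR, PVVR) are, as described, vessel volumes relative to liver volume derived from segmentation masks. A minimal sketch of how such a ratio could be computed from binary 3D masks follows; the function name, percent-style scaling, and mask conventions are assumptions for illustration only.

```python
# Sketch of a vessel-to-volume ratio from 3D segmentation masks. The study
# derives its masks from a 3D U-Net on portal venous phase MRI; this helper
# and its conventions are our own illustrative assumptions.
import numpy as np

def vessel_to_volume_ratio(vessel_mask: np.ndarray,
                           liver_mask: np.ndarray,
                           voxel_volume_mm3: float) -> float:
    """Vessel volume as a percentage of total liver volume."""
    vessel_mm3 = vessel_mask.astype(bool).sum() * voxel_volume_mm3
    liver_mm3 = liver_mask.astype(bool).sum() * voxel_volume_mm3
    return 100.0 * vessel_mm3 / liver_mm3

# TVVR would use all hepatic vessels; HVVR and PVVR would restrict the
# vessel mask to the hepatic veins or intrahepatic portal veins respectively.
```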
- Europe > Austria > Vienna (0.55)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Switzerland > Basel-City > Basel (0.04)
- (6 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Nephrology (1.00)
- Health & Medicine > Therapeutic Area > Hepatology (1.00)
- Health & Medicine > Therapeutic Area > Gastroenterology (1.00)
- (2 more...)
Artery-Vein Segmentation from Fundus Images using Deep Learning
SK, Sharan, Sahayam, Subin, Jayaraman, Umarani, A, Lakshmi Priya
Segmenting clinically important retinal blood vessels into arteries and veins is a prerequisite for retinal vessel analysis. Such analysis can provide insights and biomarkers for identifying and diagnosing various retinal eye diseases. Alterations in the regularity and width of the retinal blood vessels can act as an indicator of the health of the vasculature throughout the body, and can help identify patients at high risk of developing vascular diseases such as stroke and myocardial infarction. Over the years, various deep learning architectures have been proposed for retinal vessel segmentation, and attention mechanisms have recently seen increasing use in image segmentation tasks. This work proposes a new deep learning approach for artery-vein segmentation: an attention mechanism incorporated into the WNet model, which we call Attention-WNet. The proposed approach has been tested on the publicly available HRF and DRIVE datasets, where it outperformed other state-of-the-art models in the literature.
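The abstract does not spell out the form of the attention mechanism added to WNet. For readers unfamiliar with attention in segmentation networks, here is a minimal PyTorch sketch of an additive attention gate of the kind commonly applied to skip connections; the module structure and channel sizes are generic assumptions, not the Attention-WNet design.

```python
# Generic additive attention gate on a skip connection; illustrative only,
# not the paper's architecture.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, 1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        att = torch.relu(self.w_skip(skip) + self.w_gate(gate))
        att = torch.sigmoid(self.psi(att))   # per-pixel weights in [0, 1]
        return skip * att                    # suppress irrelevant regions

gate = AttentionGate(skip_ch=64, gate_ch=64, inter_ch=32)
out = gate(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
```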
- Asia > India > Tamil Nadu > Chennai (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
A Deep Learning-Driven Autonomous System for Retinal Vein Cannulation: Validation Using a Chicken Embryo Model
Wang, Yi, Zhang, Peiyao, Esfandiari, Mojtaba, Gehlbach, Peter, Iordachita, Iulian I.
Retinal vein cannulation (RVC) is a minimally invasive microsurgical procedure for treating retinal vein occlusion (RVO), a leading cause of vision impairment. However, the small size and fragility of retinal veins, coupled with the need for high-precision, tremor-free needle manipulation, create significant technical challenges. These limitations highlight the need for robotic assistance to improve accuracy and stability. This study presents an automated robotic system with a top-down microscope and B-scan optical coherence tomography (OCT) imaging for precise depth sensing. Deep learning-based models enable real-time needle navigation, contact detection, and vein puncture recognition, using a chicken embryo model as a surrogate for human retinal veins. The experiments demonstrate notable reductions in navigation and puncture times compared to manual methods. Our results demonstrate the potential of integrating advanced imaging and deep learning to automate microsurgical tasks, providing a pathway for safer and more reliable RVC procedures with enhanced precision and reproducibility.

I. INTRODUCTION. Retinal vein occlusion (RVO) occurs when a thrombus blocks a retinal vein, leading to transient or permanent vision loss [1]. Current treatments focus on managing complications, but no standardized surgical approach exists for thrombus removal. A 2015 meta-analysis identified RVO as the second most prevalent retinal vascular disease globally, affecting 28.06 million people aged 30-89, including 23.38 million with branch RVO (BRVO) and 4.67 million with central RVO (CRVO) [2]. Retinal vein cannulation (RVC) involves inserting a micro-needle into the occluded retinal vein, followed by injecting a thrombolytic agent to dissolve the clot [3].
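The abstract mentions deep models for contact detection and vein puncture recognition from B-scan OCT. Purely as an illustration of what a frame-level state recognizer could look like, here is a small PyTorch sketch that classifies B-scans into needle states; the architecture, input size, and state labels are assumptions, not the authors' models.

```python
# Illustrative frame-level needle-state classifier for OCT B-scans;
# a stand-in for the paper's deep models, not a reproduction of them.
import torch
import torch.nn as nn

class OCTStateClassifier(nn.Module):
    STATES = ("approaching", "contact", "punctured")  # assumed labels

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, len(self.STATES)),
        )

    def forward(self, bscan: torch.Tensor) -> torch.Tensor:
        return self.net(bscan)  # logits over needle states

model = OCTStateClassifier()
logits = model(torch.randn(1, 1, 256, 256))  # one grayscale B-scan frame
```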
Ultrasound-Guided Robotic Blood Drawing and In Vivo Studies on Submillimetre Vessels of Rats
Jing, Shuaiqi, Yao, Tianliang, Zhang, Ke, Wu, Di, Wang, Qiulin, Chen, Zixi, Chen, Ke, Qi, Peng
Billions of vascular access procedures are performed annually worldwide, serving as a crucial first step in many clinical diagnostic and therapeutic procedures. For pediatric or elderly individuals, whose vessels are small (typically 2 to 3 mm in diameter for adults and less than 1 mm in children), vascular access can be highly challenging. This study presents an image-guided robotic system aimed at improving the accuracy of difficult vascular access procedures. The system integrates a 6-DoF robotic arm with a 3-DoF end-effector, ensuring precise navigation and needle insertion. Multi-modal imaging and sensing technologies endow the medical robot with precision and safety, while ultrasound imaging guidance is specifically evaluated in this study. To evaluate in vivo vascular access in submillimetre vessels, we conducted ultrasound-guided robotic blood drawing on the tail veins (0.7 ± 0.2 mm in diameter) of 40 rats, achieving a first-attempt success rate of 95%. This high first-attempt success rate, even in small blood vessels, demonstrates the system's effectiveness, reducing the risk of failed attempts, minimizing patient discomfort, and enhancing clinical efficiency.
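The reported 95% first-attempt success rate corresponds to 38 of 40 rats. To give a sense of the statistical uncertainty at this sample size, here is a short Python computation of a Wilson score interval; this analysis is our own addition, not part of the study.

```python
# Wilson score interval for 38/40 first-attempt successes (95% confidence).
# Added for context; the paper reports only the point estimate.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

print(wilson_interval(38, 40))  # roughly (0.84, 0.99)
```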
- Research Report > Experimental Study (0.94)
- Research Report > New Finding (0.86)
Deep learning approaches to surgical video segmentation and object detection: A Scoping Review
Kamtam, Devanish N., Shrager, Joseph B., Malla, Satya Deepya, Lin, Nicole, Cardona, Juan J., Kim, Jake J., Hu, Clarence
Introduction: Computer vision (CV) has had a transformative impact in biomedical fields such as radiology, dermatology, and pathology. Its real-world adoption in surgical applications, however, remains limited. We review the current state-of-the-art performance of deep learning (DL)-based CV models for segmentation and object detection of anatomical structures in videos obtained during surgical procedures. Methods: We conducted a scoping review of studies on semantic segmentation and object detection of anatomical structures published between 2014 and 2024 in three major databases: PubMed, Embase, and IEEE Xplore. The primary objective was to evaluate the state-of-the-art performance of semantic segmentation in surgical videos. Secondary objectives included examining DL models, progress toward clinical applications, and the specific challenges of segmenting organs and tissues in surgical videos. Results: We identified 58 relevant published studies. These focused predominantly on procedures from general surgery [20 (34.4%)], colorectal surgery [9 (15.5%)], and neurosurgery [8 (13.8%)]. Cholecystectomy [14 (24.1%)] and low anterior rectal resection [5 (8.6%)] were the most common procedures addressed. Semantic segmentation [47 (81%)] was the primary CV task. U-Net [14 (24.1%)] and DeepLab [13 (22.4%)] were the most widely used models. Larger organs such as the liver (Dice score: 0.88) were segmented more accurately than smaller structures such as nerves (Dice score: 0.49). Models demonstrated real-time inference potential, ranging from 5 to 298 frames per second (fps). Conclusion: This review highlights the significant progress made in DL-based semantic segmentation for surgical videos with real-time applicability, particularly for larger organs. Addressing challenges with smaller structures, data availability, and generalizability remains crucial for future advancements.
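The review compares structures by Dice score (liver 0.88 vs. nerves 0.49). For reference, the Dice coefficient for binary masks A and B is 2|A ∩ B| / (|A| + |B|); a minimal NumPy implementation follows.

```python
# Reference implementation of the Dice coefficient for binary masks,
# included for readers unfamiliar with the metric used in the review.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A, B."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2 * inter / (pred.sum() + target.sum() + eps))
```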
- North America > United States > California > Santa Clara County > Palo Alto (0.14)
- North America > United States > California > Santa Clara County > Stanford (0.14)
- Research Report > Experimental Study (1.00)
- Overview (1.00)
- Research Report > New Finding (0.93)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Gastroenterology (1.00)
- Health & Medicine > Surgery (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (0.88)
'Mainlined into UK's veins': Labour announces huge public rollout of AI
Artificial intelligence will be "mainlined into the veins" of the nation, ministers have announced, with a multibillion-pound investment in the UK's computing capacity despite widespread public fear about the technology's effects. Keir Starmer will launch a sweeping action plan to increase 20-fold the amount of AI computing power under public control by 2030 and deploy AI for everything from spotting potholes to freeing up teachers to teach. Labour's plan to "unleash" AI includes a personal pledge from the prime minister to make Britain "the world leader" in a sector that has been transformed by a series of significant breakthroughs in the last three years. The government plan features a potentially controversial scheme to unlock public data to help fuel the growth of AI businesses. This includes anonymised NHS data, which will be available for "researchers and innovators" to train their AI models.
- Europe > United Kingdom > Wales (0.05)
- Europe > United Kingdom > England > Oxfordshire (0.05)
- Asia > China (0.05)
ReXplain: Translating Radiology into Patient-Friendly Video Reports
Luo, Luyang, Vairavamurthy, Jenanan, Zhang, Xiaoman, Kumar, Abhinav, Ter-Oganesyan, Ramon R., Schroff, Stuart T., Shilo, Dan, Hossain, Rydhwana, Moritz, Mike, Rajpurkar, Pranav
Radiology reports, designed for efficient communication between medical experts, often remain incomprehensible to patients. This inaccessibility can lead to anxiety, decreased engagement in treatment decisions, and poorer health outcomes, undermining patient-centered care. We present ReXplain (Radiology eXplanation), an AI-driven system that translates radiology findings into patient-friendly video reports. ReXplain integrates a large language model for medical text simplification and text-anatomy association, an image segmentation model for anatomical region identification, and an avatar generation tool for engaging interface visualization. It produces comprehensive explanations in plain language, with highlighted imagery and 3D organ renderings, in the form of video reports. To evaluate the utility of ReXplain-generated explanations, we conducted two rounds of user feedback collection with six board-certified radiologists. The results of this proof-of-concept study indicate that ReXplain can accurately deliver radiological information and effectively simulate one-on-one consultation, pointing toward enhanced patient-centered radiology with potential for clinical use. This work demonstrates a new paradigm in AI-assisted medical communication, potentially improving patient engagement and satisfaction in radiology care, and opens new avenues for research in multimodal medical communication.
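The abstract outlines a three-stage pipeline: LLM-based text simplification, segmentation for text-anatomy association, and avatar-based video rendering. The Python sketch below mirrors that structure at the interface level only; every name and signature in it (`llm.simplify`, `segmenter.locate`, `avatar.render`, `VideoReport`) is a hypothetical placeholder, not ReXplain's API.

```python
# Interface-level sketch of a report-to-video pipeline in the spirit of
# the abstract; all component interfaces are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class VideoReport:
    plain_text: str
    highlighted_regions: list
    video_path: str

def generate_video_report(report: str, volume, llm, segmenter, avatar) -> VideoReport:
    simplified = llm.simplify(report)           # plain-language rewrite
    regions = segmenter.locate(report, volume)  # text-anatomy association
    video = avatar.render(simplified, regions)  # narrated video with 3D renderings
    return VideoReport(simplified, regions, video)
```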
- North America > United States > Maryland (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (0.95)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)