ultrasound
This Chinese Startup Wants to Build a New Brain-Computer Interface--No Implant Required
Gestala is the latest company to emerge from China's burgeoning brain-computer interface industry. It plans to access the brain with noninvasive ultrasound technology. China's brain-computer interface industry is growing fast, and the newest company to emerge from the country is aiming to access the brain without the use of invasive implants. Gestala, newly founded in Chengdu with offices in Shanghai and Hong Kong, plans to use ultrasound technology to stimulate--and eventually read from--the brain, according to CEO and cofounder Phoenix Peng. It's the second company to launch in recent weeks with the aim of tapping into the brain with ultrasound.
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Neuroscience (0.94)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.73)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.73)
Sam Altman's New Brain Venture, Merge Labs, Will Spin Out of a Nonprofit
Merge Labs, a brain-computer interface startup that seeks to read brain activity using ultrasound, is being spun out of Forest Neurotech, a Los Angeles nonprofit. OpenAI CEO Sam Altman's new brain-computer interface startup, Merge Labs, is being spun out of the Los Angeles-based nonprofit Forest Neurotech, according to a source with direct knowledge of the plans. It will focus on using ultrasound to read brain activity. Along with Altman, WIRED has learned, Forest Neurotech's CEO Sumner Norman and chief scientific officer Tyson Aflalo are among the cofounders of Merge Labs, which is still in stealth mode.
- North America > United States > California > Los Angeles County > Los Angeles (0.45)
- North America > United States > District of Columbia > Washington (0.25)
- Europe > United Kingdom (0.05)
- (3 more...)
- Leisure & Entertainment (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
LAYER: A Quantitative Explainable AI Framework for Decoding Tissue-Layer Drivers of Myofascial Low Back Pain
Zeng, Zixue, Perti, Anthony M., Yu, Tong, Kokenberger, Grant, Lu, Hao-En, Wang, Jing, Meng, Xin, Sheng, Zhiyu, Satarpour, Maryam, Cormack, John M., Bean, Allison C., Nussbaum, Ryan P., Landis-Walkenhorst, Emily, Kim, Kang, Wasan, Ajay D., Pu, Jiantao
Myofascial pain (MP) is a leading cause of chronic low back pain, yet its tissue-level drivers remain poorly defined and lack reliable imaging biomarkers. Existing studies focus predominantly on muscle while neglecting fascia, fat, and other soft tissues that play integral biomechanical roles. We developed an anatomically grounded explainable artificial intelligence (AI) framework, LAYER (Layer-wise Analysis for Yielding Explainable Relevance Tissue), that analyses six tissue layers in three-dimensional (3D) ultrasound and quantifies their contribution to MP prediction. By utilizing the largest multi-modal 3D ultrasound cohort, consisting of over 4,000 scans, LAYER reveals that non-muscle tissues contribute substantially to pain prediction. In B-mode imaging, the deep fascial membrane (DFM) showed the highest saliency (0.420), while in combined B-mode and shear-wave images, the collective saliency of non-muscle layers (0.316) nearly matches that of muscle (0.317), challenging the conventional muscle-centric paradigm in MP research and potentially informing therapeutic approaches. LAYER establishes a quantitative, interpretable framework for linking layer-specific anatomy to pain physiology, uncovering new tissue targets and providing a generalizable approach for explainable analysis of soft-tissue imaging.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.05)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.94)
- Health & Medicine > Therapeutic Area > Musculoskeletal (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (0.93)
- Health & Medicine > Therapeutic Area > Neurology (0.71)
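The per-layer saliency scores reported in the LAYER abstract above come from pooling an attribution map over anatomical layer masks. The sketch below is a minimal illustration of that kind of aggregation, assuming a voxel-wise saliency volume and an integer layer segmentation; the function name and toy data are hypothetical, not from the paper.

```python
import numpy as np

def layer_saliency(saliency, layer_mask, n_layers):
    """Mean absolute saliency per tissue layer, normalized to sum to 1.

    saliency   : 3D array of per-voxel attribution values
    layer_mask : 3D integer array mapping each voxel to a layer in [0, n_layers)
    """
    scores = np.array([
        np.abs(saliency[layer_mask == k]).mean() if np.any(layer_mask == k) else 0.0
        for k in range(n_layers)
    ])
    return scores / scores.sum()

# Toy volume with two "layers"; the deeper one carries twice the attribution
sal = np.zeros((4, 4, 4))
mask = np.zeros((4, 4, 4), dtype=int)
mask[2:] = 1
sal[mask == 0] = 1.0
sal[mask == 1] = 2.0
shares = layer_saliency(sal, mask, 2)  # -> [1/3, 2/3]
```

Normalizing the shares is what makes statements like "non-muscle layers (0.316) nearly match muscle (0.317)" directly comparable across imaging modes.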
Three-Dimensional Anatomical Data Generation Based on Artificial Neural Networks
Müller, Ann-Sophia, Jeong, Moonkwang, Zhang, Meng, Tian, Jiyuan, Miernik, Arkadiusz, Speidel, Stefanie, Qiu, Tian
Surgical planning and training based on machine learning requires a large number of 3D anatomical models reconstructed from medical imaging, which is currently one of the major bottlenecks. Obtaining these data from real patients and during surgery is very demanding, if possible at all, due to legal, ethical, and technical challenges. It is especially difficult for soft tissue organs with poor imaging contrast, such as the prostate. To overcome these challenges, we present a novel workflow for automated 3D anatomical data generation using data obtained from physical organ models. We additionally use a 3D Generative Adversarial Network (GAN) to obtain a manifold of 3D models useful for other downstream machine learning tasks that rely on 3D data. We demonstrate our workflow using an artificial prostate model made of biomimetic hydrogels with imaging contrast in multiple zones. This is used to physically simulate endoscopic surgery. For evaluation and 3D data generation, we place it into a customized ultrasound scanner that records the prostate before and after the procedure. A neural network is trained to segment the recorded ultrasound images, which outperforms conventional, non-learning-based computer vision techniques in terms of intersection over union (IoU). Based on the segmentations, a 3D mesh model is reconstructed, and performance feedback is provided.
- North America > United States (0.05)
- Europe > Germany > Saxony > Dresden (0.05)
- Europe > Germany > Baden-Württemberg > Freiburg (0.04)
- (2 more...)
- Workflow (0.74)
- Research Report > New Finding (0.46)
- Health & Medicine > Therapeutic Area (0.95)
- Health & Medicine > Surgery (0.68)
- Health & Medicine > Diagnostic Medicine > Imaging (0.34)
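The segmentation comparison in the workflow above is reported in terms of intersection over union (IoU). A minimal sketch of that metric for binary masks:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

# Toy example: prediction covers 2 of 3 target foreground pixels
pred = np.array([[1, 1, 0], [0, 0, 0]])
target = np.array([[1, 1, 1], [0, 0, 0]])
score = iou(pred, target)  # 2 / 3
```

IoU penalizes both missed and spurious foreground equally, which is why it is the standard headline number for segmentation quality.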
UltraDP: Generalizable Carotid Ultrasound Scanning with Force-Aware Diffusion Policy
Chen, Ruoqu, Yan, Xiangjie, Lv, Kangchen, Huang, Gao, Li, Zheng, Li, Xiang
Ultrasound scanning is a critical imaging technique for real-time, non-invasive diagnostics. However, variations in patient anatomy and complex human-in-the-loop interactions pose significant challenges for autonomous robotic scanning. Existing ultrasound scanning robots commonly suffer from limited generalization and inefficient data utilization. To overcome these limitations, we present UltraDP, a Diffusion-Policy-based method that receives multi-sensory inputs (ultrasound images, wrist camera images, contact wrench, and probe pose) and generates actions suited to the multi-modal action distributions of autonomous ultrasound scanning of the carotid artery. We propose a specialized guidance module to enable the policy to output actions that center the artery in ultrasound images. To ensure stable contact and safe interaction between the robot and the human subject, a hybrid force-impedance controller drives the robot to track the generated trajectories. We have also built a large-scale training dataset for carotid scanning comprising 210 scans with 460k sample pairs from 21 volunteers of both genders. By combining our guidance module with the diffusion policy's strong generalization ability, UltraDP achieves a 95% success rate in transverse scanning on previously unseen subjects, demonstrating its effectiveness.
- Europe > Switzerland (0.04)
- Asia > China > Hong Kong (0.04)
- Health & Medicine > Diagnostic Medicine > Imaging (0.68)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.50)
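The hybrid force-impedance control mentioned in the UltraDP abstract can be sketched in a simplified 1D form: tangential axes track position through a virtual spring-damper, while the contact-normal axis regulates probe-skin force. The gains, function names, and numbers below are illustrative assumptions, not UltraDP's actual controller.

```python
def impedance_force(x_des, x, v_des, v, k=300.0, d=30.0):
    """Virtual spring-damper pulling the probe toward a desired pose.
    k (stiffness) and d (damping) are illustrative gains, not from the paper."""
    return k * (x_des - x) + d * (v_des - v)

def hybrid_command(tangent_err, tangent_vel, f_measured, f_desired, kp_f=0.5):
    """Tangential axis: impedance position tracking.
    Contact-normal axis: proportional regulation toward the desired force."""
    f_tangent = impedance_force(tangent_err, 0.0, 0.0, tangent_vel)
    f_normal = f_measured + kp_f * (f_desired - f_measured)
    return f_tangent, f_normal

f_t, f_n = hybrid_command(tangent_err=0.01, tangent_vel=0.0,
                          f_measured=4.0, f_desired=5.0)
# pulls ~3 N along the tangent; normal force steps from 4.0 toward 5.0 N
```

Separating the axes this way lets the robot hold a steady contact force on a compliant, moving surface while still following the policy's scan trajectory.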
US-X Complete: A Multi-Modal Approach to Anatomical 3D Shape Recovery
Gafencu, Miruna-Alexandra, Velikova, Yordanka, Navab, Nassir, Azampour, Mohammad Farid
Ultrasound offers a radiation-free, cost-effective solution for real-time visualization of spinal landmarks, paraspinal soft tissues and neurovascular structures, making it valuable for intraoperative guidance during spinal procedures. However, ultrasound suffers from inherent limitations in visualizing complete vertebral anatomy, in particular vertebral bodies, due to acoustic shadowing effects caused by bone. In this work, we present a novel multi-modal deep learning method for completing occluded anatomical structures in 3D ultrasound by leveraging complementary information from a single X-ray image. To enable training, we generate paired training data consisting of: (1) 2D lateral vertebral views that simulate X-ray scans, and (2) 3D partial vertebrae representations that mimic the limited visibility and occlusions encountered during ultrasound spine imaging. Our method integrates morphological information from both imaging modalities and demonstrates significant improvements in vertebral reconstruction (p < 0.001) compared to the state of the art in 3D ultrasound vertebral completion. We perform phantom studies as an initial step toward future clinical translation, and achieve a more accurate, complete volumetric lumbar spine visualization overlaid on the ultrasound scan without the need for registration with preoperative modalities such as computed tomography. This demonstrates that integrating a single X-ray projection mitigates ultrasound's key limitation while preserving its strengths as the primary imaging modality.
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.05)
- Europe > Austria > Salzburg > Salzburg (0.04)
- Asia > Japan > Kyūshū & Okinawa > Kyūshū > Fukuoka Prefecture > Fukuoka (0.04)
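The "partial vertebrae representations that mimic the limited visibility and occlusions" above amount to simulating acoustic shadowing: everything deeper than the first bone voxel along each beam line is invisible. A minimal sketch of that masking step, under the simplifying assumption that axis 0 of the volume is depth (the function and toy data are illustrative, not the paper's pipeline):

```python
import numpy as np

def apply_acoustic_shadow(volume, bone_mask):
    """Zero out voxels deeper than the first bone voxel along each A-line
    (axis 0 = depth), mimicking the shadow cast by vertebral bone."""
    shadowed = volume.copy()
    # True once a bone voxel has been passed along the depth axis
    behind_bone = np.cumsum(bone_mask, axis=0) > 0
    # shift by one so the bone surface itself stays visible
    shadow = np.zeros_like(behind_bone)
    shadow[1:] = behind_bone[:-1]
    shadowed[shadow] = 0.0
    return shadowed

vol = np.ones((5, 3, 3))
bone = np.zeros((5, 3, 3), dtype=bool)
bone[2, :, :] = True        # flat bone surface at depth index 2
out = apply_acoustic_shadow(vol, bone)
# depths 0-2 remain visible; depths 3-4 are shadowed to zero
```

Pairing such occluded volumes with the complete meshes they came from is what gives the network supervised examples of "fill in what the shadow hides."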
Robotic versus Human Teleoperation for Remote Ultrasound
Black, David, Salcudean, Septimiu
Diagnostic medical ultrasound is widely used, safe, and relatively low cost but requires a high degree of expertise to acquire and interpret the images. Personnel with this expertise are often not available outside of larger cities, leading to difficult, costly travel and long wait times for rural populations. To address this issue, tele-ultrasound techniques are being developed, including robotic teleoperation and, recently, human teleoperation, in which a novice user is remotely guided in a hand-over-hand manner through mixed reality to perform an ultrasound exam. These methods have not been compared, and their relative strengths are unknown. Human teleoperation may be more practical than robotics for small communities due to its lower cost and complexity, but this is only relevant if the performance is comparable. This paper therefore evaluates the differences between human and robotic teleoperation, examining practical aspects such as setup time and flexibility and experimentally comparing performance metrics such as completion time, position tracking, and force consistency. It is found that human teleoperation does not lead to statistically significant differences in completion time or position accuracy, with mean differences of 1.8% and 0.5%, respectively, and provides more consistent force application despite being substantially more practical and accessible. Remote and under-resourced communities have far worse access to healthcare than larger cities [1], [2]. Ultrasound has become one of the most prevalent diagnostic imaging modalities due to its relatively low cost, non-invasive nature, and lack of radiation [3], but many communities have very limited access to qualified sonographers.
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.40)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- North America > United States > South Carolina > York County > Rock Hill (0.04)
- (3 more...)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.68)
Generative deep learning for foundational video translation in ultrasound
Bhatnagar, Nikolina Tomic Roshni, Jain, Sarthak, Lau, Connor, Liu, Tien-Yu, Gambini, Laura, Arnaout, Rima
Department of Medicine, Division of Cardiology; Bakar Computational Health Sciences Institute; UCSF-UC Berkeley Joint Program in Computational Precision Health; Department of Radiology, Center for Intelligent Imaging, University of California, San Francisco. Keywords: medical imaging, video translation, deep learning, image synthesis, ultrasound. Deep learning (DL) has the potential to revolutionize image acquisition and interpretation across medicine; however, attention to data imbalance and missingness is required. Ultrasound data presents a particular challenge because, in addition to different views and structures, it includes several sub-modalities -- such as greyscale and color flow doppler (CFD) -- that are often imbalanced in clinical studies. Image translation can help balance datasets but has to date been challenging for ultrasound sub-modalities. Here, we present a generative method for ultrasound CFD-greyscale video translation, trained on 54,975 videos and tested on 8,368. The method leveraged pixel-wise, adversarial, and perceptual losses and utilized two networks: one for reconstructing anatomic structures and one for denoising to achieve realistic ultrasound imaging. Average pairwise SSIM between synthetic videos and ground truth was 0.91 ± 0.04. Synthetic videos performed indistinguishably from real ones in DL classification and segmentation tasks and when evaluated by blinded clinical experts: F1 score was 0.9 for real and 0.89 for synthetic videos; Dice score between real and synthetic segmentation was 0.97. Overall clinician accuracy in distinguishing real vs. synthetic videos was 54 ± 6% (42-61%), indicating realistic synthetic videos. Although trained only on heart videos, the model worked well on ultrasound spanning several clinical domains (average SSIM 0.91 ± 0.05), demonstrating foundational abilities.
- North America > United States > California > San Francisco County > San Francisco (0.68)
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > Macao (0.04)
- Asia > China (0.04)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
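The SSIM scores quoted above measure structural similarity between synthetic and ground-truth frames. Production implementations (e.g. scikit-image) average SSIM over local windows; the sketch below computes the simpler single-window (global) variant, which is enough to show the formula's moving parts.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images. The standard algorithm
    averages this quantity over small local windows instead."""
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
identical = global_ssim(frame, frame)   # 1.0 for a perfect match
degraded = global_ssim(frame, np.clip(frame + 0.1 * rng.standard_normal((64, 64)), 0, 1))
```

A score of 0.91 ± 0.04, as reported, therefore means the synthetic frames preserve nearly all of the luminance, contrast, and structure of the real ones.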
Evaluating Generative AI as an Educational Tool for Radiology Resident Report Drafting
Verdone, Antonio, Cardall, Aidan, Siddiqui, Fardeen, Nashawaty, Motaz, Rigau, Danielle, Kwon, Youngjoon, Yousef, Mira, Patel, Shalin, Kieturakis, Alex, Kim, Eric, Heacock, Laura, Reig, Beatriu, Shen, Yiqiu
Objective: Radiology residents require timely, personalized feedback to develop accurate image analysis and reporting skills. Increasing clinical workload often limits attendings' ability to provide guidance. This study evaluates a HIPAA-compliant GPT-4o system that delivers automated feedback on breast imaging reports drafted by residents in real clinical settings. Methods: We analyzed 5,000 resident-attending report pairs from routine practice at a multi-site U.S. health system. GPT-4o was prompted with clinical instructions to identify common errors and provide feedback. A reader study using 100 report pairs was conducted. Four attending radiologists and four residents independently reviewed each pair, determined whether predefined error types were present, and rated GPT-4o's feedback as helpful or not. Agreement between GPT and readers was assessed using percent match. Inter-reader reliability was measured with Krippendorff's alpha. Educational value was measured as the proportion of cases rated helpful. Results: Three common error types were identified: (1) omission or addition of key findings, (2) incorrect use or omission of technical descriptors, and (3) final assessment inconsistent with findings. GPT-4o showed strong agreement with attending consensus: 90.5%, 78.3%, and 90.4% across error types. Inter-reader reliability showed moderate variability (α = 0.767, 0.595, 0.567), and replacing a human reader with GPT-4o did not significantly affect agreement (Δ = -0.004 to 0.002). GPT's feedback was rated helpful in most cases: 89.8%, 83.0%, and 92.0%. Discussion: ChatGPT-4o can reliably identify key educational errors. It may serve as a scalable tool to support radiology education.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > Ventura County > Thousand Oaks (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.68)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
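The "percent match" agreement statistic used in the reader study above is simply the fraction of cases where two raters assign the same error label. A minimal sketch with hypothetical toy labels (not the study's data):

```python
def percent_match(rater_a, rater_b):
    """Fraction of cases where two raters give the same label."""
    assert len(rater_a) == len(rater_b)
    agree = sum(a == b for a, b in zip(rater_a, rater_b))
    return agree / len(rater_a)

# Toy binary "error present" labels for 10 report pairs
gpt       = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
consensus = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
score = percent_match(gpt, consensus)  # 8/10 = 0.8
```

Unlike Krippendorff's alpha, percent match does not correct for chance agreement, which is why the study reports both.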
Dynamical model parameters from ultrasound tongue kinematics
Kirkham, Sam, Strycharczuk, Patrycja
A common approach is to cast this problem in terms of a dynamical system with point attractor dynamics, where a small number of parameters drive the vocal tract to a stable equilibrium position (Browman and Goldstein, 1986; Fowler, 1980; Gafos, 2006; Saltzman and Munhall, 1989; Tilsen, 2016). A standard model in this framework is the linear harmonic oscillator, mẍ + bẋ + kx = 0 (1), where m is mass (typically m = 1), k is a stiffness coefficient, and b is a damping coefficient, usually set to critically damped b = 2√(mk). Gestural activation can be governed by step activation, with gestural parameters changing instantaneously at the point of activation and remaining constant over the activation interval. In this study we focus on whether the parameters of a linear harmonic oscillator can be estimated from ultrasound tongue imaging data, which we compare with the more common method of fitting to electromagnetic articulography (EMA) data. A major barrier to this goal is that the linear harmonic oscillator is known to be a poor fit to empirical articulatory trajectories, as it predicts overly short time-to-peak velocity, meaning that it is inappropriate for evaluating how the model can fit different data modalities. There are three common solutions to this issue. The first allows gestural activation to vary over time (Byrd and Saltzman, 1998), which adds extrinsic complexity to the model. The second is a nonlinear model, such as adding a cubic term to the linear model (Kirkham, 2025b; Sorensen and Gafos, 2016), or novel nonlinear models (Stern and Shaw, 2025). The third is to abandon oscillatory models and develop new time-dependent (i.e.
- North America > Canada > Quebec > Montreal (0.04)
- Europe > United Kingdom > England > Greater Manchester > Manchester (0.04)
- Research Report > Experimental Study (0.46)
- Research Report > New Finding (0.34)
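The critically damped oscillator in equation (1) can be simulated directly to see the point-attractor behavior the abstract describes: from any initial displacement, the articulator relaxes to equilibrium without overshoot. A minimal sketch using semi-implicit Euler integration (parameter values are illustrative, not fitted to articulatory data):

```python
import math

def simulate_oscillator(x0, m=1.0, k=100.0, dt=1e-3, steps=2000):
    """Integrate m*x'' + b*x' + k*x = 0 with critical damping b = 2*sqrt(m*k),
    starting from rest at displacement x0 (semi-implicit Euler)."""
    b = 2.0 * math.sqrt(m * k)
    x, v = x0, 0.0
    trajectory = [x]
    for _ in range(steps):
        a = -(b * v + k * x) / m   # acceleration from the oscillator equation
        v += a * dt
        x += v * dt
        trajectory.append(x)
    return trajectory

traj = simulate_oscillator(x0=1.0)
# monotonic approach to the attractor at x = 0, with no overshoot
```

The known weakness noted above also shows up here: the closed-form solution x(t) = (1 + ωt)e^(-ωt) peaks in velocity very early, which is the "overly short time-to-peak velocity" that motivates the nonlinear alternatives.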