
Collaborating Authors

Deng, Xutian


Learning Autonomous Ultrasound via Latent Task Representation and Robotic Skills Adaptation

arXiv.org Artificial Intelligence

As medical ultrasound is becoming a prevailing examination approach nowadays, robotic ultrasound systems can facilitate the scanning process and spare professional sonographers repetitive and tedious work. Despite recent progress, enabling robots to accomplish ultrasound examinations autonomously remains a challenge, largely due to the lack of a proper task representation method and of an adaptation approach for generalizing learned skills across different patients. To solve these problems, we propose latent task representation and robotic skills adaptation for autonomous ultrasound in this paper. During the offline stage, the multimodal ultrasound skills are merged and encapsulated into a low-dimensional probability model through a fully self-supervised framework that takes clinically demonstrated ultrasound images, probe orientations, and contact forces into account. During the online stage, the probability model selects and evaluates the optimal prediction; unstable singularities are fine-tuned by an adaptive optimizer toward nearby, stable predictions in high-confidence regions. Experimental results show that the proposed approach can generate complex ultrasound strategies for diverse populations and achieves significantly better quantitative results than our previous method.
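To make the offline stage concrete, below is a minimal sketch of how multimodal demonstrations (ultrasound image, probe orientation, contact force) could be fused and compressed into a low-dimensional probabilistic latent. It assumes a VAE-style diagonal-Gaussian encoder; the class name LatentTaskEncoder, the layer sizes, and the input dimensions are illustrative assumptions, not the paper's exact design.

```python
# Sketch of the offline stage: fuse multimodal demonstration data and
# encode it into a low-dimensional probabilistic latent (VAE-style).
import torch
import torch.nn as nn

class LatentTaskEncoder(nn.Module):
    def __init__(self, latent_dim: int = 8):
        super().__init__()
        # Per-modality encoders; all sizes are placeholders.
        self.image_enc = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pose_enc = nn.Sequential(nn.Linear(4, 32), nn.ReLU())   # quaternion orientation
        self.force_enc = nn.Sequential(nn.Linear(3, 32), nn.ReLU())  # 3-axis contact force
        fused = 32 + 32 + 32
        # Heads of a diagonal-Gaussian latent, as in a standard VAE.
        self.mu = nn.Linear(fused, latent_dim)
        self.log_var = nn.Linear(fused, latent_dim)

    def forward(self, image, pose, force):
        h = torch.cat([self.image_enc(image),
                       self.pose_enc(pose),
                       self.force_enc(force)], dim=-1)
        return self.mu(h), self.log_var(h)

enc = LatentTaskEncoder()
mu, log_var = enc(torch.randn(2, 1, 64, 64),  # batch of ultrasound images
                  torch.randn(2, 4),          # probe orientations
                  torch.randn(2, 3))          # contact forces
```

The log-variance head yields a per-dimension uncertainty, which is the kind of signal an online stage could use to flag low-confidence predictions for adaptation.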


Learning Ultrasound Scanning Skills from Human Demonstrations

arXiv.org Artificial Intelligence

The robotic ultrasound system has recently become an emerging topic owing to the widespread use of medical ultrasound. However, modeling an ultrasound physician's scanning skill and transferring it to a machine remains challenging. In this paper, we propose a learning-based framework to acquire ultrasound scanning skills from human demonstrations. First, the ultrasound scanning skills are encapsulated into a high-dimensional multi-modal model that captures the interactions among ultrasound images, probe pose, and contact force. The parameters of the model are learned from data collected during skilled sonographers' demonstrations. Second, a sampling-based strategy is proposed that uses the learned model to adjust the extracorporeal ultrasound scanning process, guiding a novice sonographer or a robot arm. Finally, the robustness of the proposed framework is validated through experiments on real data from sonographers.
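As a rough illustration of the sampling-based strategy, the sketch below samples candidate probe adjustments around the current state and keeps the one the learned model scores highest. The scoring callable quality_model and the perturbation scales are hypothetical placeholders standing in for the model learned from sonographers' demonstrations.

```python
# Sketch of a sampling-based adjustment: draw candidate probe adjustments,
# score each with the learned model, and execute the best-scoring one.
import numpy as np

def sample_based_adjustment(state, quality_model, n_samples=64,
                            pose_sigma=0.02, force_sigma=0.5):
    """Return the candidate (pose delta, force delta) with the highest
    predicted scanning quality under the learned model."""
    rng = np.random.default_rng()
    best_score, best_action = -np.inf, None
    for _ in range(n_samples):
        d_pose = rng.normal(0.0, pose_sigma, size=6)   # small twist: 3 rot + 3 trans
        d_force = rng.normal(0.0, force_sigma)         # change in normal force (N)
        score = quality_model(state, d_pose, d_force)  # learned from demonstrations
        if score > best_score:
            best_score, best_action = score, (d_pose, d_force)
    return best_action, best_score
```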


Learning Robotic Ultrasound Scanning Skills via Human Demonstrations and Guided Explorations

arXiv.org Artificial Intelligence

As mentioned above, four different sensory modalities (ultrasound images, position of the probe, pose of the probe, and contact force) are closely related to robotic ultrasound scanning skills, and approaches to acquiring ultrasound images autonomously have been proposed in [16], [17], [18], [19], [20], [21]. To the best of our knowledge, this is the first unified framework that learns both the representation of robotic ultrasound scanning skills and the corresponding manipulation skills from human demonstrations, including the modalities of ultrasound image, pose/position of the probe, and contact force.

A. Problem Formulation of Ultrasound Scanning Tasks

The goal of each ultrasound scanning task is to autonomously acquire ultrasound images with the region of interest centered. As shown in Figure 3, with the target of performing the autonomous ultrasound scanning process, the ultrasound scanning skill is represented as a policy function π(s) → a, which denotes the mapping from the current state s to the predicted action a: given the current state s, the policy should yield a befitting action. The modeling and learning of the policy π for ultrasound scanning skills is described in the following section.

B. Ultrasound Skills Modeling and Learning

We propose to use a deep neural network, as shown in the figure, to model the policy.
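A minimal sketch of such a policy network is given below, mapping the four-modality state (ultrasound image, probe position, probe pose, contact force) to a probe action a = π(s). The layer sizes, the quaternion pose input, and the 6-DoF action parameterization are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a deep-network policy π(s) → a over the four sensory modalities.
import torch
import torch.nn as nn

class ScanningPolicy(nn.Module):
    def __init__(self, action_dim: int = 6):
        super().__init__()
        # Image branch: small CNN over the ultrasound frame.
        self.image_enc = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Low-dimensional branch: position (3) + pose quaternion (4) + force (3).
        self.state_enc = nn.Sequential(nn.Linear(10, 64), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(32 + 64, 64), nn.ReLU(),
            nn.Linear(64, action_dim),  # predicted probe motion command
        )

    def forward(self, image, position, pose, force):
        s = torch.cat([position, pose, force], dim=-1)
        h = torch.cat([self.image_enc(image), self.state_enc(s)], dim=-1)
        return self.head(h)  # a = π(s)

policy = ScanningPolicy()
a = policy(torch.randn(1, 1, 64, 64), torch.randn(1, 3),
           torch.randn(1, 4), torch.randn(1, 3))
```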