Stability and Transparency in Mixed Reality Bilateral Human Teleoperation

Black, David Gregory, Salcudean, Septimiu

arXiv.org Artificial Intelligence 

Recent work introduced the concept of human teleoperation (HT), in which the remote robot of conventional bilateral teleoperation is replaced by a novice person who wears a mixed reality head-mounted display and tracks the motion of a virtual tool controlled by an expert. HT has advantages in cost, complexity, and patient acceptance for telemedicine in low-resource communities and remote locations. However, the stability, transparency, and performance of bilateral HT have not been explored. In this paper, we therefore develop a mathematical model and simulation of the HT system using test data. We then analyze various control architectures with this model and implement them on the HT system to determine the achievable performance, investigate stability, and identify the most promising teleoperation scheme in the presence of time delays. We show that instability in HT, while not destructive or dangerous, renders the system unusable. However, stable and transparent teleoperation is possible with small time delays (<200 ms) through 3-channel teleoperation, or with large time delays through model-mediated teleoperation with local pose and force feedback for the novice.

Many remote and under-resourced communities face severe challenges in accessing qualified medical care. For example, ultrasound imaging is important, widely used, and much lower in cost than other modalities such as CT or MR. However, capturing and interpreting ultrasound images requires a high degree of expertise that is uncommon in many small communities. As a result, a sonographer or radiologist must travel to the town on a regular basis, or patients must be sent to a major medical center. Either case leads to long wait times and difficulty handling urgent cases. In communities across Canada, patients are flown hundreds of kilometers for standard ultrasound exams. This can take up to three days and exerts a high social and financial cost on the community.
Tele-ultrasound is therefore an important and growing field. However, current commercially available technologies are often impractical. Video teleguidance is simple, low-cost, and accessible to anyone, but it is highly inefficient and imprecise if the person being guided has no prior ultrasound experience [1]. Robotic tele-ultrasound, on the other hand, gives the physician complete and precise control but is expensive and complex to set up and maintain. We thus recently introduced a novel teleguidance method called human teleoperation to address the shortcomings of both existing approaches [1], [2]. This method is also applicable to many other remote guidance applications. In human teleoperation, a local novice, the "follower", performs an ultrasound exam on a patient while being guided by a remote expert, a sonographer or radiologist. The follower wears a mixed reality (MR) head-mounted display (HMD) that projects a virtual ultrasound probe into their field of view.
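To make the delay sensitivity mentioned above concrete, the following toy simulation sketches one channel of such a system: the follower is modeled as a first-order lag tracking the expert's pose command, which arrives after a communication delay. This is only an illustrative sketch, not the paper's model; the time constant, motion amplitude, and delay values are all assumed for illustration, and the force-feedback channel is omitted.

```python
import math

def simulate_delayed_teleop(delay_steps, n_steps=2000, dt=0.001):
    """Toy model: a follower (first-order lag, time constant tau)
    tracks the expert's pose command, received after delay_steps
    samples. Returns the peak expert-follower pose tracking error."""
    tau = 0.05  # assumed follower tracking time constant [s]
    leader = [0.0] * n_steps
    follower = [0.0] * n_steps
    for t in range(1, n_steps):
        # expert moves sinusoidally: 1 Hz, 2 cm amplitude (assumed)
        leader[t] = 0.02 * math.sin(2 * math.pi * 1.0 * t * dt)
        # pose command the follower actually receives (delayed)
        cmd = leader[t - delay_steps] if t >= delay_steps else 0.0
        # first-order lag: follower chases the delayed command
        follower[t] = follower[t - 1] + dt / tau * (cmd - follower[t - 1])
    return max(abs(leader[t] - follower[t]) for t in range(n_steps))

# Larger delay -> larger peak tracking error between expert and follower.
assert simulate_delayed_teleop(0) < simulate_delayed_teleop(200)
```

Even this naive open-loop model shows tracking degrading monotonically with delay; the paper's contribution is analyzing how closing a bilateral loop (pose forward, force back) over such a delay affects stability, which this sketch does not capture.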