HoloLens 2
MRHaD: Mixed Reality-based Hand-Drawn Map Editing Interface for Mobile Robot Navigation
Taki, Takumi, Kobayashi, Masato, Iglesius, Eduardo, Chiba, Naoya, Shirai, Shizuka, Uranishi, Yuki
Mobile robot navigation systems are increasingly relied upon in dynamic and complex environments, yet they often struggle with map inaccuracies and the resulting inefficient path planning. This paper presents MRHaD, a Mixed Reality-based Hand-drawn Map Editing Interface that enables intuitive, real-time map modifications through natural hand gestures. By integrating an MR head-mounted display with the robotic navigation system, operators can directly create hand-drawn restricted zones (HRZ), thereby bridging the gap between 2D map representations and the real-world environment. Comparative experiments against conventional 2D editing methods demonstrate that MRHaD significantly improves editing efficiency, map accuracy, and overall usability, contributing to safer and more efficient mobile robot operations. The proposed approach provides a robust technical foundation for advancing human-robot collaboration and for establishing innovative interaction models that enhance the hybrid future of robotics and human society.
I. INTRODUCTION
Recent advances in autonomous mobile robots have opened up new opportunities for human-robot collaboration in various application domains, including logistics, healthcare, and public spaces [1], [2], [3]. Typically, these robots use pre-constructed environmental maps and dynamically adjust their paths based on real-time environmental sensing with various onboard sensors. Path planning methods are generally divided into two categories: global planning and local planning [4].
- Asia > Japan > Honshū > Kansai > Osaka Prefecture > Osaka (0.05)
- North America > United States > Hawaii (0.04)
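The hand-drawn restricted zones (HRZ) described above amount to masking cells of the robot's 2D occupancy grid so that the planner routes around them. A minimal sketch of that idea, assuming a NumPy occupancy grid and a polygon drawn in map coordinates (the function names, 5 cm resolution, and occupancy value 100 are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def point_in_polygon(x, y, poly):
    """Even-odd rule point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line at y.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def apply_restricted_zone(grid, polygon, resolution=0.05):
    """Mark cells whose centres fall inside a hand-drawn polygon (in metres) as occupied."""
    out = grid.copy()
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            wx, wy = (c + 0.5) * resolution, (r + 0.5) * resolution
            if point_in_polygon(wx, wy, polygon):
                out[r, c] = 100  # occupied; the planner will route around it
    return out

free = np.zeros((40, 40), dtype=np.int8)                  # 2 m x 2 m map at 5 cm/cell
zone = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]  # a 1 m square zone
updated = apply_restricted_zone(free, zone)
```

A real system would rasterize the drawn stroke more efficiently and publish the updated grid to the navigation stack; the loop above just makes the masking step explicit.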
Gesture Recognition for Feedback Based Mixed Reality and Robotic Fabrication: A Case Study of the UnLog Tower
Kyaw, Alexander Htet, Spencer, Lawson, Zivkovic, Sasa, Lok, Leslie
Mixed Reality (MR) platforms enable users to interact with three-dimensional holographic instructions during the assembly and fabrication of highly custom and parametric architectural constructions without the necessity of two-dimensional drawings. Previous MR fabrication projects have primarily relied on digital menus and custom buttons as the interface for user interaction with the MR environment. Despite this approach being widely adopted, it is limited in its ability to allow for direct human interaction with physical objects to modify fabrication instructions within the MR environment. This research integrates user interactions with physical objects through real-time gesture recognition as input to modify, update, or generate new digital information, enabling reciprocal stimuli between the physical and the virtual environment. Consequently, the digital environment is generated from the user's interaction with physical objects, allowing seamless feedback in the fabrication process. This research investigates gesture recognition for feedback-based MR workflows for robotic fabrication, human assembly, and quality control in the construction of the UnLog Tower.
- North America > United States > Texas > Travis County > Austin (0.14)
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
- (4 more...)
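The gesture-driven feedback loop described above can be pictured as a dispatcher that routes each recognized gesture to an update of the holographic instruction state. A hypothetical sketch (the gesture names, handlers, and state fields are illustrative assumptions, not the project's actual API):

```python
from typing import Callable, Dict

class InstructionState:
    """Holographic instruction state shared between the MR scene and the user."""
    def __init__(self):
        self.step = 0   # current fabrication step shown to the user
        self.log = []   # record of gesture-triggered updates

def advance(state):
    """E.g. a 'tap' on a placed member confirms it and advances the sequence."""
    state.step += 1
    state.log.append(("advance", state.step))

def flag_for_review(state):
    """E.g. a 'grab' marks the current member for quality-control review."""
    state.log.append(("review", state.step))

HANDLERS: Dict[str, Callable] = {"tap": advance, "grab": flag_for_review}

def on_gesture(state, gesture):
    """Route a recognized gesture to its instruction update, ignoring unknowns."""
    handler = HANDLERS.get(gesture)
    if handler:
        handler(state)
    return state.step
```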
Walk along: An Experiment on Controlling the Mobile Robot 'Spot' with Voice and Gestures
Zhang, Renchi, van der Linden, Jesse, Dodou, Dimitra, Seyffert, Harleigh, Eisma, Yke Bauke, de Winter, Joost C. F.
Abstract: Robots are becoming increasingly intelligent and can autonomously perform tasks such as navigating between locations. However, human oversight remains crucial. This study compared two hands-free methods for directing mobile robots: voice control and gesture control. These methods were tested with the human stationary and walking freely. We hypothesized that walking with the robot would lead to higher intuitiveness ratings and better task performance due to increased stimulus-response compatibility, assuming humans align themselves with the robot. In a 2 × 2 within-subject design, 218 participants guided the quadrupedal robot Spot using 90° rotation and walk-forward commands. After each trial, participants rated the intuitiveness of the command mapping, while post-experiment interviews were used to gather the participants' preferences. Results showed that voice control combined with walking with Spot was the most favored and intuitive, while gesture control while standing caused confusion for left/right commands. Despite this, 29% of participants preferred gesture control, citing task engagement and visual congruence as reasons. An odometry-based analysis revealed that participants aligned themselves behind Spot, particularly in the gesture control condition, when allowed to walk. In conclusion, voice control with walking produced the best outcomes. Improving physical ergonomics and adjusting gesture types could improve the effectiveness of gesture control.
Introduction
Robots have traditionally been viewed as devices designed to efficiently perform repetitive tasks, mainly in industrial settings and logistical operations. However, with the advancement of AI, robots increasingly take on new roles. Modern robots can understand and adapt to their surroundings, paving the way for mobile robotics.
The human-machine interface (HMI) plays a vital role in the control of mobile robots, as these robots are not yet capable of fully autonomous operation in open-ended environments (e.g., Endsley, 2017; Ezenkwu & Starkey, 2019; Hatanaka et al., 2023; Pianca & Santucci, 2023).
- Europe > Netherlands > South Holland > Delft (0.04)
- North America > United States > Colorado > Boulder County > Boulder (0.04)
- Europe > Germany > Hamburg (0.04)
- (9 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Health & Medicine > Therapeutic Area (0.68)
- Health & Medicine > Consumer Health (0.66)
- Information Technology > Robotics & Automation (0.46)
- Government > Military (0.46)
Microsoft announces layoffs and restructuring in its mixed reality division
Microsoft is laying off employees working on mixed reality as part of a restructuring of the division, CNBC has reported. The company will continue to sell the HoloLens 2 augmented reality (AR) headset, a key device produced by that department. "Earlier today we announced a restructuring of the Microsoft's Mixed Reality organization," a spokesperson told CNBC in an email. "We remain fully committed to the Department of Defense's IVAS program and will continue to deliver cutting edge technology to support our soldiers. In addition, we will continue to invest in W365 to reach the broader Mixed Reality hardware ecosystem. We will continue to sell HoloLens 2 while supporting existing HoloLens 2 customers and partners."
- Government > Regional Government > North America Government > United States Government (0.61)
- Government > Military > Army (0.58)
SIGMA: An Open-Source Interactive System for Mixed-Reality Task Assistance Research
Bohus, Dan, Andrist, Sean, Saw, Nick, Paradiso, Ann, Chakraborty, Ishani, Rad, Mahdi
We introduce an open-source system called SIGMA (short for "Situated Interactive Guidance, Monitoring, and Assistance") as a platform for conducting research on task-assistive agents in mixed-reality scenarios. The system leverages the sensing and rendering affordances of a head-mounted mixed-reality device in conjunction with large language and vision models to guide users step by step through procedural tasks. We present the system's core capabilities, discuss its overall design and implementation, and outline directions for future research enabled by the system. SIGMA is easily extensible and provides a useful basis for future research at the intersection of mixed reality and AI. By open-sourcing an end-to-end implementation, we aim to lower the barrier to entry, accelerate research in this space, and chart a path towards community-driven end-to-end evaluation of large language, vision, and multimodal models in the context of real-world interactive applications.
- North America > United States > Hawaii (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Middle East > Malta (0.04)
- (2 more...)
- Workflow (0.93)
- Research Report (0.64)
- Health & Medicine (0.68)
- Education (0.46)
Semi-Automatic Infrared Calibration for Augmented Reality Systems in Surgery
Iqbal, Hisham, Baena, Ferdinando Rodriguez y
Augmented reality (AR) has the potential to improve the immersion and efficiency of computer-assisted orthopaedic surgery (CAOS) by allowing surgeons to maintain focus on the operating site rather than external displays in the operating theatre. Successful deployment of AR to CAOS requires a calibration that can accurately calculate the spatial relationship between real and holographic objects. Several studies attempt this calibration through manual alignment or with additional fiducial markers in the surgical scene. We propose a calibration system that offers a direct method for the calibration of AR head-mounted displays (HMDs) with CAOS systems, by using the infrared-reflective marker arrays widely used in CAOS. In our fast, user-agnostic setup, a HoloLens 2 detected the pose of marker arrays using infrared response and time-of-flight depth obtained through sensors onboard the HMD. Registration with a commercially available CAOS system was achieved when an IR marker array was visible to both devices. Study tests found relative-tracking mean errors of 2.03 mm and 1.12° when calculating the relative pose between two static marker arrays at short ranges. When using the calibration result to provide in-situ holographic guidance for a simulated wire-insertion task, a pre-clinical test reported mean errors of 2.07 mm and 1.54° when compared to a pre-planned trajectory.
- Research Report (1.00)
- Workflow (0.95)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Surgery (0.68)
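The relative-tracking errors quoted above (2.03 mm and 1.12°) compare an estimated relative pose between two marker arrays against a reference pose. A minimal sketch of how such translation and rotation errors can be computed from 4×4 homogeneous transforms (an illustrative calculation under standard conventions, not the paper's implementation):

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_a, T_b):
    """Pose of marker array B expressed in array A's frame."""
    return np.linalg.inv(T_a) @ T_b

def pose_error(T_est, T_ref):
    """Translation error (same units as t) and rotation error (degrees)."""
    dT = np.linalg.inv(T_ref) @ T_est
    trans_err = np.linalg.norm(dT[:3, 3])
    # Rotation angle recovered from the trace of the residual rotation matrix.
    cos_theta = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_theta))
    return trans_err, rot_err

# Example: a reference pose vs. an estimate 2 mm off along x (rotation unchanged).
T_ref = make_pose(np.eye(3), np.array([0.100, 0.0, 0.0]))
T_est = make_pose(np.eye(3), np.array([0.102, 0.0, 0.0]))
trans_err, rot_err = pose_error(T_est, T_ref)
```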
On the Fly Robotic-Assisted Medical Instrument Planning and Execution Using Mixed Reality
Ai, Letian, Liu, Yihao, Armand, Mehran, Kheradmand, Amir, Martin-Gomez, Alejandro
Robotic-assisted medical systems (RAMS) have gained significant attention for their advantages in alleviating surgeons' fatigue and improving patients' outcomes. These systems comprise a range of human-computer interactions, including medical scene monitoring, anatomical target planning, and robot manipulation. However, despite its versatility and effectiveness, RAMS demands expertise in robotics, leading to a high learning cost for the operator. In this work, we introduce a novel framework using mixed reality technologies to ease the use of RAMS. The proposed framework achieves real-time planning and execution of medical instruments by providing 3D anatomical image overlay, human-robot collision detection, and robot programming interface. These features, integrated with an easy-to-use calibration method for head-mounted display, improve the effectiveness of human-robot interactions. To assess the feasibility of the framework, two medical applications are presented in this work: 1) coil placement during transcranial magnetic stimulation and 2) drill and injector device positioning during femoroplasty. Results from these use cases demonstrate its potential to extend to a wider range of medical scenarios.
- North America > United States > Maryland > Baltimore (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Canada > Ontario (0.04)
- (4 more...)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Diagnostic Medicine (0.93)
MRNaB: Mixed Reality-based Robot Navigation Interface using Optical-see-through MR-beacon
Iglesius, Eduardo, Kobayashi, Masato, Uranishi, Yuki, Takemura, Haruo
Recent advancements in robotics have led to the development of numerous interfaces to enhance the intuitiveness of robot navigation. However, the reliance on traditional 2D displays imposes limitations on the simultaneous visualization of information. Mixed Reality (MR) technology addresses this issue by enhancing the dimensionality of information visualization, allowing users to perceive multiple pieces of information concurrently. This paper proposes MRNaB, a Mixed Reality-based robot navigation interface using an optical see-through MR-beacon: a novel approach that places an MR-beacon atop the real-world environment to function as a signal transmitter for robot navigation. This MR-beacon is designed to be persistent, eliminating the need for repeated navigation inputs for the same location. Our system comprises four primary functions: "Add", "Move", "Delete", and "Select". These allow for the addition of an MR-beacon, the movement of its location, its deletion, and the selection of an MR-beacon for navigation purposes, respectively. The effectiveness of the proposed method was validated through experiments comparing it with a traditional 2D system. As a result, MRNaB was shown, both subjectively and objectively, to improve user performance when directing the robot to a given location. For additional material, please check: https://mertcookimg.github.io/mrnab
- Asia > Japan > Honshū > Kansai > Osaka Prefecture > Osaka (0.04)
- North America > United States (0.04)
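The four beacon operations named above ("Add", "Move", "Delete", "Select") form a small persistent store of navigation goals. A minimal sketch of that interface, assuming beacons are points in a shared MR/robot map frame (the class and field names are illustrative assumptions, not the paper's code):

```python
from dataclasses import dataclass

@dataclass
class Beacon:
    beacon_id: int
    position: tuple  # (x, y, z) in the shared MR/robot map frame

class BeaconManager:
    """Persistent MR-beacon store exposing Add, Move, Delete, and Select."""
    def __init__(self):
        self._beacons = {}
        self._next_id = 0
        self.selected = None

    def add(self, position):
        """Add: create a new beacon at the given position; return its id."""
        bid = self._next_id
        self._next_id += 1
        self._beacons[bid] = Beacon(bid, position)
        return bid

    def move(self, beacon_id, position):
        """Move: relocate an existing beacon."""
        self._beacons[beacon_id].position = position

    def delete(self, beacon_id):
        """Delete: remove a beacon, clearing the selection if it pointed here."""
        del self._beacons[beacon_id]
        if self.selected == beacon_id:
            self.selected = None

    def select(self, beacon_id):
        """Select: choose the beacon to send as the robot's navigation goal."""
        self.selected = beacon_id
        return self._beacons[beacon_id].position
```

Persistence is the point: because beacons survive between sessions, a user never re-enters a goal for a location that already has one.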
Detection and Pose Estimation of flat, Texture-less Industry Objects on HoloLens using synthetic Training
Pöllabauer, Thomas, Rücker, Fabian, Franek, Andreas, Gorschlüter, Felix
Current state-of-the-art 6D pose estimation is too compute-intensive to be deployed on edge devices such as the Microsoft HoloLens 2 or Apple iPad, both used for an increasing number of augmented reality applications. The quality of AR is greatly dependent on its capability to detect and overlay geometry within the scene. We propose a synthetically trained, client-server-based augmented reality application demonstrating state-of-the-art object pose estimation of metallic, texture-less industry objects on edge devices. Synthetic data enables training without real photographs, i.e. for yet-to-be-manufactured objects. Our qualitative evaluation on an AR-assisted sorting task and quantitative evaluation on both renderings and real-world data recorded on HoloLens 2 shed light on its real-world applicability.
- Information Technology > Artificial Intelligence > Vision > Video Understanding (0.85)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.47)
Augmented Reality User Interface for Command, Control, and Supervision of Large Multi-Agent Teams
Regal, Frank, Suarez, Chris, Parra, Fabian, Pryor, Mitch
Multi-agent human-robot teaming allows for the potential to gather information about various environments more efficiently by exploiting and combining the strengths of humans and robots. In industries like defense, search and rescue, first-response, and others alike, heterogeneous human-robot teams show promise to accelerate data collection and improve team safety by removing humans from unknown and potentially hazardous situations. This work builds upon AugRE, an Augmented Reality (AR) based scalable human-robot teaming framework. It enables users to localize and communicate with 50+ autonomous agents. Through our efforts, users are able to command, control, and supervise agents in large teams, both line-of-sight and non-line-of-sight, without the need to modify the environment prior and without requiring users to use typical hardware (i.e. joysticks, keyboards, laptops, tablets, etc.) in the field. The demonstrated work shows early indications that combining these AR-HMD-based user interaction modalities for command, control, and supervision will help improve human-robot team collaboration, robustness, and trust.
- North America > United States > Texas > Travis County > Austin (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Canada (0.04)
- (2 more...)