HoloLens
Microsoft's wins, fails, and WTF moments of 2024
Microsoft's best and worst of 2024, not surprisingly, centered around AI. When Microsoft tried to force Copilot upon us, it didn't go so well. But when it used AI to enhance what was already working, the results were much more successful. For the last few years (2021, 2022, and 2023), we've recapped what I call Microsoft's wins, fails, and WTF moments: Microsoft's highs, lows, and the moments where you wondered what in the world this company was up to. We're not going to try to piece together the company's enterprise strategy (Azure and Copilot, basically).
Engadget Podcast: Why the Windows 11 2024 update is all about Copilot AI
This week, Microsoft started rolling out the Windows 11 2024 update, but it quickly became clear that the company was far more eager to unveil new features for its Copilot AI and Copilot+ PCs. In this episode, Devindra and Cherlynn chat about Microsoft's current AI priorities and what it means for people with older PCs. Also, we discuss the death of HoloLens and Microsoft giving up on AR as Meta, Apple and even Snap build for an augmented reality future. Listen below or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcast, Engadget News! Tech debt led to Sonos' disastrous app relaunch; will they be able to win users back? Google is making Gmail summaries more useful and adding a "happening soon" tab to your inbox – 41:11 Harvard students hack together facial recognition for Meta's smart glasses that instantly doxes strangers – 44:00 ...
- North America > United States > New York (0.04)
- North America > United States > Minnesota (0.04)
- Personal > Interview (1.00)
- Instructional Material (0.93)
- Media > Film (1.00)
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.34)
iTeach: Interactive Teaching for Robot Perception using Mixed Reality
P, Jishnu Jaykumar, Salvato, Cole, Bomnale, Vinaya, Wang, Jikai, Xiang, Yu
We introduce iTeach, a Mixed Reality (MR) framework that improves robot perception through real-time interactive teaching. By allowing human instructors to dynamically label robot RGB data, iTeach improves both the accuracy and the adaptability of robot perception in new scenarios. The framework supports on-the-fly data collection and labeling, enhancing model performance and generalization. Applied to door and handle detection for household tasks, iTeach integrates a HoloLens app with an interactive YOLO model. Furthermore, we introduce the IRVLUTD DoorHandle dataset. DH-YOLO, our efficient detection model, significantly enhances the accuracy and efficiency of door and handle detection, highlighting the potential of MR to make robotic systems more capable and adaptive in real-world environments. The project page is available at https://irvlutd.github.io/iTeach.
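The interactive teaching loop the abstract describes — an instructor corrects the detector's labels, and the corrections immediately feed back into the model — can be sketched as follows. This is a toy illustration, not the iTeach implementation; the classes and identifiers (`InteractiveDetector`, `teaching_session`, the frame IDs) are hypothetical stand-ins for the HoloLens app and the YOLO model.

```python
# Toy sketch of an iTeach-style interactive teaching loop.
# All names here are hypothetical stand-ins, not the real system's API.

class InteractiveDetector:
    """Minimal 'detector' that learns from labels it has been shown."""
    def __init__(self):
        self.dataset = []  # accumulated (image_id, label) pairs

    def predict(self, image_id):
        # Return the most recent human label seen for this image, if any.
        for img, label in reversed(self.dataset):
            if img == image_id:
                return label
        return "unknown"

    def add_correction(self, image_id, human_label):
        # On-the-fly data collection: the instructor's label is stored
        # and immediately available to later predictions.
        self.dataset.append((image_id, human_label))

def teaching_session(detector, corrections):
    """Feed a stream of instructor corrections to the detector."""
    for image_id, label in corrections:
        if detector.predict(image_id) != label:
            detector.add_correction(image_id, label)
    return detector

det = teaching_session(InteractiveDetector(),
                       [("frame_01", "door"), ("frame_02", "handle")])
print(det.predict("frame_01"))  # door
```

In the real framework the `add_correction` step would append annotated RGB frames to a training set and fine-tune the detector, rather than memorize labels.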
- Workflow (0.70)
- Research Report (0.50)
A Brain-Computer Interface Augmented Reality Framework with Auto-Adaptive SSVEP Recognition
Mustafa, Yasmine, Elmahallawy, Mohamed, Luo, Tie, Eldawlatly, Seif
Brain-Computer Interface (BCI) initially gained attention for developing applications that aid physically impaired individuals. Recently, the idea of integrating BCI with Augmented Reality (AR) emerged, using BCI not only to enhance the quality of life for individuals with disabilities but also to develop mainstream applications for healthy users. One commonly used BCI signal pattern is the Steady-state Visually-evoked Potential (SSVEP), which captures the brain's response to flickering visual stimuli. SSVEP-based BCI-AR applications enable users to express their needs/wants by simply looking at corresponding command options. However, individuals differ in their brain signals and thus require per-subject SSVEP recognition. Moreover, muscle movements and eye blinks interfere with brain signals, so subjects are required to remain still during BCI experiments, which limits AR engagement. In this paper, we (1) propose a simple adaptive ensemble classification system that handles inter-subject variability, (2) present a simple BCI-AR framework that supports the development of a wide range of SSVEP-based BCI-AR applications, and (3) evaluate the performance of our ensemble algorithm in an SSVEP-based BCI-AR application with head rotations, which demonstrated robustness to movement interference. Our testing on multiple subjects achieved a mean accuracy of 80% on a PC and 77% using the HoloLens AR headset, both of which surpass previous studies that incorporate individual classifiers and head movements. In addition, our visual stimulation time is 5 seconds, which is relatively short. The statistically significant results show that our ensemble classification approach outperforms individual classifiers in SSVEP-based BCIs.
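The ensemble idea — several weak classifiers voting on which flicker frequency the subject attended — can be illustrated with a minimal sketch. This is not the paper's algorithm: the sampling rate, stimulus frequencies, and correlation-based per-channel scoring below are assumptions chosen only to make the voting mechanism concrete.

```python
import numpy as np

# Minimal sketch of ensemble SSVEP classification (illustrative, not the
# paper's method). Each "classifier" scores candidate flicker frequencies
# by correlating one EEG channel against reference sinusoids; the
# ensemble picks the frequency with the most votes.

FS = 250                     # sampling rate in Hz (assumed)
DURATION = 5.0               # 5 s stimulation window, as in the study
FREQS = [8.0, 10.0, 12.0]    # hypothetical stimulus frequencies

def channel_vote(signal, fs=FS, freqs=FREQS):
    """One weak classifier: index of the best-matching frequency."""
    t = np.arange(len(signal)) / fs
    scores = [abs(np.corrcoef(signal, np.sin(2 * np.pi * f * t))[0, 1])
              for f in freqs]
    return int(np.argmax(scores))

def ensemble_classify(channels):
    """Majority vote across per-channel classifiers."""
    votes = [channel_vote(ch) for ch in channels]
    return FREQS[max(set(votes), key=votes.count)]

# Simulate three noisy channels responding to a 10 Hz stimulus.
rng = np.random.default_rng(0)
t = np.arange(int(FS * DURATION)) / FS
channels = [np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
            for _ in range(3)]
print(ensemble_classify(channels))  # 10.0
```

An adaptive ensemble in the paper's sense would additionally reweight or reselect the member classifiers per subject; the fixed majority vote here only shows the base mechanism.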
- Africa > Middle East > Egypt > Cairo Governorate > Cairo (0.04)
- North America > United States > Missouri (0.04)
- Research Report > New Finding (0.66)
- Research Report > Experimental Study (0.66)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (0.87)
Investigating the Usability of Collaborative Robot control through Hands-Free Operation using Eye gaze and Augmented Reality
Lee, Joosun, Lim, Taeyhang, Kim, Wansoo
This paper proposes a novel method for operating a mobile robot using a head-mounted device. Conventionally, robots are operated using computers or a joystick, which limits usability and flexibility because the control equipment has to be carried by hand. This lack of flexibility may prevent workers from multitasking or carrying objects while operating the robot. To address this limitation, we propose a hands-free method to operate the mobile robot with human gaze in an Augmented Reality (AR) environment. The proposed work is demonstrated using the HoloLens 2 to control the mobile robot, Robotnik Summit-XL, through eye gaze in AR. Stable speed control and navigation of the mobile robot were achieved through admittance control, which was computed from the gaze position. An experiment was conducted to compare the usability of the joystick and the proposed method, and the results were validated through surveys (i.e., SUS, SEQ). The survey results showed that participants wearing the HoloLens accurately operated the mobile robot in a collaborative manner. Both the joystick and the HoloLens were rated as easy to use, with above-average usability. This suggests that the HoloLens can replace the joystick for hands-free robot operation and has the potential to increase the efficiency of human-robot collaboration in situations where hands-free control is needed.
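Admittance control computed from a gaze position, as the abstract describes, can be sketched as a virtual force driving a damped velocity command. The mass, damping, and timestep values below are hypothetical, not the paper's parameters; the point is only the shape of the control law M·dv/dt + D·v = F.

```python
# Sketch of gaze-driven admittance control (illustrative; the paper's
# exact formulation and gains are not reproduced here).
# The gaze point's offset from the robot acts as a virtual force F, and
# the commanded velocity v follows M*dv/dt + D*v = F via an Euler step.

def admittance_step(v, gaze_offset, m=2.0, d=4.0, dt=0.02):
    """One control step: gaze offset (m) -> updated velocity (m/s).

    m (virtual mass), d (damping), and dt are assumed values.
    """
    force = gaze_offset            # virtual force from the gaze position
    dv = (force - d * v) / m       # M*dv/dt = F - D*v
    return v + dv * dt

# Robot accelerates smoothly toward a gaze point 1 m ahead, settling
# near the steady-state speed F/D = 0.25 m/s.
v = 0.0
for _ in range(500):
    v = admittance_step(v, gaze_offset=1.0)
print(round(v, 2))  # 0.25
```

Because the virtual inertia and damping low-pass-filter the gaze input, jittery eye movements produce smooth speed changes, which is why admittance control suits this interface.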
- North America > United States > Hawaii (0.04)
- Europe > Portugal > Azores (0.04)
- Europe > Greece (0.04)
- Research Report > Experimental Study (0.68)
- Research Report > New Finding (0.67)
Learning to Assist and Communicate with Novice Drone Pilots for Expert Level Performance
Backman, Kal, Kulić, Dana, Chung, Hoam
Multi-task missions for unmanned aerial vehicles (UAVs) involving inspection and landing tasks are challenging for novice pilots due to the difficulties associated with depth perception and the control interface. We propose a shared autonomy system, alongside supplementary information displays, to assist pilots in successfully completing multi-task missions without any pilot training. Our approach comprises three modules: (1) a perception module that encodes visual information into a latent representation, (2) a policy module that augments the pilot's actions, and (3) an information augmentation module that provides additional information to the pilot. The policy module is trained in simulation with simulated users and transferred to the real world without modification in a user study (n=29), alongside supplementary information schemes including learnt red/green light feedback cues and an augmented reality display. The pilot's intent is unknown to the policy module and is inferred from the pilot's input and the UAV's state. The assistant increased the task success rate for the landing and inspection tasks from 16.67% and 54.29%, respectively, to 95.59% and 96.22%. With the assistant, inexperienced pilots achieved performance similar to experienced pilots. Red/green light feedback cues reduced the required time by 19.53% and trajectory length by 17.86% for the inspection task, and participants rated them as their preferred condition due to the intuitive interface and the reassurance they provided. This work demonstrates that simple user models can train shared autonomy systems in simulation that transfer to physical tasks, estimating user intent and providing effective assistance and information to the pilot.
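The policy module's role — augmenting the pilot's actions in proportion to inferred intent — can be caricatured with a simple blending rule. The real system uses a learned policy over a latent state; everything below (`blend_action`, `infer_confidence`, the command values) is a hypothetical illustration of the blending idea only.

```python
# Sketch of shared-autonomy action blending (illustrative; the actual
# policy is a trained network inferring pilot intent from inputs and
# UAV state, not this hand-written rule).

def blend_action(pilot_cmd, assist_cmd, confidence):
    """Blend pilot and assistant commands per axis.

    confidence in [0, 1]: 0 = pure pilot control, 1 = pure assistance.
    """
    return [p + confidence * (a - p) for p, a in zip(pilot_cmd, assist_cmd)]

def infer_confidence(pilot_cmd, assist_cmd):
    # Toy intent inference: the more the pilot's command agrees with the
    # assistant's, the more authority the assistant takes. Real systems
    # infer intent from state history, not a single command.
    agreement = sum(p * a for p, a in zip(pilot_cmd, assist_cmd))
    return max(0.0, min(1.0, 0.5 + agreement))

pilot = [0.2, 0.0, -0.4]    # novice's roll/pitch/thrust command (made up)
assist = [0.3, 0.1, -0.5]   # assistant's command toward the landing pad
c = infer_confidence(pilot, assist)
print([round(x, 3) for x in blend_action(pilot, assist, c)])
```

The key design point the abstract highlights survives even in this caricature: the assistant never needs the pilot's goal explicitly, only a running estimate of it from observed commands.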
- Questionnaire & Opinion Survey (1.00)
- Research Report > Experimental Study (0.93)
- Research Report > New Finding (0.67)
- Information Technology > Robotics & Automation (1.00)
- Aerospace & Defense (0.87)
- Transportation (0.68)
- Government > Military (0.64)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
How creepy augmented reality enables seeing through walls
MIT is developing a new headset that would give its wearer the ability to see through walls. Artificial intelligence is just one corner of a rapidly growing technology sector; at MIT, the newest advancement is in augmented reality. Researchers are currently working on a device that would help people see beyond walls and other barriers. Although it might sound creepy, it also has some powerful benefits.
Meet the Microsoft graveyard of dead hardware
Rest in peace, Microsoft PC peripherals. You've probably heard of the Google Graveyard, the collection of apps, services and products that Google shut down before their time. But like any big company, Microsoft has also tried and failed to make certain products work. In light of Microsoft's decision to discontinue PC peripherals like the Microsoft Sculpt Desktop Keyboard, let's look at some of the products that litter the Microsoft hardware graveyard. Microsoft made RAMCards, one of the first solid-state disks, for both the Apple II and the IBM PC in the early 1980s. Instead of non-volatile memory like today's SSDs, however, these were essentially memory expansion cards, adding 16KB of RAM to an Apple II with 48KB already in place.
Inside-out Infrared Marker Tracking via Head Mounted Displays for Smart Robot Programming
Puljiz, David, Vasilache, Alexandru-George, Mende, Michael, Hein, Björn
Intuitive robot programming through the use of tracked smart input devices relies on fixed, external tracking systems, most often employing infra-red markers. Such an approach is frequently combined with projector-based augmented reality for better visualisation and interfacing. The combined system, although providing an intuitive programming platform with short cycle times even for inexperienced users, is immobile, expensive, and requires extensive calibration. When faced with a changing environment and a large number of robots, it becomes sorely impractical. Here we present our work on infra-red marker tracking using the Microsoft HoloLens head-mounted display. The HoloLens can map the environment, register the robot on-line, and track smart devices equipped with infra-red markers in the robot coordinate system. We envision our work providing the basis for transferring many of the paradigms developed over the years for systems requiring a projector and a tracked input device into a highly portable system that requires no calibration or special set-up. We test the quality of the marker tracking in an industrial robot cell and compare our tracking with ground truth obtained via an ART-3 tracking system.
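Tracking a marker "in the robot coordinate system" amounts to composing homogeneous transforms: given the headset's estimates of the robot pose and the marker pose in its world frame, the marker pose in the robot frame follows by one matrix inversion and one multiplication. A minimal sketch, with made-up poses (the function names and numbers are illustrative, not from the paper):

```python
import numpy as np

# Sketch of expressing a tracked marker pose in the robot's frame.
# T_world_robot and T_world_marker stand in for the 4x4 homogeneous
# transforms the HoloLens would estimate via mapping and registration.

def make_transform(yaw, tx, ty, tz):
    """Homogeneous transform: rotation about z by `yaw`, then translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

def marker_in_robot_frame(T_world_robot, T_world_marker):
    # T_robot_marker = inv(T_world_robot) @ T_world_marker
    return np.linalg.inv(T_world_robot) @ T_world_marker

# Robot at (1, 0, 0), rotated 90 deg; marker at (1, 2, 0.5) in the world.
T_world_robot = make_transform(np.pi / 2, 1.0, 0.0, 0.0)
T_world_marker = make_transform(0.0, 1.0, 2.0, 0.5)
p = marker_in_robot_frame(T_world_robot, T_world_marker)[:3, 3]
print(np.round(p, 2))  # marker position in robot coordinates
```

Because both poses come from the same headset, no external calibration between tracker and robot cell is needed, which is the portability argument the abstract makes.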
- Europe > Germany > Baden-Württemberg > Karlsruhe Region > Karlsruhe (0.05)
- North America > United States > New Jersey > Middlesex County > Piscataway (0.04)
Brain-Computer Interface Enables Mind Control of Robot Dog
A new peer-reviewed study published in ACS Applied Nano Materials demonstrates a new type of AI-enabled brain-machine interface (BMI) featuring noninvasive biosensor nanotechnology and augmented reality that enables humans to use their thoughts to control robots with a high degree of accuracy. Brain-machine interfaces (BMIs) are hands-free, voice-command-free communication systems that allow an individual to operate external devices through brain waves, with vast potential for future robotics, bionic prosthetics, neurogaming, electronics, and autonomous vehicles. The artificial intelligence (AI) renaissance, driven by the improved pattern-recognition capabilities of deep neural networks, is accelerating advances in brain-machine interfaces, also known as brain-computer interfaces (BCIs). AI deep learning helps find the relevant signals in noisy brain activity data. The neural activity of the human brain is recorded using sensors.