manual mode


CoNav Chair: Development and Evaluation of a Shared Control based Wheelchair for the Built Environment

Xu, Yifan, Wang, Qianwei, Lillie, Jordan, Kamat, Vineet, Menassa, Carol, D'Souza, Clive

arXiv.org Artificial Intelligence

As the global population of people with disabilities (PWD) continues to grow, so will the need for mobility solutions that promote independent living and social integration. Wheelchairs are vital for the mobility of PWD in both indoor and outdoor environments. The current SOTA in powered wheelchairs is based on either manually controlled or fully autonomous modes of operation, offering limited flexibility and often proving difficult to navigate in spatially constrained environments. Moreover, research on robotic wheelchairs has focused predominantly on complete autonomy or improved manual control, approaches that can compromise efficiency and user trust. To overcome these challenges, this paper introduces the CoNav Chair, a smart wheelchair based on the Robot Operating System (ROS) and featuring shared control navigation and obstacle avoidance capabilities that are intended to enhance navigational efficiency, safety, and ease of use. The paper outlines the CoNav Chair's design and presents a preliminary usability evaluation comparing three distinct navigation modes, namely manual, shared, and fully autonomous, conducted with 21 healthy, unimpaired participants traversing an indoor building environment. Study findings indicated that the shared control navigation framework had significantly fewer collisions and performed comparably to, if not better than, the autonomous and manual modes on task completion time, trajectory length, and smoothness; it was also perceived as safer and more efficient based on user-reported subjective assessments of usability. Overall, the CoNav system demonstrated acceptable safety and performance, laying the foundation for subsequent usability testing with end users, namely PWDs who rely on a powered wheelchair for mobility.
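As a rough illustration of the shared-control idea described in the abstract, the sketch below blends a user's joystick command with an autonomous planner's command, shifting authority toward the planner as obstacle clearance shrinks. This is a minimal sketch, not the CoNav Chair's ROS implementation; the Twist fields, the blend_commands function, and the clearance thresholds are illustrative assumptions.

```python
# Minimal shared-control velocity blender, assuming a ROS-style Twist interface.
# Names and thresholds are illustrative, not taken from the CoNav Chair codebase.
from dataclasses import dataclass

@dataclass
class Twist:
    linear: float   # forward velocity (m/s)
    angular: float  # yaw rate (rad/s)

def blend_commands(user: Twist, planner: Twist, clearance: float,
                   min_clearance: float = 0.3, max_clearance: float = 1.5) -> Twist:
    """Blend user and planner commands: the closer the nearest obstacle,
    the more authority shifts from the user to the autonomous planner."""
    # Map obstacle clearance to a user-authority weight alpha in [0, 1].
    span = max_clearance - min_clearance
    alpha = min(max((clearance - min_clearance) / span, 0.0), 1.0)
    return Twist(
        linear=alpha * user.linear + (1.0 - alpha) * planner.linear,
        angular=alpha * user.angular + (1.0 - alpha) * planner.angular,
    )

if __name__ == "__main__":
    user_cmd = Twist(linear=0.8, angular=0.0)     # user pushes straight ahead
    planner_cmd = Twist(linear=0.4, angular=0.5)  # planner steers around an obstacle
    print(blend_commands(user_cmd, planner_cmd, clearance=0.6))
```

In an actual ROS node, the user and planner commands would come from a joystick topic and the navigation stack respectively, and the blended Twist would be published to the wheelchair's velocity controller.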


STREAMS: An Assistive Multimodal AI Framework for Empowering Biosignal Based Robotic Controls

Rabiee, Ali, Ghafoori, Sima, Bai, Xiangyu, Ostadabbas, Sarah, Abiri, Reza

arXiv.org Artificial Intelligence

End-effector based assistive robots face persistent challenges in generating smooth and robust trajectories when controlled by humans' noisy and unreliable biosignals such as muscle activity and brainwaves. The resulting endpoint trajectories are often too jerky and imprecise for complex tasks such as stable robotic grasping. We propose STREAMS (Self-Training Robotic End-to-end Adaptive Multimodal Shared autonomy), a novel framework that leverages deep reinforcement learning to tackle this challenge in biosignal based robotic control systems. STREAMS blends environmental information and synthetic user input in a Deep Q-Learning Network (DQN) pipeline, providing an interactive, end-to-end, self-training mechanism that produces smooth trajectories for the control of end-effector based robots. The proposed framework achieved 98% performance in simulation with dynamic target estimation and acquisition, without any pre-existing datasets. In a zero-shot sim-to-real user study with five participants controlling a physical robotic arm with noisy head movements, STREAMS (as an assistive mode) demonstrated significant improvements in trajectory stabilization, user satisfaction, and task performance, with a success rate of 83% compared to 44% in manual mode without any task support. STREAMS seeks to improve biosignal based assistive robotic controls by offering an interactive, end-to-end solution that stabilizes end-effector trajectories, enhancing task performance and accuracy.
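To make the fusion idea concrete, the sketch below shows a DQN-style Q-network that takes a concatenation of environment state and a noisy user-intent signal and selects discrete end-effector actions epsilon-greedily. It is a minimal sketch in the spirit of the described pipeline, not the authors' STREAMS code; the layer sizes, action set, and the FusionQNet/select_action names are assumptions.

```python
# Minimal DQN-style policy fusing environment state with a noisy user signal.
# Architecture and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class FusionQNet(nn.Module):
    def __init__(self, env_dim: int, user_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(env_dim + user_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),   # one Q-value per discrete end-effector move
        )

    def forward(self, env_state: torch.Tensor, user_signal: torch.Tensor) -> torch.Tensor:
        # Concatenate the two modalities before estimating action values.
        return self.net(torch.cat([env_state, user_signal], dim=-1))

def select_action(qnet: FusionQNet, env_state, user_signal, epsilon: float = 0.1) -> int:
    """Epsilon-greedy action selection over discretized end-effector motions."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(qnet.net[-1].out_features, (1,)).item()
    with torch.no_grad():
        q = qnet(env_state, user_signal)
    return int(q.argmax().item())

# Example: 10-D environment state, 3-D noisy head-movement signal, 6 discrete moves.
qnet = FusionQNet(env_dim=10, user_dim=3, n_actions=6)
action = select_action(qnet, torch.randn(10), torch.randn(3))
```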


DJI Neo hands-on: A powerful and lightweight $200 drone

Engadget

DJI has just unveiled the Neo, its much-leaked $200 drone aimed at content creators and casual users. It's tiny and easy to use thanks to novice-friendly features like propeller guards, palm takeoff and voice control. However, the Neo is no toy (or Snap Pixy). It has a suite of powerful features like ActiveTrack, Quick Shots, FPV controller support, smartphone control and the ability to record yourself with the DJI Mic 2. Video specs look promising as well, but not everything is perfect -- it lacks obstacle detection and uses small propellers that are likely to be noisy. I wasn't able to give it a full look as some features were missing, but I was still astonished by what DJI got a small, cheap drone to do. The Neo is DJI's lightest drone by a long way at 135 grams and is nearly small enough to fit into a pocket.


Disturbance Injection under Partial Automation: Robust Imitation Learning for Long-horizon Tasks

Tahara, Hirotaka, Sasaki, Hikaru, Oh, Hanbit, Anarossi, Edgar, Matsubara, Takamitsu

arXiv.org Artificial Intelligence

Partial Automation (PA) with intelligent support systems has been introduced in industrial machinery and advanced automobiles to reduce the burden of long hours of human operation. Under PA, operators perform manual operations (providing actions) and operations that switch between automatic and manual mode (mode-switching). Since PA reduces the total duration of manual operation, these two operations, providing actions and mode-switching, can be replicated by imitation learning with high sample efficiency. To this end, this paper proposes Disturbance Injection under Partial Automation (DIPA) as a novel imitation learning framework. In DIPA, the mode and the actions (in manual mode) are assumed to be observable in each state and are used to learn both an action policy and a mode-switching policy. This learning is robustified by injecting disturbances into the operator's actions, with the disturbance level optimized to minimize covariate shift under PA. We experimentally validated the effectiveness of our method on long-horizon tasks in two simulations and a real robot environment, confirming that it outperformed previous methods and reduced the demonstration burden.
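The core mechanism, injecting disturbances into the operator's manual actions while recording the clean actions and the chosen mode as imitation labels, can be sketched as a simple data-collection loop. This is a minimal illustration of the idea, not the DIPA implementation; the LineEnv and ScriptedOperator stand-ins and the noise level are toy assumptions.

```python
# Sketch of disturbance injection during demonstration collection: the executed
# action is perturbed in manual mode, while the clean action and the mode are
# recorded as imitation labels. Environment and operator are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def collect_demonstration(env, operator, noise_std: float, horizon: int = 50):
    """Roll out one demonstration, recording (state, mode, clean action) while
    executing a noise-disturbed action whenever the operator is in manual mode."""
    data, state = [], env.reset()
    for _ in range(horizon):
        mode = operator.choose_mode(state)            # 'manual' or 'auto'
        if mode == "manual":
            action = operator.act(state)              # clean imitation label
            executed = action + rng.normal(0.0, noise_std)
        else:
            action = None                             # automation supplies the action
            executed = env.autopilot(state)
        data.append((state, mode, action))
        state = env.step(executed)
    return data

class LineEnv:
    """Toy 1-D environment so the sketch runs end to end."""
    def reset(self):
        self.x = 0.0
        return self.x
    def step(self, u):
        self.x += u
        return self.x
    def autopilot(self, x):
        return 1.0 - 0.1 * x                          # drift toward x = 10

class ScriptedOperator:
    """Toy operator: drive manually until x reaches 5, then hand over to automation."""
    def choose_mode(self, x):
        return "manual" if x < 5.0 else "auto"
    def act(self, x):
        return 0.5                                    # constant push forward

demos = collect_demonstration(LineEnv(), ScriptedOperator(), noise_std=0.1)
```

The recorded tuples would then supervise two policies: one regressing the clean action in manual-mode states and one classifying the mode to switch to in each state.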


Into-TTS : Intonation Template Based Prosody Control System

Lee, Jihwan, Lee, Joun Yeop, Choi, Heejin, Mun, Seongkyu, Park, Sangjun, Bae, Jae-Sung, Kim, Chanwoo

arXiv.org Artificial Intelligence

Intonations play an important role in delivering the intention of a speaker. However, current end-to-end TTS systems often fail to model proper intonations. To alleviate this problem, we propose a novel, intuitive method to synthesize speech in different intonations using predefined intonation templates. Prior to TTS model training, speech data are grouped into intonation templates in an unsupervised manner. Two proposed modules are added to the end-to-end TTS framework: an intonation predictor and an intonation encoder. The intonation predictor recommends a suitable intonation template for the given text. The intonation encoder, attached to the text encoder output, synthesizes speech abiding by the requested intonation template. The main contributions of our paper are: (a) an easy-to-use intonation control system covering a wide range of users; (b) better performance in wrapping speech in a requested intonation, with improved objective and subjective evaluation; and (c) incorporation of a pre-trained language model for intonation modelling. Audio samples are available at https://srtts.github.io/IntoTTS.
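One plausible reading of the unsupervised grouping step is to resample utterance-level pitch (F0) contours to a fixed length and cluster them, with each cluster centroid serving as an intonation template. The sketch below illustrates that reading; the contour extraction, the k-means clustering, and the number of templates are assumptions, not details taken from the paper.

```python
# Sketch of forming intonation "templates" by clustering fixed-length F0 contours.
# The clustering method and template count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def to_fixed_length(f0: np.ndarray, n_points: int = 50) -> np.ndarray:
    """Resample a variable-length F0 contour to a fixed length so contours
    from different utterances can be compared and clustered."""
    idx = np.linspace(0, len(f0) - 1, n_points)
    return np.interp(idx, np.arange(len(f0)), f0)

def build_intonation_templates(contours, n_templates: int = 6):
    """Cluster resampled contours; each centroid acts as a template, and each
    utterance receives a template id used to train the predictor and encoder."""
    X = np.stack([to_fixed_length(c) for c in contours])
    km = KMeans(n_clusters=n_templates, n_init=10, random_state=0).fit(X)
    return km.cluster_centers_, km.labels_

# Toy example: synthetic rising and falling contours of varying lengths.
rng = np.random.default_rng(0)
rising = [np.linspace(100, 180, rng.integers(40, 80)) for _ in range(20)]
falling = [np.linspace(180, 100, rng.integers(40, 80)) for _ in range(20)]
templates, labels = build_intonation_templates(rising + falling, n_templates=2)
```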


Teaming up with information agents

van Diggelen, Jurriaan, Jorritsma, Wiard, van der Vecht, Bob

arXiv.org Artificial Intelligence

Despite the intricacies involved in designing a computer as a team partner, we can observe patterns in team behavior which allow us to describe at a general level how AI systems are to collaborate with humans. Whereas most work on human-machine teaming has focused on physical agents (e.g., robotic systems), our aim is to study how humans can collaborate with information agents. We propose some appropriate team design patterns and test them using our Collaborative Intelligence Analysis (CIA) tool.


Honda's 'augmented driving' concept toggles between autonomous and manual by watching your eyes

Daily Mail - Science & tech

While many automakers are in a rush to nix traditional driving in favor of fully autonomous vehicles, Honda is holding on tight to the steering wheel in a new 'augmented' experience that blends the best of both worlds. The concept, which is on display at CES in Las Vegas, combines several novel driving technologies that are designed to help drivers seamlessly switch between manual and autonomous modes, including a moveable steering wheel that doubles as an accelerator and brake. The wheel, which Honda provided MailOnline a simulated demo of, is controlled by either pulling it toward one's body (braking) or pushing it away (accelerating). It is also equipped with sensors around the outer ring that can feel a driver's touch. When the car is in its autonomous state, a passenger can swipe their hand left or right over the top of the steering wheel to make it change lanes.


Uber is investing $150M in Toronto to expand self-driving car efforts

#artificialintelligence

Months after an Uber self-driving vehicle struck and killed a pedestrian in Tempe, Arizona, the ride-hailing giant has announced it's adding a new engineering hub in Toronto and expanding its autonomous research team as it refocuses its self-driving car efforts. In his first visit to the Canadian tech hub since becoming CEO of Uber last year, Dara Khosrowshahi announced plans to invest $150 million in Toronto over the next five years. Uber will bring on 300 new employees, bringing the company's total headcount in Toronto to 500. The new engineering hub is expected to open early next year. We've reached out to Uber for comment.


Uber investors demand firm sells its self-driving car division

Daily Mail - Science & tech

Investors have told Uber Technologies Inc it would be wise to sell off its self-driving car unit. The calls come after it racked up losses of $125 million to $200 million each quarter for the past 18 months, tech news site The Information reported on Wednesday, citing an unnamed person familiar with the issue. Uber did not immediately respond to a Reuters request for comment. Uber is due to release its second-quarter earnings to investors later on Wednesday. Uber is only just getting its autonomous vehicles back on the road for the first time since one of its driverless cars fatally injured a pedestrian.


Uber resumes testing for autonomous cars in 'manual mode'

Daily Mail - Science & tech

Uber is getting its autonomous vehicles back on the road for the first time since one of its driverless cars fatally injured a pedestrian. The latest tests will see the autonomous vehicles operated in 'manual mode', with human drivers behind the wheel operating the vehicle at all times. Although the vehicles will not be navigating independently, the latest round of tests will allow Uber to gather data on a number of scenarios that can later be recreated in computer simulations. The 'manual mode' tests will also allow Uber to develop more accurate mapping for the vehicles. Elaine Herzberg, 49, was killed in Arizona on March 18 when an Uber Volvo SUV failed to apply the brakes after it registered her stepping into the road to cross.