

Dead Reckoning


Good Weights: Proactive, Adaptive Dead Reckoning Fusion for Continuous and Robust Visual SLAM

Du, Yanwei, Peng, Jing-Chen, Vela, Patricio A.

arXiv.org Artificial Intelligence

Given that Visual SLAM relies on appearance cues for localization and scene understanding, texture-less or visually degraded environments (e.g., plain walls or low lighting) lead to poor pose estimation and track loss. However, robots are typically equipped with sensors that provide some form of dead reckoning odometry with reasonable short-term performance but unreliable long-term performance. The Good Weights (GW) algorithm described here provides a framework to adaptively integrate dead reckoning (DR) with passive visual SLAM for continuous and accurate frame-level pose estimation. Importantly, it describes how all modules in a comprehensive SLAM system must be modified to incorporate DR into the design. Adaptive weighting increases DR influence when visual tracking is unreliable and reduces it when visual feature information is strong, maintaining the pose track without overreliance on DR. Good Weights yields a practical solution for mobile navigation that improves visual SLAM performance and robustness. Experiments on collected datasets and in real-world deployment demonstrate the benefits of Good Weights.

Keywords: Visual SLAM, dead reckoning, feature tracking, optimization

Visual Simultaneous Localization and Mapping (SLAM) is often formulated as a nonlinear least-squares problem, where camera poses and 3D landmarks are jointly estimated from visual observations [1]-[3]. Optimization accuracy and stability depend on the sufficiency and reliability of feature associations across frames, both short-term and long-term.
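The adaptive-weighting idea can be sketched in a few lines. The function name, thresholds, and simple linear blend below are illustrative assumptions only; the actual GW algorithm applies its weights inside the full SLAM optimization rather than on raw pose estimates:

```python
import numpy as np

def fuse_pose(visual_xy, dr_xy, n_tracked, n_min=20, n_good=100):
    """Blend a visual pose estimate with dead-reckoning odometry.

    The DR weight grows as the number of tracked visual features drops
    below n_good, and dominates entirely below n_min (near track loss).
    All names and thresholds here are illustrative, not from the paper.
    """
    # Confidence in vision: 0 when features are scarce, 1 when plentiful.
    w_vis = np.clip((n_tracked - n_min) / (n_good - n_min), 0.0, 1.0)
    return w_vis * np.asarray(visual_xy) + (1.0 - w_vis) * np.asarray(dr_xy)

# Strong visual tracking: the estimate follows the visual pose.
print(fuse_pose([1.0, 2.0], [1.2, 2.2], n_tracked=150))  # -> [1. 2.]
# Degraded scene: the estimate falls back to dead reckoning.
print(fuse_pose([1.0, 2.0], [1.2, 2.2], n_tracked=10))   # -> [1.2 2.2]
```

The key design point the abstract makes is that the weight is proactive and adaptive, so DR never dominates when visual information is strong.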


Quadrotor Neural Dead Reckoning in Periodic Trajectories

Massas, Shira, Klein, Itzik

arXiv.org Artificial Intelligence

In real-world scenarios, environmental or hardware constraints can force a quadrotor to navigate in pure inertial mode, whether operating indoors or outdoors. To mitigate inertial drift, end-to-end neural network approaches combined with quadrotor periodic trajectories have been suggested: the quadrotor's traveled distance is regressed by a network and combined with model-based inertial heading estimation to obtain the position vector. To further enhance positioning performance, in this paper we propose a neural dead reckoning approach for quadrotors flying periodic trajectories. Here, the inertial readings are fed into a simple and efficient network that directly estimates the quadrotor position vector. Our approach was evaluated on two different quadrotors, one operating indoors and the other outdoors. It improves on the positioning accuracy of other deep-learning approaches, achieving an average 27% error reduction outdoors and an average 79% reduction indoors, while requiring only software modifications. With the improved positioning accuracy, the quadrotor can seamlessly perform its tasks.
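The distance-plus-heading baseline the abstract contrasts against can be illustrated with a toy integration step. In practice the per-window distances would come from the regression network and the headings from the inertial model; here both are simply given:

```python
import math

def integrate_track(distances, headings_rad, start=(0.0, 0.0)):
    """Dead-reckon a 2-D position track from per-window distance estimates
    (e.g. regressed by a network) and heading estimates.
    A toy illustration of the distance+heading integration step only.
    """
    x, y = start
    track = [(x, y)]
    for d, psi in zip(distances, headings_rad):
        x += d * math.cos(psi)
        y += d * math.sin(psi)
        track.append((x, y))
    return track

# Two 1 m steps east, then one 1 m step north.
print(integrate_track([1.0, 1.0, 1.0], [0.0, 0.0, math.pi / 2]))
```

The paper's contribution is to skip this two-stage pipeline and regress the position vector directly from the inertial readings.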


Deep Learning Assisted Inertial Dead Reckoning and Fusion

Hurwitz, Dror, Cohen, Nadav, Klein, Itzik

arXiv.org Artificial Intelligence

The interest in mobile platforms across a variety of applications has increased significantly in recent years. One of the reasons is the ability to achieve accurate navigation by using low-cost sensors. To this end, inertial sensors are fused with global navigation satellite system (GNSS) signals. GNSS outages during platform operation can result in pure inertial navigation, causing the navigation solution to drift. In such situations, periodic trajectories with dedicated algorithms have been suggested to mitigate the drift. With periodic dynamics, inertial deep learning approaches can capture the motion more accurately and provide accurate dead reckoning for drones and mobile robots. In this paper, we propose approaches to extend deep-learning-assisted inertial sensing and fusion capabilities during periodic motion. We begin by demonstrating that fusion between GNSS and inertial sensors in periodic trajectories achieves better accuracy compared to straight-line trajectories. Next, we propose an empowered network architecture to accurately regress the change in distance of the platform. Utilizing this network, we derive a hybrid approach for a neural-inertial fusion filter. Finally, we utilize this approach for situations when GNSS is available and show its benefits. A dataset of 337 minutes of data collected from inertial sensors mounted on a mobile robot and a quadrotor is used to evaluate our approaches.


SwarMer: A Decentralized Localization Framework for Flying Light Specks

Alimohammadzadeh, Hamed, Ghandeharizadeh, Shahram

arXiv.org Artificial Intelligence

Swarm-Merging, SwarMer, is a decentralized framework to localize Flying Light Specks (FLSs) to render 2D and 3D shapes. An FLS is a miniature-sized drone equipped with one or more light sources to generate different colors and textures with adjustable brightness. It is battery-powered and network-enabled, with storage and processing capability to implement a decentralized algorithm such as SwarMer. An FLS is unable to render a shape by itself. SwarMer uses the inter-FLS relationship effect of its organizational framework to compensate for the simplicity of each individual FLS, enabling a swarm of cooperating FLSs to render complex shapes. SwarMer is resilient both to FLSs failing and to FLSs leaving to charge their batteries. It is fast, highly accurate, and scales to remain effective when a shape consists of a large number of FLSs.


'Mission: Impossible--Dead Reckoning' Is the Perfect AI Panic Movie

WIRED

American action movie villains have always acted as a sort of paranoia litmus test, capturing a snapshot of the particular anxieties plaguing the country and its citizens at any given time. In the 1990s and '00s, with the Red Menace long forgotten, movies leaned heavily on the awful "bad Arab" trope, pulling their villains from the Middle East. Other recent smash-'em-ups have made bad guys out of rogue spies, shadowy cyber terrorists, and self-interested arms dealers, all common players in the global news landscape. But for Mission: Impossible--Dead Reckoning Part One, out this week, writers Bruce Geller, Erik Jendresen, and Christopher McQuarrie (who also directed the movie) made their big bad--known as The Entity--out of a slightly more amorphous fear: that of an all-powerful, all-seeing, sentient AI. It has access to anything with an online network and can use those evil techno powers to manipulate everything from global military superpowers to a grandma with a gun.


The New Mission: Impossible Marks the Triumphant Return of Cinema's Greatest Special Effect

Slate

A year after saving the summer box office with the smash hit Top Gun: Maverick, Tom Cruise is back for another round of speedy-motorcycle riding, choppy-handed running, and look-Ma-no-CGI stuntwork in Mission: Impossible--Dead Reckoning Part One, the seventh and supposedly penultimate entry in the now 27-year-old action franchise. In the able hands of Christopher McQuarrie, who has directed the past three M:I movies in addition to writing or co-writing the past four, Dead Reckoning displays the serene if at times demented confidence of a series that's found its voice. Even at 163 minutes, it somehow moves with the no-nonsense briskness of a good airport thriller. To be clear, there is some nonsense involved: Dead Reckoning's plot hinges on an espionage-related MacGuffin so technologically advanced it might as well be magical. And there are several of the franchise's time-honored and much-memed "mask reveals," in which a character suddenly rips off their own face to reveal another cast member underneath.


Learning Position From Vehicle Vibration Using an Inertial Measurement Unit

Or, Barak, Segol, Nimrod, Eweida, Areej, Freydin, Maxim

arXiv.org Artificial Intelligence

This paper presents a novel approach to vehicle positioning that operates without reliance on the global navigation satellite system (GNSS). Traditional GNSS approaches are vulnerable to interference in certain environments, rendering them unreliable in situations such as urban canyons, under flyovers, or in low-reception areas. This study proposes a vehicle positioning method based on learning the road signature from accelerometer and gyroscope measurements obtained by an inertial measurement unit (IMU) sensor. In our approach, the route is divided into segments, each with a distinct signature that the IMU can detect through the vibrations of a vehicle in response to subtle changes in the road surface. The study presents two different data-driven methods for learning the road segment from IMU measurements. One method is based on convolutional neural networks and the other on an ensemble random forest applied to handcrafted features. Additionally, the authors present an algorithm to deduce the position of a vehicle in real time using the learned road segment. The approach was applied in two positioning tasks: (i) a car along a 6 km route in a dense urban area; (ii) an e-scooter on a 1 km route that combined road and pavement surfaces. The mean error between the proposed method's position and the ground truth was approximately 50 m for the car and 30 m for the e-scooter. Compared to a solution based on time integration of the IMU measurements, the proposed approach achieves a mean error more than 5 times smaller for the e-scooter and 20 times smaller for the car.
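The road-signature idea can be sketched as a nearest-signature lookup over handcrafted features. The feature set below (per-axis mean and standard deviation) and the Euclidean matching rule are stand-in assumptions for illustration, not the paper's actual features or classifiers:

```python
import numpy as np

def segment_features(imu_window):
    """Handcrafted features from an IMU window: per-axis mean and std.
    Illustrative stand-in for the paper's feature set."""
    w = np.asarray(imu_window)
    return np.concatenate([w.mean(axis=0), w.std(axis=0)])

def match_segment(imu_window, signature_db):
    """Return the index of the stored road-segment signature closest
    (in Euclidean distance) to the features of the current window."""
    f = segment_features(imu_window)
    dists = [np.linalg.norm(f - s) for s in signature_db]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
# Toy database: one feature-vector signature per road segment.
db = [segment_features(rng.normal(loc=m, size=(50, 3))) for m in (0.0, 1.0, 2.0)]
window = rng.normal(loc=1.0, size=(50, 3))  # vibrations resembling segment 1
print(match_segment(window, db))
```

With the matched segment index in hand, the vehicle's position follows from where that segment lies along the known route.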


Dead Reckoning is Still Alive!

#artificialintelligence

Many drivers are highly curious today about the Autonomous Vehicle (AV) dream. Will this dream come true, and when? One of the core technologies that needs to be implemented in AVs is the inertial navigation system (INS). These systems integrate many sensors in what are called "sensor fusion" schemes. These sensors include LiDAR, cameras, GPS receivers, radars, accelerometers, gyroscopes, and many more. The general sensor fusion scheme combines all the sensors using a very common algorithm, the Kalman filter, which fuses them optimally (in a mean-squared-error sense).
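A textbook one-dimensional Kalman filter shows the predict/correct cycle behind that fusion. This is a toy with hand-picked noise values, nothing like an automotive-grade INS, but the gain computation is the mean-squared-error-optimal blend the article refers to:

```python
def kalman_1d(z_measurements, u_velocity, dt=1.0, q=0.01, r=1.0):
    """Minimal 1-D Kalman filter: predict with a constant-velocity motion
    model, correct with noisy position measurements.
    q = process noise variance, r = measurement noise variance (assumed).
    """
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in z_measurements:
        # Predict: propagate the state with the motion model, inflate variance.
        x = x + u_velocity * dt
        p = p + q
        # Correct: blend in the measurement, weighted by the Kalman gain.
        k = p / (p + r)      # gain -> 1 when the model is uncertain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# True motion: 1 m/s; measurements are the true position plus noise.
est = kalman_1d([1.2, 1.8, 3.1, 3.9], u_velocity=1.0)
print([round(v, 2) for v in est])
```

Real INS fusion stacks extend this to full 3-D pose with extended or unscented variants, but the gain-weighted blend of model and measurement is the same.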


Cross-view and Cross-domain Underwater Localization based on Optical Aerial and Acoustic Underwater Images

Santos, Matheus M. Dos, De Giacomo, Giovanni G., Drews-Jr, Paulo L. J., Botelho, Silvia S. C.

arXiv.org Artificial Intelligence

Abstract-- Cross-view image matches have been widely explored on terrestrial image localization using aerial images from drones or satellites. This study expands the cross-view image match idea and proposes a cross-domain and cross-view localization framework. The method identifies the correlation between color aerial images and underwater acoustic images to improve the localization of underwater vehicles that travel in partially structured environments such as harbors and marinas. The approach is validated on a real dataset acquired by an underwater vehicle in a marina. The results show an improvement in the localization when compared to the dead reckoning of the vehicle.


Pedestrian Tracking with Gated Recurrent Units and Attention Mechanisms

Elhousni, Mahdi, Huang, Xinming

arXiv.org Machine Learning

Pedestrian tracking has long been considered an important problem, especially in security applications. Previously, many approaches have been proposed with various types of sensors. One popular method is Pedestrian Dead Reckoning (PDR) [1], which is based on the inertial measurement unit (IMU) sensor. However, PDR is an integration- and threshold-based method, which suffers from accumulated errors and low accuracy. In this paper, we propose a novel method in which the sensor data is fed into a deep learning model to predict the displacements and orientations of the pedestrian. We also devise a new apparatus to collect and construct databases containing synchronized IMU sensor data and precise locations measured by a LIDAR. The preliminary results are promising, and we plan to push this forward by collecting more data and adapting the deep learning model for all general pedestrian motions.
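The accumulated-error problem the abstract mentions is easy to demonstrate: double-integrating acceleration turns even a tiny constant sensor bias into quadratically growing position drift. The bias value and sample rate below are illustrative assumptions:

```python
def integrate_position(accels, dt=0.01):
    """Double-integrate acceleration samples into position (strapdown toy).
    Shows why a constant bias b drifts as roughly 0.5 * b * t**2.
    """
    v = p = 0.0
    for a in accels:
        v += a * dt   # acceleration -> velocity
        p += v * dt   # velocity -> position
    return p

bias = 0.05                      # m/s^2, a small illustrative accelerometer bias
n = 6000                         # 60 s of samples at 100 Hz
drift = integrate_position([bias] * n)
print(round(drift, 1))           # roughly 0.5 * 0.05 * 60**2 = 90 m
```

Sixty seconds of a 0.05 m/s^2 bias already yields tens of meters of drift, which is why learned displacement regression is attractive over raw integration.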