Google Maps AR navigation is rolling out to a handful of users


Google Maps' augmented reality navigation is finally rolling out, several months after its debut, although you might still have to wait a while. The company told the Wall Street Journal that the walking-focused feature will be available shortly, but only to Local Guides (community reviewers) at first. The feature will need "more testing" before it's available to everyone else, Google said. Still, this suggests AR route-finding is much closer to becoming a practical reality, and the core functionality remains the same as in the original demo.

Artificial Intelligence: Taking driverless navigation up a gear


AI is not only vital to successful and efficient navigation; it's also a crucial element in making the journey from A to B as safe and comfortable as possible. The biggest benefit of AI is its ability to boost efficiency and handle complex tasks that humans cannot easily manage. For navigation, this means evaluating real-time conditions and providing optimal route guidance that helps the driver avoid traffic and other road hazards. Implementing AI in cars, however, is no easy task. When control over navigation is taken out of the driver's hands, there's a need to ensure that the data the AI is working with is accurate and up to date.

Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation Artificial Intelligence

We propose an end-to-end deep learning model for translating free-form natural language instructions to a high-level plan for behavioral robot navigation. The proposed model uses attention mechanisms to connect information from user instructions with a topological representation of the environment. To evaluate this model, we collected a new dataset for the translation problem containing 11,051 pairs of user instructions and navigation plans. Our results show that the proposed model outperforms baseline approaches on the new dataset. Overall, our work suggests that a topological map of the environment can serve as a relevant knowledge base for translating natural language instructions into a sequence of navigation behaviors.
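The core idea of connecting instruction embeddings to a topological map via attention can be sketched as follows. This is a minimal, hypothetical illustration using scaled dot-product attention over node embeddings; the function name, dimensions, and toy data are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_over_map(instr_emb, node_embs):
    """Score each topological-map node against the instruction embedding
    with scaled dot-product attention, then return softmax weights and a
    weighted context vector over the node embeddings.
    Hypothetical sketch, not the authors' exact model."""
    d = instr_emb.shape[-1]
    scores = node_embs @ instr_emb / np.sqrt(d)   # one score per node
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over nodes
    context = weights @ node_embs                 # attention-weighted summary
    return weights, context

# toy example: 4 map nodes with 8-dimensional embeddings
rng = np.random.default_rng(0)
nodes = rng.normal(size=(4, 8))
instr = rng.normal(size=8)
w, ctx = attention_over_map(instr, nodes)
```

In a full model, the context vector would condition a decoder that emits the sequence of navigation behaviors.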

Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation Artificial Intelligence

Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then use a localization and planning method to navigate through the internal map. However, these approaches often include a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real-world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show our approach outperforms single-step and $N$-step double Q-learning. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at
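One of the model-free instantiations the paper compares against, the N-step double Q-learning bootstrap target, can be sketched as below. This is a hedged illustration of the standard technique: the function name and toy numbers are assumptions, and the paper's generalized computation graph interpolates between targets like this one and model-based rollouts.

```python
import numpy as np

def n_step_double_q_target(rewards, next_q_online, next_q_target,
                           gamma=0.99, done=False):
    """N-step double Q-learning target: accumulate N discounted rewards,
    then bootstrap with the target network's value of the action chosen
    by the online network (the 'double' trick that reduces
    overestimation). Hypothetical sketch of the standard method."""
    n = len(rewards)
    ret = sum((gamma ** i) * r for i, r in enumerate(rewards))
    if not done:
        a_star = int(np.argmax(next_q_online))        # online net selects
        ret += (gamma ** n) * next_q_target[a_star]   # target net evaluates
    return ret

# toy 2-step example with two actions
target = n_step_double_q_target(
    rewards=[1.0, 1.0],
    next_q_online=np.array([0.0, 2.0]),
    next_q_target=np.array([5.0, 3.0]),
    gamma=0.9,
)
```

Here the online network picks action 1, but the value used for bootstrapping comes from the target network's estimate for that action.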

Vision-based Navigation Using Deep Reinforcement Learning Artificial Intelligence

Jonáš Kulhánek, Erik Derner, Tim de Bruin, and Robert Babuška

Abstract -- Deep reinforcement learning (RL) has been successfully applied to a variety of game-like environments. However, applying deep RL to visual navigation in realistic environments is a challenging task. We propose a novel learning architecture capable of navigating an agent, e.g. a mobile robot, to a target given by an image. To achieve this, we have extended the batched A2C algorithm with auxiliary tasks designed to improve visual navigation performance. We propose three auxiliary tasks: predicting the segmentation of the observation image, predicting the segmentation of the target image, and predicting the depth map. These tasks enable the use of supervised learning to pre-train a large part of the network and to reduce the number of training steps substantially. Training performance has been further improved by gradually increasing the environment complexity over time. An efficient neural network structure is proposed, which is capable of learning for multiple targets in multiple environments. Our method operates in continuous state spaces and, on the AI2-THOR environment simulator, outperforms state-of-the-art goal-oriented visual navigation methods from the literature.

INTRODUCTION Visual navigation is the problem of navigating an agent, e.g. a mobile robot, in an environment using camera input only. The agent is given a target image (an image it will see from the target position), and its goal is to move from its current position to the target by applying a sequence of actions, based on the camera observations only.
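The multi-head layout described above, a shared visual encoder feeding policy and value heads alongside auxiliary segmentation and depth heads, can be sketched as follows. All shapes, layer sizes, and names are illustrative assumptions (a single linear layer per head), not the paper's exact network.

```python
import numpy as np

class AuxA2CHeads:
    """Shared feature vector feeding a policy head, a value head, and two
    auxiliary heads (segmentation and depth prediction), mirroring the
    auxiliary-task layout described above. Hypothetical sketch with
    linear heads; the real network uses convolutional/recurrent layers."""
    def __init__(self, feat_dim=128, n_actions=4, aux_dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W_policy = rng.normal(scale=0.1, size=(feat_dim, n_actions))
        self.W_value = rng.normal(scale=0.1, size=(feat_dim, 1))
        self.W_seg = rng.normal(scale=0.1, size=(feat_dim, aux_dim))
        self.W_depth = rng.normal(scale=0.1, size=(feat_dim, aux_dim))

    def forward(self, features):
        logits = features @ self.W_policy   # A2C policy logits
        value = float(features @ self.W_value)  # A2C state value
        seg = features @ self.W_seg         # auxiliary: segmentation logits
        depth = features @ self.W_depth     # auxiliary: depth prediction
        return logits, value, seg, depth

heads = AuxA2CHeads()
logits, value, seg, depth = heads.forward(np.zeros(128) + 0.5)
```

Because the auxiliary heads have ground-truth labels available from the simulator, the shared encoder can be pre-trained with supervised losses before RL fine-tuning, which is how the abstract explains the reduction in training steps.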