navigation skill
Human-like Navigation in a World Built for Humans
Chandaka, Bhargav, Wang, Gloria X., Chen, Haozhe, Che, Henry, Zhai, Albert J., Wang, Shenlong
When navigating in a man-made environment they haven't visited before, like an office building, humans employ behaviors such as reading signs and asking others for directions. These behaviors help humans reach their destinations efficiently by reducing the need to search through large areas. Existing robot navigation systems lack the ability to execute such behaviors and are thus highly inefficient at navigating within large environments. We present ReasonNav, a modular navigation system which integrates these human-like navigation skills by leveraging the reasoning capabilities of a vision-language model (VLM). We design compact input and output abstractions based on navigation landmarks, allowing the VLM to focus on language understanding and reasoning. We evaluate ReasonNav on real and simulated navigation tasks and show that the agent successfully employs higher-order reasoning to navigate efficiently in large, complex buildings.
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.95)
- Information Technology > Artificial Intelligence > Vision (0.89)
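The landmark-based abstraction described above can be sketched as follows. This is a toy illustration only, not the paper's actual interface: the prompt format, landmark fields, and the keyword heuristic standing in for the VLM are all our assumptions.

```python
# Toy sketch of a landmark-based navigation abstraction (hypothetical,
# not ReasonNav's actual interface). Detected landmarks are serialized
# into a compact text prompt; a reasoning model (stubbed here with a
# keyword heuristic) returns the index of the landmark to head for next.

def build_prompt(goal, landmarks):
    """Serialize the goal and visible landmarks into a compact prompt."""
    lines = [f"Goal: {goal}", "Visible landmarks:"]
    for i, lm in enumerate(landmarks):
        lines.append(f"  [{i}] {lm['label']} ({lm['distance_m']:.1f} m)")
    return "\n".join(lines)

def stub_vlm(prompt):
    """Stand-in for a VLM: pick the landmark whose label shares a word
    with the goal, falling back to the first listed landmark."""
    lines = prompt.splitlines()
    goal_words = set(lines[0].removeprefix("Goal: ").lower().split())
    for line in lines[2:]:
        idx = int(line.split("]")[0].split("[")[1])
        label = line.split("] ")[1].rsplit(" (", 1)[0].lower()
        if goal_words & set(label.split()):
            return idx
    return 0

landmarks = [
    {"label": "elevator lobby", "distance_m": 4.0},
    {"label": "sign: conference room 210", "distance_m": 7.5},
    {"label": "open corridor", "distance_m": 2.0},
]
choice = stub_vlm(build_prompt("find conference room 210", landmarks))
```

The compact text abstraction is the point: the model never sees raw geometry, only a short symbolic summary it can reason over.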
Skill Q-Network: Learning Adaptive Skill Ensemble for Mapless Navigation in Unknown Environments
Seong, Hyunki, Shim, David Hyunchul
This paper focuses on the acquisition of mapless navigation skills within unknown environments. We introduce the Skill Q-Network (SQN), a novel reinforcement learning method featuring an adaptive skill ensemble mechanism. Unlike existing methods, our model concurrently learns a high-level skill decision process alongside multiple low-level navigation skills, all without the need for prior knowledge. Leveraging a tailored reward function for mapless navigation, the SQN is capable of learning adaptive maneuvers that incorporate both exploration and goal-directed skills, enabling effective navigation in new environments. Our experiments demonstrate that our SQN can effectively navigate complex environments, achieving 40% higher performance than baseline models. Without explicit guidance, SQN discovers how to combine low-level skill policies, showcasing both goal-directed navigation to reach destinations and exploration maneuvers to escape from local-minimum regions in challenging scenarios. Remarkably, our adaptive skill ensemble method enables zero-shot transfer to out-of-distribution domains, characterized by unseen observations from non-convex obstacles or uneven, subterranean-like environments.
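The core mechanism, a high-level Q-function scoring low-level skills, can be illustrated with a minimal sketch. Everything here is assumed for illustration: the paper learns the Q-function with deep RL, whereas this stub hand-codes the scores.

```python
# Minimal sketch in the spirit of SQN's adaptive skill ensemble
# (illustrative only; the actual Q-function is a learned network).
# A high-level Q-function scores each low-level skill for the current
# state, and the agent executes the argmax skill.

SKILLS = ["goal_directed", "explore"]

def skill_q_values(state):
    """Stand-in for a learned Q-network: prefer the goal-directed skill
    unless the agent has stalled, in which case exploration scores
    higher so the agent can escape a local-minimum region."""
    stuck = state["steps_without_progress"] > 10
    return {"goal_directed": 0.2 if stuck else 1.0,
            "explore": 0.8 if stuck else 0.1}

def select_skill(state):
    q = skill_q_values(state)
    return max(SKILLS, key=lambda s: q[s])
```

The argmax-over-skills decision is what distinguishes this from a monolithic policy: each skill stays simple, and adaptivity comes from the switch.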
ASC: Adaptive Skill Coordination for Robotic Mobile Manipulation
Yokoyama, Naoki, Clegg, Alex, Truong, Joanne, Undersander, Eric, Yang, Tsung-Yen, Arnaud, Sergio, Ha, Sehoon, Batra, Dhruv, Rai, Akshara
We present Adaptive Skill Coordination (ASC), an approach for accomplishing long-horizon tasks like mobile pick-and-place (i.e., navigating to an object, picking it, navigating to another location, and placing it). ASC consists of three components: (1) a library of basic visuomotor skills (navigation, pick, place), (2) a skill coordination policy that chooses which skill to use when, and (3) a corrective policy that adapts pre-trained skills in out-of-distribution states. All components of ASC rely only on onboard visual and proprioceptive sensing, without requiring detailed maps with obstacle layouts or precise object locations, easing real-world deployment. We train ASC in simulated indoor environments, and deploy it zero-shot (without any real-world experience or fine-tuning) on the Boston Dynamics Spot robot in eight novel real-world environments (one apartment, one lab, two microkitchens, two lounges, one office space, one outdoor courtyard). In rigorous quantitative comparisons in two environments, ASC achieves near-perfect performance (59/60 episodes, or 98%), while sequentially executing skills succeeds in only 44/60 (73%) episodes. Extensive perturbation experiments show that ASC is robust to hand-off errors, changes in the environment layout, dynamic obstacles (e.g., people), and unexpected disturbances. Supplementary videos at adaptiveskillcoordination.github.io.
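The three-component structure (skill library, coordinator, corrective policy) can be sketched in a few lines. All names, rules, and thresholds below are our assumptions, not the paper's learned policies.

```python
# Hypothetical sketch of the ASC structure: a skill library, a
# coordinator that picks which skill runs, and a corrective policy
# that nudges the chosen skill's action in out-of-distribution states.

def nav_skill(obs):   return {"base_vel": 0.5, "arm": None}
def pick_skill(obs):  return {"base_vel": 0.0, "arm": "close_gripper"}
def place_skill(obs): return {"base_vel": 0.0, "arm": "open_gripper"}

SKILLS = {"navigate": nav_skill, "pick": pick_skill, "place": place_skill}

def coordinator(obs):
    """Choose which skill to run from onboard sensing (toy rules;
    ASC learns this coordination policy)."""
    if obs["holding_object"]:
        return "place" if obs["at_target"] else "navigate"
    return "pick" if obs["at_object"] else "navigate"

def corrective(action, obs):
    """Adapt the pre-trained skill's action in an OOD state, e.g. slow
    the base when an obstacle is unexpectedly close."""
    if obs.get("obstacle_dist_m", 10.0) < 0.5:
        action = dict(action, base_vel=min(action["base_vel"], 0.1))
    return action

def step(obs):
    name = coordinator(obs)
    return name, corrective(SKILLS[name](obs), obs)
```

The corrective policy wrapping every skill output is what makes hand-offs between skills robust: a bad hand-off state is handled at execution time rather than by retraining the skills.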
Adaptive and Explainable Deployment of Navigation Skills via Hierarchical Deep Reinforcement Learning
Lee, Kyowoon, Kim, Seongun, Choi, Jaesik
For robotic vehicles to navigate robustly and safely in unseen environments, it is crucial to decide the most suitable navigation policy. However, most existing deep reinforcement learning based navigation policies are trained with a hand-engineered curriculum and reward function which are difficult to deploy in a wide range of real-world scenarios. In this paper, we propose a framework to learn a family of low-level navigation policies and a high-level policy for deploying them. The main idea is that, instead of learning a single navigation policy with a fixed reward function, we simultaneously learn a family of policies that exhibit different behaviors with a wide range of reward functions. We then train the high-level policy which adaptively deploys the most suitable navigation skill. We evaluate our approach in simulation and the real world and demonstrate that our method can learn diverse navigation skills and adaptively deploy them. We also illustrate that our proposed hierarchical learning framework presents explainability by providing semantics for the behavior of an autonomous agent.
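The "family of policies over a range of reward functions" idea can be made concrete with a small sketch. The reward terms, weights, and deployment rule below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: a navigation reward parameterized by a weight
# vector w trading off goal progress against obstacle clearance.
# Different w values yield family members with different behaviors;
# a high-level policy (here a toy rule, learned in the actual
# framework) picks which member to deploy.

def reward(progress, clearance, w):
    """Weighted navigation reward: w = (w_progress, w_safety)."""
    return w[0] * progress + w[1] * clearance

POLICY_FAMILY = {
    "aggressive": (1.0, 0.1),   # favors goal progress
    "cautious":   (0.3, 1.0),   # favors clearance from obstacles
}

def deploy(obs):
    """Toy high-level deployment rule: go cautious in clutter,
    aggressive in open space."""
    return "cautious" if obs["min_clearance_m"] < 0.8 else "aggressive"
```

Because each family member is tied to an explicit weight vector, the high-level choice is interpretable: deploying "cautious" says, in reward terms, why the agent is behaving as it does.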
Dimension-variable Mapless Navigation with Deep Reinforcement Learning
Zhang, Wei, Zhang, Yunfeng, Liu, Ning, Ren, Kai
Deep reinforcement learning (DRL) has exhibited considerable promise in the training of control agents for mapless robot navigation. However, DRL-trained agents are limited to the specific robot dimensions used during training, hindering their applicability when the robot's dimension changes for task-specific requirements. To overcome this limitation, we propose a dimension-variable robot navigation method based on DRL. Our approach involves training a meta agent in simulation and subsequently transferring the meta skill to a dimension-varied robot using a technique called dimension-variable skill transfer (DVST). During the training phase, the meta agent for the meta robot learns self-navigation skills with DRL. In the skill-transfer phase, observations from the dimension-varied robot are scaled and transferred to the meta agent, and the resulting control policy is scaled back to the dimension-varied robot. Through extensive simulated and real-world experiments, we demonstrate that dimension-varied robots can successfully navigate in unknown and dynamic environments without any retraining. The results show that our work substantially expands the applicability of DRL-based navigation methods, enabling their use on robots of different dimensions. The video of our experiments can be found in the supplementary file.
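The scale-in, scale-out structure of DVST can be sketched under a simple assumption: observations and linear velocities scale with the ratio of the meta robot's size to the deployed robot's size. The paper's actual scaling rules may differ; everything below is illustrative.

```python
# Sketch of dimension-variable skill transfer (DVST) under an assumed
# linear scaling rule: lidar ranges and the goal are scaled into the
# meta robot's frame, and the meta agent's velocity command is scaled
# back to the deployed robot. (Not the paper's exact formulation.)

META_RADIUS = 0.5  # radius of the robot the meta agent was trained on (m)

def to_meta_obs(lidar_ranges, goal_xy, robot_radius):
    """Scale the deployed robot's observations into the meta frame."""
    k = META_RADIUS / robot_radius
    return [r * k for r in lidar_ranges], (goal_xy[0] * k, goal_xy[1] * k)

def from_meta_action(linear_vel, angular_vel, robot_radius):
    """Scale the meta agent's command back to the real robot: linear
    velocity scales with size, angular velocity is dimensionless."""
    k = robot_radius / META_RADIUS
    return linear_vel * k, angular_vel
```

A smaller robot thus sees the world "enlarged" into the meta frame and receives proportionally slower commands back, so one trained policy serves many body sizes.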
Drones navigate unseen environments with liquid neural networks
Makram Chahine, a PhD student in electrical engineering and computer science and an MIT CSAIL affiliate, leads a drone used to test liquid neural networks. In the vast, expansive skies where birds once ruled supreme, a new crop of aviators is taking flight. These pioneers of the air are not living creatures but products of deliberate innovation: drones, avian-inspired marvels that soar through the sky, guided by liquid neural networks to navigate ever-changing and unseen environments with precision and ease. Inspired by the adaptable nature of organic brains, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a method for robust flight navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments.
Multi-skill Mobile Manipulation for Object Rearrangement
Gu, Jiayuan, Chaplot, Devendra Singh, Su, Hao, Malik, Jitendra
We study a modular approach to tackle long-horizon mobile manipulation tasks for object rearrangement, which decomposes a full task into a sequence of subtasks. To tackle the entire task, prior work chains multiple stationary manipulation skills with a point-goal navigation skill, which are learned individually on subtasks. Although more effective than monolithic end-to-end RL policies, this framework suffers from compounding errors in skill chaining, e.g., navigating to a bad location where a stationary manipulation skill cannot reach its target to manipulate. To address this, we propose that the manipulation skills should include mobility, giving them flexibility to interact with the target object from multiple locations, and that the navigation skill should have multiple end points which lead to successful manipulation. We operationalize these ideas by implementing mobile manipulation skills rather than stationary ones and by training the navigation skill with a region goal instead of a point goal. We evaluate our multi-skill mobile manipulation method M3 on 3 challenging long-horizon mobile manipulation tasks in the Home Assistant Benchmark (HAB), and show superior performance as compared to the baselines.
- Information Technology > Artificial Intelligence > Robots > Robot Planning & Action (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Planning & Scheduling (0.46)
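The region-goal versus point-goal distinction above is easy to illustrate. The function names and tolerances below are ours, not from the M3 paper.

```python
# Toy illustration of region goals vs. point goals for navigation
# (illustrative names and tolerances). A point goal succeeds only
# near one exact position; a region goal accepts any end position
# inside a disk around the target, giving the downstream mobile
# manipulation skill multiple workable starting poses.

import math

def point_goal_reached(pos, goal, tol=0.1):
    return math.dist(pos, goal) <= tol

def region_goal_reached(pos, region_center, region_radius):
    return math.dist(pos, region_center) <= region_radius
```

The larger success set is what reduces compounding error in skill chaining: many navigation end states, not just one, hand off cleanly to manipulation.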
Sat-navs are robbing us of our sense of direction: Expert warns we risk losing our navigation skills if we keep relying on GPS
Earlier this year a US tourist in Iceland drove 226 miles (364 km) too far because he was following his sat nav, while tourists in Wales looking for the beautiful falls in the Neath Valley often end up in a nearby cul-de-sac because the two locations share the same postcode. Although these stories are amusing, for everyone apart from those involved, they represent a worrying trend of people 'over-relying' on GPS. Now an expert warns that this dependence is not only making us lazy; it could rob us of our innate navigation skills. Our increasing reliance on satellite navigation is coming at a cost and is harming our own ability to navigate, says satellite communication and navigation consultant Roger McKinlay.
- Europe > United Kingdom > Wales (0.25)
- Europe > Iceland > Southern Peninsula Region > Keflavik (0.05)