traversability
ProTerrain: Probabilistic Physics-Informed Rough Terrain World Modeling
Raja, Golnaz, Agishev, Ruslan, Prágr, Miloš, Pajarinen, Joni, Zimmermann, Karel, Singh, Arun Kumar, Ghabcheloo, Reza
Uncertainty-aware robot motion prediction is crucial for downstream traversability estimation and safe autonomous navigation in unstructured, off-road environments, where terrain is heterogeneous and perceptual uncertainty is high. Most existing methods assume deterministic or spatially independent terrain uncertainties, ignoring the inherent local correlations of 3D spatial data and often producing unreliable predictions. In this work, we introduce an efficient probabilistic framework that explicitly models spatially correlated aleatoric uncertainty over terrain parameters as a probabilistic world model and propagates this uncertainty through a differentiable physics engine for probabilistic trajectory forecasting. By leveraging structured convolutional operators, our approach provides high-resolution multivariate predictions at manageable computational cost. Experimental evaluation on a publicly available dataset shows significantly improved uncertainty estimation and trajectory prediction accuracy over aleatoric uncertainty estimation baselines.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Portugal (0.04)
- Europe > Finland > Pirkanmaa > Tampere (0.04)
- Europe > Estonia > Tartu County > Tartu (0.04)
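The core idea in the ProTerrain abstract — spatially correlated aleatoric uncertainty over terrain, produced with convolutional operators — can be sketched by convolving white noise with a smoothing kernel so that nearby cells receive correlated perturbations. This is an illustrative sketch of the principle only, not the authors' implementation; the kernel, grid size, and function names are hypothetical.

```python
import random

def correlated_height_sample(mean, kernel, rng):
    """Draw one terrain-height sample whose noise is spatially correlated.

    mean   : 2D list of mean heights
    kernel : 1D smoothing weights applied separably; convolving white
             noise with this kernel induces local spatial correlation
             (the structured-operator idea, heavily simplified).
    """
    h, w = len(mean), len(mean[0])
    noise = [[rng.gauss(0.0, 1.0) for _ in range(w)] for _ in range(h)]
    k = len(kernel) // 2
    # separable convolution with replicate padding: rows, then columns
    rows = [[sum(kernel[j + k] * noise[i][min(max(x + j, 0), w - 1)]
                 for j in range(-k, k + 1)) for x in range(w)]
            for i in range(h)]
    corr = [[sum(kernel[j + k] * rows[min(max(i + j, 0), h - 1)][x]
                 for j in range(-k, k + 1)) for x in range(w)]
            for i in range(h)]
    return [[mean[i][x] + corr[i][x] for x in range(w)] for i in range(h)]

rng = random.Random(0)
sample = correlated_height_sample([[0.0] * 8 for _ in range(8)],
                                  [0.25, 0.5, 0.25], rng)
```

Each such sample could then be pushed through a differentiable physics step to obtain one trajectory draw; repeating over samples yields a trajectory distribution.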
Safe Active Navigation and Exploration for Planetary Environments Using Proprioceptive Measurements
Jiang, Matthew, Liu, Shipeng, Qian, Feifei
Legged robots can sense terrain through force interactions during locomotion, offering more reliable traversability estimates than remote sensing and serving as scouts for guiding wheeled rovers in challenging environments. However, even legged scouts face challenges when traversing highly deformable or unstable terrain. We present Safe Active Exploration for Granular Terrain (SAEGT), a navigation framework that enables legged robots to safely explore unknown granular environments using proprioceptive sensing, particularly where visual input fails to capture terrain deformability. SAEGT estimates the safe region and frontier region online from leg-terrain interactions using Gaussian Process regression for traversability assessment, with a reactive controller for real-time safe exploration and navigation. In simulation, SAEGT demonstrated its ability to safely explore and navigate toward a specified goal using only proprioceptively estimated traversability.
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > Texas > Montgomery County > The Woodlands (0.04)
- North America > United States > California > Los Angeles County > Pasadena (0.04)
- (2 more...)
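The regression step the SAEGT abstract describes — fitting a traversability score over positions from leg-terrain contact measurements — can be sketched with a minimal RBF-kernel Gaussian Process posterior mean. The length-scale, noise level, and all names below are illustrative assumptions, not SAEGT's actual parameters.

```python
import math

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel over 2D positions."""
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * ls * ls))

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_traversability(train_xy, train_t, query, noise=1e-2):
    """GP posterior mean of traversability at `query` from foot-contact labels."""
    n = len(train_xy)
    K = [[rbf(train_xy[i], train_xy[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, train_t)
    return sum(rbf(query, train_xy[i]) * alpha[i] for i in range(n))
```

Far from any contact the posterior mean falls back to the prior (zero here), which is what lets such a model separate an explored safe region from an unknown frontier.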
ZeST: an LLM-based Zero-Shot Traversability Navigation for Unknown Environments
Gummadi, Shreya, Gasparino, Mateus V., Capezzuto, Gianluca, Becker, Marcelo, Chowdhary, Girish
The advancement of robotics and autonomous navigation systems hinges on the ability to accurately predict terrain traversability. Traditional methods for generating datasets to train these prediction models often involve putting robots into potentially hazardous environments, posing risks to equipment and safety. To solve this problem, we present ZeST, a novel approach leveraging the visual reasoning capabilities of Large Language Models (LLMs) to create a traversability map in real time without exposing robots to danger. Our approach not only performs zero-shot traversability estimation and mitigates the risks associated with real-world data collection but also accelerates the development of advanced navigation systems, offering a cost-effective and scalable solution. To support our findings, we present navigation results in both controlled indoor and unstructured outdoor environments. As shown in the experiments, our method provides safer navigation than other state-of-the-art methods, consistently reaching the final goal.

The development of autonomous navigation systems is a cornerstone of robotics, with terrain traversability prediction being a critical component [1], [2], [3], [4], [5]. Traversability prediction refers to the ability of a robot to assess whether a given terrain is passable or poses risks to its operation.
- North America > United States > Illinois (0.04)
- South America > Brazil (0.04)
- Asia > Middle East > Republic of Türkiye > Karaman Province > Karaman (0.04)
- Research Report > Promising Solution (0.68)
- Research Report > New Finding (0.48)
Mars Traversability Prediction: A Multi-modal Self-supervised Approach for Costmap Generation
Xie, Zongwu, Yun, Kaijie, Liu, Yang, Ji, Yiming, Li, Han
We present a robust multi-modal framework for predicting traversability costmaps for planetary rovers. Our model fuses camera and LiDAR data to produce a bird's-eye-view (BEV) terrain costmap, trained in a self-supervised manner using IMU-derived labels. Key updates include a DINOv3-based image encoder, FiLM-based sensor fusion, and an optimization loss combining Huber and smoothness terms. Experimental ablations (removing image color, occluding inputs, adding noise) show only minor changes in MAE/MSE (e.g., MAE increases from ~0.0775 to ~0.0915 when the LiDAR is sparsified), indicating that geometry dominates the learned cost and that the model is highly robust. We attribute the small performance differences to the IMU labeling primarily reflecting terrain geometry rather than semantics, and to limited data diversity. Unlike prior work claiming large gains, we emphasize our contributions: (1) a high-fidelity, reproducible simulation environment; (2) a self-supervised IMU-based labeling pipeline; and (3) a strong multi-modal BEV costmap prediction model. We discuss limitations and future work such as domain generalization and dataset expansion.
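The loss the abstract names — a Huber data term plus a smoothness term over the BEV costmap — can be sketched as follows. The weight `lam` and the Huber `delta` are placeholder values, not the paper's settings.

```python
def huber(err, delta=1.0):
    """Huber penalty: quadratic near zero, linear in the tails."""
    a = abs(err)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

def costmap_loss(pred, target, lam=0.1, delta=1.0):
    """Mean Huber data term plus a total-variation-style smoothness
    penalty over horizontal and vertical neighbors of a BEV grid."""
    h, w = len(pred), len(pred[0])
    data = sum(huber(pred[i][j] - target[i][j], delta)
               for i in range(h) for j in range(w)) / (h * w)
    smooth = 0.0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                smooth += abs(pred[i][j] - pred[i][j + 1])
            if i + 1 < h:
                smooth += abs(pred[i][j] - pred[i + 1][j])
    return data + lam * smooth
```

The Huber term keeps outlier IMU labels from dominating, while the smoothness term discourages spurious high-frequency structure in the predicted costmap.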
TANGO: Traversability-Aware Navigation with Local Metric Control for Topological Goals
Podgorski, Stefan, Garg, Sourav, Hosseinzadeh, Mehdi, Mares, Lachlan, Dayoub, Feras, Reid, Ian
Visual navigation in robotics traditionally relies on globally-consistent 3D maps or learned controllers, which can be computationally expensive and difficult to generalize across diverse environments. In this work, we present a novel RGB-only, object-level topometric navigation pipeline that enables zero-shot, long-horizon robot navigation without requiring 3D maps or pre-trained controllers. Our approach integrates global topological path planning with local metric trajectory control, allowing the robot to navigate towards object-level sub-goals while avoiding obstacles. We address key limitations of previous methods by continuously predicting local trajectory using monocular depth and traversability estimation, and incorporating an auto-switching mechanism that falls back to a baseline controller when necessary. The system operates using foundational models, ensuring open-set applicability without the need for domain-specific fine-tuning. We demonstrate the effectiveness of our method in both simulated environments and real-world tests, highlighting its robustness and deployability. Our approach outperforms existing state-of-the-art methods, offering a more adaptable and effective solution for visual navigation in open-set environments. The source code is made publicly available: https://github.com/podgorki/TANGO.
- Oceania > Australia > South Australia > Adelaide (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Asia > Middle East > UAE (0.04)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
Scene-Agnostic Traversability Labeling and Estimation via a Multimodal Self-supervised Framework
Fang, Zipeng, Wang, Yanbo, Zhao, Lei, Chen, Weidong
Traversability estimation is critical for enabling robots to navigate across diverse terrains and environments. While recent self-supervised learning methods achieve promising results, they often fail to capture the characteristics of non-traversable regions. Moreover, most prior works concentrate on a single modality, overlooking the complementary strengths offered by integrating heterogeneous sensory modalities for more robust traversability estimation. To address these limitations, we propose a multimodal self-supervised framework for traversability labeling and estimation. First, our annotation pipeline integrates footprint, LiDAR, and camera data as prompts for a vision foundation model, generating traversability labels that account for both semantic and geometric cues. Then, leveraging these labels, we train a dual-stream network that jointly learns from different modalities in a decoupled manner, enhancing its capacity to recognize diverse traversability patterns. In addition, we incorporate sparse LiDAR-based supervision to mitigate the noise introduced by pseudo labels. Finally, extensive experiments conducted across urban, off-road, and campus environments demonstrate the effectiveness of our approach. The proposed automatic labeling method consistently achieves around 88% IoU across diverse datasets. Compared to existing self-supervised state-of-the-art methods, our multimodal traversability estimation network yields consistently higher IoU, improving by 1.6-3.5% on all evaluated datasets.
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.68)
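The IoU figures reported in the abstract (around 88% for labeling, 1.6-3.5% gains for estimation) refer to the standard intersection-over-union of binary traversability masks, which for reference is computed as:

```python
def iou(pred, gt):
    """Intersection-over-union for flattened binary traversability masks.

    An empty union (both masks all zero) is scored as a perfect 1.0 by
    convention; other conventions exist.
    """
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union if union else 1.0
```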
Capsizing-Guided Trajectory Optimization for Autonomous Navigation with Rough Terrain
Zhang, Wei, Wang, Yinchuan, Lu, Wangtao, Zhang, Pengyu, Zhang, Xiang, Wang, Yue, Wang, Chaoqun
It is a challenging task for ground robots to autonomously navigate in harsh environments due to the presence of non-trivial obstacles and uneven terrain. This requires trajectory planning that balances safety and efficiency. The primary challenge is to generate a feasible trajectory that prevents the robot from tipping over while ensuring effective navigation. In this paper, we propose a capsizing-aware trajectory planner (CAP) to achieve trajectory planning on uneven terrain. The tip-over stability of the robot on rough terrain is analyzed. Based on this analysis, we define the traversable orientation, which indicates the safe range of robot orientations. This orientation is then incorporated into a capsizing-safety constraint for trajectory optimization. We employ a graph-based solver to compute a robust and feasible trajectory while adhering to the capsizing-safety constraint. Extensive simulation and real-world experiments validate the effectiveness and robustness of the proposed method. The results demonstrate that CAP outperforms existing state-of-the-art approaches, providing enhanced navigation performance on uneven terrains.
- Asia > China > Ningxia Hui Autonomous Region > Yinchuan (0.04)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Asia > China > Shandong Province > Jinan (0.04)
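As one hedged illustration of what an orientation-dependent tip-over check can look like: a static box-model criterion compares the slope-induced pitch and roll at a given yaw against thresholds set by the footprint half-extents and center-of-mass height. This is a generic stability heuristic for intuition only, not CAP's actual analysis; every parameter below is hypothetical.

```python
import math

def traversable_orientation(normal, yaw, half_len, half_wid, com_h):
    """Check whether heading `yaw` (radians) is statically tip-over safe
    on a slope with unit terrain normal (nx, ny, nz).

    Box model: the robot tips when pitch or roll exceeds
    atan(footprint half-extent / CoM height). Illustrative only.
    """
    nx, ny, nz = normal
    fwd = (math.cos(yaw), math.sin(yaw))
    # slope-induced pitch along the heading, and roll laterally to it
    pitch = math.atan2(-(nx * fwd[0] + ny * fwd[1]), nz)
    roll = math.atan2(-(-nx * fwd[1] + ny * fwd[0]), nz)
    return (abs(pitch) < math.atan(half_len / com_h)
            and abs(roll) < math.atan(half_wid / com_h))
```

Sweeping `yaw` over [0, 2π) with such a test yields a safe range of orientations per cell, which is the kind of quantity a capsizing-safety constraint can then restrict the trajectory to.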
MOSU: Autonomous Long-range Robot Navigation with Multi-modal Scene Understanding
Liang, Jing, Weerakoon, Kasun, Song, Daeun, Kirubaharan, Senthurbavan, Xiao, Xuesu, Manocha, Dinesh
We present MOSU, a novel autonomous long-range navigation system that enhances global navigation for mobile robots through multimodal perception and on-road scene understanding. MOSU addresses the outdoor robot navigation challenge by integrating geometric, semantic, and contextual information to ensure comprehensive scene understanding. The system combines GPS and QGIS map-based routing for high-level global path planning and multi-modal trajectory generation for local navigation refinement. For trajectory generation, MOSU leverages multi-modalities: LiDAR-based geometric data for precise obstacle avoidance, image-based semantic segmentation for traversability assessment, and Vision-Language Models (VLMs) to capture social context and enable the robot to adhere to social norms in complex environments. This multi-modal integration improves scene understanding and enhances traversability, allowing the robot to adapt to diverse outdoor conditions. We evaluate our system in real-world on-road environments and benchmark it on the GND dataset, achieving a 10% improvement in traversability on navigable terrains while maintaining a comparable navigation distance to existing global navigation methods.
- North America > United States > Maryland > Prince George's County > College Park (0.14)
- North America > United States > Virginia > Fairfax County > Fairfax (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Planning & Scheduling (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)
I Move Therefore I Learn: Experience-Based Traversability in Outdoor Robotics
de Miguel, Miguel Ángel, Beltrán, Jorge, Cely, Juan S., Martín, Francisco, Manzanares, Juan Carlos, García, Alberto
Accurate traversability estimation is essential for safe and effective navigation of outdoor robots operating in complex environments. This paper introduces a novel experience-based method that allows robots to autonomously learn which terrains are traversable based on prior navigation experience, without relying on extensive pre-labeled datasets. The approach integrates elevation and texture data into multi-layered grid maps, which are processed using a variational autoencoder (VAE) trained on a generic texture dataset. During an initial teleoperated phase, the robot collects sensory data while moving around the environment. These experiences are encoded into compact feature vectors and clustered using the BIRCH algorithm to represent traversable terrain areas efficiently. In deployment, the robot compares new terrain patches to its learned feature clusters to assess traversability in real time. The proposed method does not require training with data from the targeted scenarios, generalizes across diverse surfaces and platforms, and dynamically adapts as new terrains are encountered. Extensive evaluations on both synthetic benchmarks and real-world scenarios with wheeled and legged robots demonstrate its effectiveness, robustness, and superior adaptability compared to state-of-the-art approaches.
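The deployment step this abstract describes — comparing a new terrain patch's encoded feature vector against clusters of previously traversed terrain — can be sketched as a nearest-centroid distance test. The centroid list stands in for the paper's BIRCH cluster summaries; the radius and feature values are hypothetical.

```python
import math

def nearest_cluster_distance(feature, centroids):
    """Euclidean distance from a feature vector to its closest centroid."""
    return min(math.dist(feature, c) for c in centroids)

def is_traversable(feature, centroids, radius):
    """A patch is deemed traversable if its encoded feature lies within
    `radius` of any cluster of terrain the robot has already traversed."""
    return nearest_cluster_distance(feature, centroids) <= radius
```

New terrain that the robot successfully traverses would add or update centroids, which is how such a scheme adapts online without pre-labeled data.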
Traversability-aware path planning in dynamic environments
Marchukov, Yaroslav, Montano, Luis
Planning in environments with moving obstacles remains a significant challenge in robotics. While many works focus on navigation and path planning in obstacle-dense spaces, traversing such congested regions is often avoidable by selecting alternative routes. This paper presents Traversability-aware FMM (Tr-FMM), a path planning method that computes paths in dynamic environments while avoiding crowded regions. The method operates in two steps: first, it discretizes the environment, identifying regions and their distribution; second, it evaluates the traversability of regions, aiming to minimize both obstacle risks and goal deviation. The path is then computed by propagating the wavefront through regions with higher traversability. Simulated and real-world experiments demonstrate that the approach ensures significant safety by keeping the robot away from obstacles while minimizing excessive goal deviations.

Introduction
Robots operating without direct human supervision or intervention in everyday life are becoming increasingly common. Consequently, moving in spaces shared with humans has emerged as a significant challenge in robotics [1]. Typical examples of such environments include indoor settings like stores, warehouses, and airports [2]. In these crowded or busy environments, people often move unpredictably or without paying sufficient attention to robots, potentially leading to collisions or deadlock situations from which a robot cannot recover [3]. Therefore, it is crucial that robots are capable of avoiding such situations, where people are treated as dynamic obstacles that need to be avoided. Classic and widely used navigation techniques, such as DWA [4] and elastic bands [5], struggle in the aforementioned situations. DWA is designed for static scenarios, while elastic bands are not suited for highly dynamic and crowded environments.
Navigation methods that account for dynamic obstacles, such as VO [6], RVO [7], and ORCA [8], are designed as local planners for maneuvering among people or moving obstacles, rather than as global planners for such scenarios. More recent approaches, often based on advanced learning techniques [9][10][11], demonstrate higher success rates in avoiding collisions. All these techniques are most useful when the robot is already inside a crowd or has no choice but to pass through one, accepting the potential risk of collision.
- North America > United States > Iowa (0.04)
- Europe > Spain > Aragón > Zaragoza Province > Zaragoza (0.04)
- Asia > Middle East > Republic of Türkiye > Karaman Province > Karaman (0.04)
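The wavefront propagation through higher-traversability regions that the Tr-FMM abstract describes can be sketched with Dijkstra's algorithm on a grid, using inverse traversability as the per-cell step cost. Dijkstra is a discrete stand-in for the fast marching method here, not the paper's implementation; the grids are toy examples.

```python
import heapq

def wavefront_path_cost(trav, start, goal):
    """Minimum accumulated cost from start to goal on a grid, where
    entering a cell costs 1 / traversability (cells with score 0 are
    impassable). Dijkstra as a discrete stand-in for fast marching."""
    h, w = len(trav), len(trav[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    dist[start[0]][start[1]] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            return d
        if d > dist[i][j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and trav[ni][nj] > 0:
                nd = d + 1.0 / trav[ni][nj]
                if nd < dist[ni][nj]:
                    dist[ni][nj] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return INF
```

Because low-traversability (crowded) cells carry a high inverse cost, the expanding front naturally routes around congested regions rather than through them, which is the behavior the abstract highlights.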