Lawrance, Nicholas
Online Adaptive Traversability Estimation through Interaction for Unstructured, Densely Vegetated Environments
Ruetz, Fabio A., Lawrance, Nicholas, Hernández, Emili, Borges, Paulo V. K., Peynot, Thierry
Navigating densely vegetated environments poses significant challenges for autonomous ground vehicles. Learning-based systems typically use prior and in-situ data to predict terrain traversability but often degrade in performance when encountering out-of-distribution elements caused by rapid environmental changes or novel conditions. This paper presents a novel, lidar-only, online adaptive traversability estimation (TE) method that trains a model directly on the robot using self-supervised data collected through robot-environment interaction. The proposed approach utilises a probabilistic 3D voxel representation to integrate lidar measurements and robot experience, creating a salient environmental model. To ensure computational efficiency, a sparse graph-based representation is employed to update temporally evolving voxel distributions. Extensive experiments with an unmanned ground vehicle in natural terrain demonstrate that the system adapts to complex environments with as little as 8 minutes of operational data, achieving a Matthews Correlation Coefficient (MCC) score of 0.63 and enabling safe navigation in densely vegetated environments. This work examines different training strategies for voxel-based TE methods and offers recommendations for improving adaptability. The proposed method is validated on a robotic platform with limited computational resources (25 W GPU), achieving accuracy comparable to offline-trained models while maintaining reliable performance across varied environments.
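To make the self-supervised setup concrete, here is a minimal Python sketch of the labelling and evaluation loop suggested by the abstract: voxels swept by the robot body become positive examples, lidar-occupied voxels that were never traversed become negatives, and predictions are scored with the Matthews Correlation Coefficient. The voxel size, the synthetic point data, and the height-threshold stand-in classifier are illustrative assumptions, not the paper's implementation.

import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
VOXEL = 0.1  # m, voxel edge length

def voxel_keys(points):
    """Map 3D points to integer voxel indices."""
    return [tuple(k) for k in np.floor(points / VOXEL).astype(int)]

# Synthetic stand-ins for real data: points swept by the robot body and lidar returns.
footprint_pts = rng.uniform([0, 0, 0.0], [5, 5, 0.3], size=(500, 3))
lidar_pts = rng.uniform([0, 0, 0.0], [5, 5, 2.0], size=(2000, 3))

labels = {}
for k in voxel_keys(footprint_pts):   # traversed by the robot -> traversable
    labels[k] = 1
for k in voxel_keys(lidar_pts):       # occupied but never traversed -> non-traversable
    labels.setdefault(k, 0)

# Stand-in classifier: call low voxels traversable; the online-trained model
# described in the abstract would replace this rule.
keys = list(labels)
y_true = np.array([labels[k] for k in keys])
y_pred = np.array([1 if k[2] * VOXEL < 0.4 else 0 for k in keys])

print("MCC:", matthews_corrcoef(y_true, y_pred))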
Under-Canopy Navigation using Aerial Lidar Maps
de Lima, Lucas Carvalho, Lawrance, Nicholas, Khosoussi, Kasra, Borges, Paulo, Bruenig, Michael
Autonomous navigation in unstructured natural environments poses a significant challenge. In goal navigation tasks without prior information, the limited look-ahead of onboard sensors utilised by robots compromises path efficiency. We propose a novel approach that leverages an above-the-canopy aerial map for improved ground robot navigation. Our system utilises aerial lidar scans to create a 3D probabilistic occupancy map, uniquely incorporating the uncertainty in the aerial vehicle's trajectory for improved accuracy. Novel path planning cost functions are introduced, combining path length with obstruction risk estimated from the probabilistic map. The D-Star Lite algorithm then calculates an optimal (minimum-cost) path to the goal. This system also allows for dynamic replanning upon encountering unforeseen obstacles on the ground. Extensive experiments and ablation studies in simulated and real forests demonstrate the effectiveness of our system.
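As a rough illustration of the cost formulation described above, the sketch below (Python) combines edge length with an obstruction-risk term read from a probabilistic occupancy grid; a planner such as D-Star Lite would then search over these edge costs. The log-risk form, the weight lam, and the random grid are assumptions for illustration, not the paper's cost functions.

import numpy as np

rng = np.random.default_rng(1)
# P(obstructed) per grid cell, e.g. derived from the aerial lidar occupancy map.
occ_prob = rng.uniform(0.0, 0.4, size=(100, 100))

def edge_cost(a, b, lam=5.0):
    """Cost of moving between adjacent cells a and b (tuples of indices)."""
    length = np.linalg.norm(np.subtract(a, b))
    risk = -np.log(1.0 - occ_prob[b])   # grows without bound as P(obstructed) -> 1
    return length + lam * risk

print(edge_cost((10, 10), (10, 11)))

Updating occ_prob when ground sensing contradicts the aerial prior and re-running the search would give the dynamic replanning behaviour mentioned in the abstract.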
Autonomous Active Mapping in Steep Alpine Environments with Fixed-wing Aerial Vehicles
Lim, Jaeyoung, Achermann, Florian, Lawrance, Nicholas, Siegwart, Roland
Monitoring large-scale, remote alpine environments is a crucial task, especially with respect to hazardous events such as avalanches. One key input for avalanche risk forecasting is imagery of released avalanches. As these events occur in remote and potentially dangerous locations, such data is difficult to obtain. Fixed-wing vehicles, due to their long range and high travel speeds, are a promising platform for gathering aerial imagery to map avalanche activity. However, operating such vehicles in mountainous terrain remains a challenge due to the complex topography, regulations, and uncertain environment. In this work, we present a system capable of safely navigating and mapping an avalanche with a fixed-wing aerial system, and discuss the challenges that arise when executing such a mission. Our field experiments show that we can effectively navigate steep terrain while maximizing map quality. We expect this work to enable more autonomous operation of fixed-wing vehicles in alpine environments and maximize the quality of the data gathered.
Alternative Interfaces for Human-initiated Natural Language Communication and Robot-initiated Haptic Feedback: Towards Better Situational Awareness in Human-Robot Collaboration
Bennie, Callum, Casey, Bridget, Paris, Cecile, Kulic, Dana, Tidd, Brendan, Lawrance, Nicholas, Pitt, Alex, Talbot, Fletcher, Williams, Jason, Howard, David, Sikka, Pavan, Senaratne, Hashini
This article presents an implementation of a natural-language speech interface and a haptic feedback interface that enable a human supervisor to provide guidance to, request information from, and receive status updates from a Spot robot. We provide insights gained during preliminary user testing of the interface in a realistic robot exploration scenario.
WindSeer: Real-time volumetric wind prediction over complex terrain aboard a small UAV
Achermann, Florian, Stastny, Thomas, Danciu, Bogdan, Kolobov, Andrey, Chung, Jen Jen, Siegwart, Roland, Lawrance, Nicholas
Real-time high-resolution wind predictions are beneficial for various applications including safe manned and unmanned aviation. Current weather models require too much compute and lack the necessary predictive capabilities, as they are valid only at scales of multiple kilometers and hours, far coarser spatial and temporal resolutions than these applications require. Our work, for the first time, demonstrates the ability to predict low-altitude wind in real-time on limited-compute devices, from only sparse measurement data. We train a neural network, WindSeer, using only synthetic data from computational fluid dynamics simulations and show that it can successfully predict real wind fields over terrain with known topography from just a few noisy and spatially clustered wind measurements. WindSeer can generate accurate predictions at different resolutions and domain sizes on previously unseen topography without retraining. We demonstrate that the model successfully predicts historical wind data collected by weather stations and wind measured onboard drones.
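The sketch below (Python/PyTorch) shows the kind of gridded input such a learned predictor consumes: a terrain channel plus sparse, masked wind measurements, from which a dense 3D wind field is predicted. The channel layout, grid size, and the tiny stand-in CNN are assumptions for illustration; they are not WindSeer's actual architecture or interface.

import torch
import torch.nn as nn

D = H = W = 32                                  # grid cells
x = torch.zeros(1, 5, D, H, W)                  # channels: [terrain, u, v, w, measurement mask]
x[0, 0, :, :, :8] = 1.0                         # crude terrain: bottom cells marked as filled

# A few sparse wind measurements (e.g. from the drone's own wind estimate).
for (i, j, k), (u, v, w) in {(20, 5, 16): (3.0, 0.5, -0.2),
                             (25, 8, 20): (2.5, 0.7, 0.0)}.items():
    x[0, 1:4, i, j, k] = torch.tensor([u, v, w])
    x[0, 4, i, j, k] = 1.0                      # mark the cell as measured

stand_in = nn.Sequential(                       # placeholder for the trained network
    nn.Conv3d(5, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 3, 3, padding=1),             # dense u, v, w prediction
)
wind = stand_in(x)
print(wind.shape)                               # torch.Size([1, 3, 32, 32, 32])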
Safe Low-Altitude Navigation in Steep Terrain with Fixed-Wing Aerial Vehicles
Lim, Jaeyoung, Achermann, Florian, Girod, Rik, Lawrance, Nicholas, Siegwart, Roland
Fixed-wing aerial vehicles provide an efficient way to navigate long distances or cover large areas for environmental monitoring applications. By design, they also require large open spaces due to limited maneuverability. However, strict regulatory and safety altitude limits constrain the available space. Especially in complex, confined, or steep terrain, ensuring the vehicle does not enter an inevitable collision state (ICS) can be challenging. In this work, we propose a strategy to find safe paths that do not enter an ICS while navigating within tight altitude constraints. The method uses periodic paths to efficiently classify ICSs. A sampling-based planner creates collision-free and kinematically feasible paths that begin and end in safe periodic (circular) paths. We show that, in realistic terrain, using circular periodic paths can simplify goal selection by making it yaw agnostic rather than requiring a constrained goal yaw. We demonstrate our approach by dynamically planning safe paths in real-time during a flight test in steep, complex alpine terrain.
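A minimal Python sketch of the safe-periodic-path idea follows: before accepting a terminal state, verify that a circular loiter at the candidate centre, altitude, and radius clears the terrain everywhere on the circle by a margin. The synthetic terrain model, radius, and clearance values are illustrative assumptions, not the paper's parameters.

import numpy as np

def terrain_height(x, y):
    """Stand-in terrain model (a smooth slope with a bump)."""
    return 0.2 * x + 30.0 * np.exp(-((x - 200) ** 2 + (y - 150) ** 2) / 5e3)

def loiter_is_safe(cx, cy, alt, radius=80.0, clearance=50.0, n=180):
    """Check that a circular loiter clears the terrain by `clearance` everywhere."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    xs, ys = cx + radius * np.cos(theta), cy + radius * np.sin(theta)
    return bool(np.all(alt - terrain_height(xs, ys) >= clearance))

# A sampling-based planner would only accept goal/terminal states whose loiter
# circle passes this check, so the vehicle always has a safe circle to fall
# back to and never commits to an inevitable collision state.
print(loiter_is_safe(200.0, 150.0, alt=150.0))
print(loiter_is_safe(200.0, 150.0, alt=60.0))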
ForestTrav: Accurate, Efficient and Deployable Forest Traversability Estimation for Autonomous Ground Vehicles
Ruetz, Fabio, Lawrance, Nicholas, Hernández, Emili, Borges, Paulo, Peynot, Thierry
Autonomous navigation in unstructured vegetated environments remains an open challenge. To successfully operate in these settings, ground vehicles must assess the traversability of the environment and determine which vegetation is pliable enough to push through. In this work, we propose a novel method that combines a high-fidelity, feature-rich 3D voxel representation with sparse convolutional neural networks (SCNNs), leveraging their structural context and sparseness to perform traversability estimation (TE) in densely vegetated environments. The proposed method is thoroughly evaluated on an accurately labeled real-world data set that we provide to the community. It is shown to outperform state-of-the-art methods by a significant margin (0.59 vs. 0.39 MCC score at 0.1 m voxel resolution) in challenging scenes and to generalize to unseen environments. In addition, the method is economical in the amount of training data and training time required: a model is trained in minutes on a desktop computer. We show that by exploiting the context of the environment, our method can use different feature combinations with only limited performance variations. For example, our approach can be used with lidar-only features, whilst still assessing complex vegetated environments accurately, which was not demonstrated previously in the literature in such environments. In addition, we propose an approach to assess a traversability estimator's sensitivity to information quality and show that our method's sensitivity is low.
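To illustrate how an SCNN consumes such a voxel map, the sketch below (Python) builds a coordinate/feature tensor for the occupied voxels and runs a two-layer sparse 3D CNN, using MinkowskiEngine as one possible sparse-convolution backend. The layer sizes, feature set, and random data are illustrative assumptions and do not reproduce the ForestTrav architecture.

import torch
import torch.nn as nn
import MinkowskiEngine as ME

N, F_IN = 1000, 9                          # number of occupied voxels, per-voxel features
idx = torch.randperm(64 * 64 * 64)[:N]     # unique voxel indices in a 64^3 grid
coords = torch.stack([idx // (64 * 64), (idx // 64) % 64, idx % 64], dim=1).int()
coords = torch.cat([torch.zeros(N, 1, dtype=torch.int32), coords], dim=1)  # prepend batch id
feats = torch.rand(N, F_IN)                # e.g. occupancy odds, intensity stats, roughness

net = nn.Sequential(
    ME.MinkowskiConvolution(F_IN, 32, kernel_size=3, dimension=3),
    ME.MinkowskiReLU(),
    ME.MinkowskiConvolution(32, 1, kernel_size=3, dimension=3),  # per-voxel traversability logit
)

x = ME.SparseTensor(features=feats, coordinates=coords)
prob_traversable = torch.sigmoid(net(x).F).squeeze()
print(prob_traversable.shape)              # one score per occupied voxel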