Stathoulopoulos, Nikolaos
Deployment of an Aerial Multi-agent System for Automated Task Execution in Large-scale Underground Mining Environments
Dahlquist, Niklas, Nordström, Samuel, Stathoulopoulos, Nikolaos, Lindqvist, Björn, Saradagi, Akshit, Nikolakopoulos, George
In this article, we present a framework for deploying an aerial multi-agent system in large-scale subterranean environments with minimal infrastructure for supporting multi-agent operations. The multi-agent objective is to optimally and reactively allocate and execute inspection tasks in a mine, which are entered by a mine operator on-the-fly. The assignment of currently available tasks to the team of agents is accomplished through an auction-based system, where the agents bid for the available tasks and a central auctioneer uses the bids to optimally assign tasks to agents. A mobile Wi-Fi mesh supports inter-agent communication and bi-directional communication between the agents and the task allocator, while the task execution is performed completely infrastructure-free. Given a task to be accomplished, a reliable and modular agent behavior is synthesized by generating behavior trees from a pool of agent capabilities, using a back-chaining approach. The auction system in the proposed framework is reactive and supports the addition of new operator-specified tasks on-the-go, at any point, through a user-friendly operator interface. The framework has been validated in a real underground mining environment using three aerial agents, with several inspection locations spread across an environment of almost 200 meters. The proposed framework can be utilized for missions involving rapid inspection, gas detection, distributed sensing, and mapping in a subterranean environment. The proposed framework and its field deployment contribute towards furthering reliable automation in large-scale subterranean environments, offloading both routine and dangerous tasks from human operators to autonomous aerial robots.

The use of autonomous robotic platforms in industrial production facilities is on the rise, both to increase profitability and to increase safety for human operators [1]. Specifically, in deep underground mining, where the fundamental risk of accidents is high, the industry is focusing on creating a safer environment for humans by deploying robotic systems to either execute dangerous tasks or verify safety before authorizing human entry. Through efforts in the mining industry, human workers have already been moved to safer locations in several critical operations via, for instance, teleoperation of heavy machinery.
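The auction step described above can be summarized, in simplified form, as a cost-minimizing assignment between agents and open tasks. The following is a minimal sketch of that idea only, not the deployed system: bids are assumed to be scalar travel-cost estimates, and SciPy's linear_sum_assignment stands in for the auctioneer's optimal assignment step.

    # Minimal sketch of an auction-style allocation step (illustrative only):
    # each agent submits a bid (e.g., an estimated travel cost) for every open
    # task, and a central auctioneer computes the cost-optimal assignment.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def allocate_tasks(bids: np.ndarray) -> dict:
        """bids[i, j] is agent i's bid (cost) for task j; returns {agent: task}."""
        agents, tasks = linear_sum_assignment(bids)  # minimizes total bid cost
        return dict(zip(agents.tolist(), tasks.tolist()))

    # Example: 3 agents bidding on 3 inspection tasks (hypothetical costs in meters).
    bids = np.array([[120.0, 45.0, 90.0],
                     [60.0, 150.0, 30.0],
                     [80.0, 70.0, 200.0]])
    print(allocate_tasks(bids))  # e.g. {0: 1, 1: 2, 2: 0}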
A Minimal Subset Approach for Efficient and Scalable Loop Closure
Stathoulopoulos, Nikolaos, Kanellakis, Christoforos, Nikolakopoulos, George
Loop closure detection in large-scale and long-term missions can be computationally demanding due to the need to identify, verify, and process numerous candidate pairs to establish edge connections for the pose graph optimization. Keyframe sampling mitigates this by reducing the number of frames stored and processed in the back-end system. In this article, we address the gap in optimized keyframe sampling for the combined problem of pose graph optimization and loop closure detection. Our Minimal Subset Approach (MSA) employs an optimization strategy with two key factors, redundancy minimization and information preservation, within a sliding window framework to efficiently reduce redundant keyframes while preserving essential information. This method delivers performance comparable to baseline approaches, while enhancing scalability and reducing computational overhead. Finally, we evaluate MSA on relevant publicly available datasets, showcasing that it performs consistently across a wide range of environments, without requiring any manual parameter tuning.
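As a rough illustration of the underlying idea, and not of the MSA optimization itself, the sketch below prunes a window of keyframes by keeping only those whose descriptor is sufficiently dissimilar from the ones already retained; the similarity threshold and the assumption of unit-normalized descriptors are illustrative.

    # Illustrative sketch of sliding-window keyframe reduction in descriptor
    # space (not the MSA optimization itself): a keyframe is kept only if its
    # descriptor differs enough from every keyframe already retained.
    import numpy as np

    def reduce_window(descriptors: np.ndarray, sim_thresh: float = 0.95) -> list:
        """descriptors: (N, D) unit-normalized rows; returns indices of kept frames."""
        kept = []
        for i, d in enumerate(descriptors):
            if all(np.dot(d, descriptors[j]) < sim_thresh for j in kept):
                kept.append(i)  # informative: not redundant w.r.t. retained frames
        return kept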
Why Sample Space Matters: Keyframe Sampling Optimization for LiDAR-based Place Recognition
Stathoulopoulos, Nikolaos, Sumathy, Vidya, Kanellakis, Christoforos, Nikolakopoulos, George
Recent advances in robotics are pushing real-world autonomy, enabling robots to perform long-term and large-scale missions. A crucial component for successful missions is the incorporation of loop closures through place recognition, which effectively mitigates accumulated pose estimation drift. Despite computational advancements, optimizing performance for real-time deployment remains challenging, especially in resource-constrained mobile robots and multi-robot systems, since conventional keyframe sampling practices in place recognition often result in retaining redundant information or overlooking relevant data, as they rely on fixed sampling intervals or work directly in the 3D space instead of the feature space. To address these concerns, we introduce the concept of sample space in place recognition and demonstrate how different sampling techniques affect the query process and overall performance. We then present a novel keyframe sampling approach for LiDAR-based place recognition, which focuses on redundancy minimization and information preservation in the hyper-dimensional descriptor space. This approach is applicable to both learning-based and handcrafted descriptors, and through experimental validation across multiple datasets and descriptor frameworks, we demonstrate the effectiveness of our proposed method, showing that it can jointly minimize redundancy and preserve essential information in real time. The proposed approach maintains robust performance across various datasets without requiring parameter tuning, contributing to more efficient and reliable place recognition for a wide range of robotic applications.
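To make the contrast concrete, the following sketch compares fixed-interval sampling with a simple greedy sampler operating in the descriptor (sample) space; the step size, distance threshold, and greedy rule are illustrative assumptions rather than the proposed optimization.

    # Sketch contrasting fixed-interval sampling with sampling in the
    # descriptor ("sample") space; thresholds and distances are assumed values.
    import numpy as np

    def fixed_interval(n_frames: int, step: int = 10) -> list:
        return list(range(0, n_frames, step))          # ignores scene content

    def descriptor_space(descriptors: np.ndarray, min_dist: float = 0.3) -> list:
        kept = [0]
        for i in range(1, len(descriptors)):
            if np.linalg.norm(descriptors[i] - descriptors[kept[-1]]) > min_dist:
                kept.append(i)                          # keep only novel appearance
        return kept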
FRAME: A Modular Framework for Autonomous Map-merging: Advancements in the Field
Stathoulopoulos, Nikolaos, Lindqvist, Björn, Koval, Anton, Agha-mohammadi, Ali-akbar, Nikolakopoulos, George
In this article, a novel approach for merging 3D point cloud maps in the context of egocentric multi-robot exploration is presented. Unlike traditional methods, the proposed approach leverages state-of-the-art place recognition and learned descriptors to efficiently detect overlap between maps, eliminating the need for the time-consuming global feature extraction and feature matching process. The estimated overlapping regions are used to calculate a homogeneous rigid transform, which serves as an initial condition for the GICP point cloud registration algorithm to refine the alignment between the maps. The advantages of this approach include faster processing time, improved accuracy, and increased robustness in challenging environments. Furthermore, the effectiveness of the proposed framework is successfully demonstrated through multiple field missions of robot exploration in a variety of different underground environments.
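As a rough sketch of the refinement stage only, the snippet below seeds a GICP alignment with the initial transform obtained from the estimated overlap; Open3D's GICP implementation is used here as a stand-in for the registration back-end, and the correspondence distance is an assumed parameter.

    # Sketch of the refinement step only, using Open3D's GICP as a stand-in:
    # the place-recognition overlap supplies T_init, which seeds the local
    # point cloud alignment between the two maps.
    import numpy as np
    import open3d as o3d

    def refine_alignment(map_a, map_b, T_init: np.ndarray, max_corr: float = 1.0):
        """map_a, map_b: o3d.geometry.PointCloud; T_init: 4x4 initial guess."""
        result = o3d.pipelines.registration.registration_generalized_icp(
            map_a, map_b, max_corr, T_init,
            o3d.pipelines.registration.TransformationEstimationForGeneralizedICP())
        return result.transformation  # refined 4x4 rigid transform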
RecNet: An Invertible Point Cloud Encoding through Range Image Embeddings for Multi-Robot Map Sharing and Reconstruction
Stathoulopoulos, Nikolaos, Saucedo, Mario A. V., Koval, Anton, Nikolakopoulos, George
Motivated by the constraints of resource-limited robots and the need for effective place recognition in multi-robot systems, this article introduces RecNet, a novel approach that addresses both challenges concurrently. The core of RecNet's methodology involves a transformative process: it projects 3D point clouds into range images, compresses them using an encoder-decoder framework, and subsequently reconstructs the range image, seamlessly restoring the original point cloud. Additionally, RecNet utilizes the latent vector extracted from this process for efficient place recognition tasks. This unique approach not only achieves comparable place recognition results but also maintains a compact representation, suitable for seamless sharing among robots to reconstruct their collective maps. The evaluation of RecNet encompasses an array of metrics, including place recognition performance, structural similarity of the reconstructed point clouds, and the bandwidth transmission advantages derived from sharing only the latent vectors. The reconstructed map paves the way for exploring its usability in navigation, localization, map-merging, and other relevant missions. Our proposed approach is rigorously assessed using both a publicly available dataset and field experiments, confirming its efficacy and potential for real-world applications.
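The compress-and-reconstruct idea can be illustrated with a toy convolutional encoder-decoder over a range image; the layer sizes, the 64x512 image resolution, and the 256-dimensional latent vector are assumptions for illustration and do not reflect RecNet's actual architecture.

    # Toy encoder-decoder over a range image, illustrating the compress and
    # reconstruct idea only; the architecture and sizes are assumptions.
    import torch
    import torch.nn as nn

    class RangeImageAE(nn.Module):
        def __init__(self, latent_dim: int = 256):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x256
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x128
                nn.Flatten(), nn.Linear(32 * 16 * 128, latent_dim))
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 32 * 16 * 128), nn.Unflatten(1, (32, 16, 128)),
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2))

        def forward(self, x):                 # x: (B, 1, 64, 512) range image
            z = self.encoder(x)               # latent vector, shareable for place recognition
            return self.decoder(z), z

In this simplified form, only the latent vector z would need to be transmitted between robots, with the decoder reconstructing an approximate range image (and hence point cloud) on the receiving side.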
Redundant and Loosely Coupled LiDAR-Wi-Fi Integration for Robust Global Localization in Autonomous Mobile Robotics
Stathoulopoulos, Nikolaos, Pagliari, Emanuele, Davoli, Luca, Nikolakopoulos, George
This paper presents a framework addressing the challenge of global localization in autonomous mobile robotics by integrating LiDAR-based descriptors and Wi-Fi fingerprinting in a pre-mapped environment. This is motivated by the increasing demand for reliable localization in complex scenarios, such as urban areas or underground mines, requiring robust systems able to overcome the limitations faced by traditional Global Navigation Satellite System (GNSS)-based localization methods. By leveraging the complementary strengths of LiDAR and Wi-Fi, each used to generate an independent prediction whose confidence is evaluated as an indicator of potential degradation, we propose a redundancy-based approach that enhances the system's overall robustness and accuracy. The proposed framework allows independent operation of the LiDAR and Wi-Fi sensors, ensuring system redundancy. By combining the predictions while considering their confidence levels, we achieve enhanced and consistent performance in localization tasks.
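A minimal sketch of the redundancy idea, under the assumption that each source yields an independent 2D position estimate with a scalar confidence, could look as follows; the confidence model and thresholds are illustrative, not those of the paper.

    # Minimal sketch of confidence-weighted fusion of two independent global
    # position predictions (LiDAR descriptor match and Wi-Fi fingerprint).
    import numpy as np

    def fuse_predictions(p_lidar, c_lidar, p_wifi, c_wifi, c_min: float = 0.2):
        """p_*: (x, y) position estimates; c_*: confidences in [0, 1]."""
        if c_lidar < c_min and c_wifi < c_min:
            return None                       # both sources degraded: no fix
        if c_lidar < c_min:
            return np.asarray(p_wifi)         # fall back to the redundant source
        if c_wifi < c_min:
            return np.asarray(p_lidar)
        w = c_lidar / (c_lidar + c_wifi)      # confidence-weighted combination
        return w * np.asarray(p_lidar) + (1.0 - w) * np.asarray(p_wifi)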
3DEG: Data-Driven Descriptor Extraction for Global re-localization in subterranean environments
Stathoulopoulos, Nikolaos, Koval, Anton, Nikolakopoulos, George
Current global re-localization algorithms are built on top of localization and mapping methods and heavily rely on scan matching and direct point cloud feature extraction, and are therefore vulnerable in featureless, demanding environments like caves and tunnels. In this article, we propose a novel global re-localization framework that: a) does not require an initial guess, unlike most methods, b) has the capability to offer the top-k candidates to choose from, and c) provides an event-based re-localization trigger module for enabling and supporting completely autonomous robotic missions. With the focus on subterranean environments with low features, we opt to use descriptors based on range images from 3D LiDAR scans in order to maintain the depth information of the environment. In our novel approach, we make use of a state-of-the-art data-driven descriptor extraction framework for place recognition and orientation regression and enhance it with the addition of a junction detection module that also utilizes the descriptors for classification purposes.
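For reference, a common way to form such a range image is a spherical projection of the LiDAR scan; the sketch below assumes a 64x900 resolution and a symmetric vertical field of view, which are illustrative values rather than the paper's configuration.

    # Sketch of a spherical projection from a 3D LiDAR scan to a range image;
    # resolution and vertical field of view are assumed values.
    import numpy as np

    def to_range_image(points: np.ndarray, h: int = 64, w: int = 900,
                       fov_up: float = 15.0, fov_down: float = -15.0) -> np.ndarray:
        """points: (N, 3) array of x, y, z; returns an (h, w) range image."""
        r = np.linalg.norm(points, axis=1)
        yaw = np.arctan2(points[:, 1], points[:, 0])
        pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-6))
        u = ((0.5 * (1.0 - yaw / np.pi)) * w).astype(int) % w
        fov = np.radians(fov_up - fov_down)
        v = ((np.radians(fov_up) - pitch) / fov * h).clip(0, h - 1).astype(int)
        image = np.zeros((h, w), dtype=np.float32)
        image[v, u] = r                       # keep the measured range per pixel
        return image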
Irregular Change Detection in Sparse Bi-Temporal Point Clouds using Learned Place Recognition Descriptors and Point-to-Voxel Comparison
Stathoulopoulos, Nikolaos, Koval, Anton, Nikolakopoulos, George
Change detection and irregular object extraction in 3D point clouds is a challenging task that is of high importance not only for autonomous navigation but also for updating existing digital twin models of various industrial environments. This article proposes an innovative approach for change detection in 3D point clouds using deep-learned place recognition descriptors and irregular object extraction based on point-to-voxel comparison. The proposed method first aligns the bi-temporal point clouds using a map-merging algorithm in order to establish a common coordinate frame. Then, it utilizes deep learning techniques to extract robust and discriminative features from the 3D point cloud scans, which are used to detect changes between consecutive point cloud frames and therefore find the changed areas. Finally, the altered areas are sampled and compared between the two time instances to extract any obstructions that caused the area to change. The proposed method was successfully evaluated in real-world field experiments, where it was able to detect different types of changes in 3D point clouds, such as object or muck-pile addition and displacement, showcasing the effectiveness of the approach. The results of this study demonstrate important implications for various applications, including safety and security monitoring in construction sites, as well as mapping and exploration, and suggest potential future research directions in this field.
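The point-to-voxel comparison can be illustrated with a simple occupancy test, assuming the two clouds are already aligned in a common frame; the voxel size and the purely geometric test are illustrative simplifications of the proposed pipeline.

    # Illustrative point-to-voxel comparison for change detection: points of
    # the later cloud whose voxel is unoccupied in the earlier cloud are
    # flagged as changed. Voxel size and pre-aligned clouds are assumptions.
    import numpy as np

    def changed_points(cloud_t0: np.ndarray, cloud_t1: np.ndarray,
                       voxel: float = 0.5) -> np.ndarray:
        """cloud_*: (N, 3) arrays in a common frame; returns changed points of t1."""
        occupied = {tuple(v) for v in np.floor(cloud_t0 / voxel).astype(int)}
        keys_t1 = np.floor(cloud_t1 / voxel).astype(int)
        mask = np.array([tuple(k) not in occupied for k in keys_t1])
        return cloud_t1[mask]                 # candidate additions / displaced objects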
Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration
Kyuroson, Alexander, Dahlquist, Niklas, Stathoulopoulos, Nikolaos, Viswanathan, Vignesh Kottayam, Koval, Anton, Nikolakopoulos, George
Algorithms for autonomous navigation in environments without Global Navigation Satellite System (GNSS) coverage mainly rely on onboard perception systems. These systems commonly incorporate sensors like cameras and Light Detection and Rangings (LiDARs), the performance of which may degrade in the presence of aerosol particles. Thus, there is a need to fuse the data acquired from these sensors with data from Radio Detection and Rangings (RADARs), which can penetrate such particles. Overall, this will improve the performance of localization and collision avoidance algorithms under such environmental conditions. This paper introduces a multimodal dataset from a harsh and unstructured underground environment with aerosol particles. A detailed description of the onboard sensors and of the environment where the dataset was collected is presented to enable a full evaluation of the acquired data. Furthermore, the dataset contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format to facilitate the evaluation of navigation and localization algorithms in such environments. In contrast to existing datasets, the focus of this paper is not only to capture both temporal and spatial data diversity but also to present the impact of harsh conditions on the captured data. Therefore, to validate the dataset, a preliminary comparison of odometry from the onboard LiDARs is presented.
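Assuming ROS 1 bag files, iterating over the recorded topics could look as follows; the bag file name and topic names are hypothetical placeholders, not the dataset's actual naming.

    # Sketch for iterating over synchronized sensor messages in ROS 1 bags;
    # file name and topic names below are hypothetical placeholders.
    import rosbag

    with rosbag.Bag('subt_aerosol_run1.bag') as bag:
        for topic, msg, t in bag.read_messages(
                topics=['/os_cloud_node/points', '/radar/scan', '/camera/image_raw']):
            # e.g., compare LiDAR returns against radar detections under dense aerosols
            print(t.to_sec(), topic)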
FRAME: Fast and Robust Autonomous 3D point cloud Map-merging for Egocentric multi-robot exploration
Stathoulopoulos, Nikolaos, Koval, Anton, Agha-mohammadi, Ali-akbar, Nikolakopoulos, George
This article presents a 3D point cloud map-merging framework for egocentric heterogeneous multi-robot exploration, based on overlap detection and alignment, which is independent of a manual initial guess or prior knowledge of the robots' poses. The proposed solution utilizes state-of-the-art learned place recognition descriptors which, through the framework's main pipeline, offer a fast and robust region overlap estimation, hence eliminating the need for the time-consuming global feature extraction and feature matching process that is typically used in 3D map integration. The region overlap estimation provides a homogeneous rigid transform that is applied as an initial condition to the point cloud registration algorithm Fast-GICP, which provides the final and refined alignment. The efficacy of the proposed framework is experimentally evaluated based on multiple field multi-robot exploration missions in underground environments, where both ground and aerial robots are deployed, with different sensor configurations.
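The step from an estimated overlap to an initial rigid transform can be illustrated with the classic SVD-based (Kabsch) solution over matched 3D points; the assumption that point correspondences are already available is a simplification, and the resulting transform would then seed the Fast-GICP refinement.

    # Sketch of recovering a rigid transform from matched 3D points (Kabsch/SVD),
    # illustrating how an overlap estimate can yield the initial guess that seeds
    # the registration refinement; correspondences are assumed to be given.
    import numpy as np

    def rigid_transform(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
        """src, dst: (N, 3) corresponding points; returns a 4x4 transform mapping src onto dst."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, mu_d - R @ mu_s
        return T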