A Multi-Robot Platform for Robotic Triage Combining Onboard Sensing and Foundation Models
Hughes, Jason, Hussing, Marcel, Zhang, Edward, Kannapiran, Shenbagaraj, Caswell, Joshua, Chaney, Kenneth, Deng, Ruichen, Feehery, Michaela, Kratimenos, Agelos, Li, Yi Fan, Major, Britny, Sanchez, Ethan, Shrote, Sumukh, Wang, Youkang, Wang, Jeremy, Zein, Daudi, Zhang, Luying, Zhang, Ruijun, Zhou, Alex, Zhouga, Tenzi, Cannon, Jeremy, Qasim, Zaffir, Yelon, Jay, Cladera, Fernando, Daniilidis, Kostas, Taylor, Camillo J., Eaton, Eric
Abstract-- This report presents a heterogeneous robotic system designed for remote primary triage in mass-casualty incidents (MCIs). The system employs a coordinated air-ground team of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) to locate victims, assess their injuries, and prioritize medical assistance without risking the lives of first responders. The UAVs identify and provide overhead views of casualties, while UGVs equipped with specialized sensors measure vital signs and detect and localize physical injuries. Unlike previous work that focused on exploration or limited medical evaluation, this system addresses the complete triage process: victim localization, vital sign measurement, injury severity classification, mental status assessment, and data consolidation for first responders. Developed as part of the DARPA Triage Challenge, this approach demonstrates how multi-robot systems can augment human capabilities in disaster response scenarios to maximize lives saved.
I. INTRODUCTION
Robotics has long sought to augment human capabilities in hazardous scenarios. Mass-casualty incidents (MCIs), such as those resulting from natural disasters, bombings, plane crashes, or industrial chemical spills, present an opportunity for robotic systems to assist first responders. The critical first step of providing medical assistance during MCIs is primary triage: the initial process of locating victims at the site of the MCI and assessing the severity of their injuries to prioritize treatment, which is essential to optimizing survival outcomes. Traditionally, primary triage relies on human responders who may face significant risk and information overload [1], underscoring the potential for automated systems to mitigate these challenges. While prior efforts have explored the use of air-ground robotic teams for search and exploration in disaster zones [2]-[5], few systems have focused specifically on rapid triage.
Existing approaches typically solve parts of the problem in isolation without integrating comprehensive triage functions. For example, air-ground teams have also been developed to find and localize objects of interest [3], [6].
Authors are with the GRASP Lab, School of Engineering and Applied Sciences, University of Pennsylvania. Authors are with the Perelman School of Medicine, University of Pennsylvania. This work was supported by the DARPA Triage Challenge under grant #HR001123S0011.
NeuroHJR: Hamilton-Jacobi Reachability-based Obstacle Avoidance in Complex Environments with Physics-Informed Neural Networks
Halder, Granthik, Majumder, Rudrashis, R, Rakshith M, Shah, Rahi, Sundaram, Suresh
Autonomous ground vehicles (AGVs) must navigate safely in cluttered environments while accounting for complex dynamics and environmental uncertainty. Hamilton-Jacobi Reachability (HJR) offers formal safety guarantees through the computation of forward and backward reachable sets, but its application is hindered by poor scalability in environments with numerous obstacles. In this paper, we present a novel framework called NeuroHJR that leverages Physics-Informed Neural Networks (PINNs) to approximate the HJR solution for real-time obstacle avoidance. By embedding system dynamics and safety constraints directly into the neural network loss function, our method bypasses the need for grid-based discretization and enables efficient estimation of reachable sets in continuous state spaces. We demonstrate the effectiveness of our approach through simulation results in densely cluttered scenarios, showing that it achieves safety performance comparable to that of classical HJR solvers while significantly reducing the computational cost. This work provides a new step toward real-time, scalable deployment of reachability-based obstacle avoidance in robotics.
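The abstract above describes embedding the reachability PDE into a neural network's training loss. As a minimal illustration of that idea (not the NeuroHJR implementation), the sketch below computes the residual of the Hamilton-Jacobi-Isaacs variational inequality for a toy single-integrator avoiding a circular obstacle, using finite-difference gradients in place of automatic differentiation; a PINN would minimize the mean squared residual over sampled collocation points. The dynamics, obstacle, and value-function forms here are illustrative assumptions.

```python
import numpy as np

def g(x):
    # Signed distance to a circular obstacle of radius 0.5 at the origin:
    # negative inside the unsafe set, positive outside.
    return np.linalg.norm(x) - 0.5

def hji_residual(V, x, t, eps=1e-4, u_max=1.0):
    """Residual of the HJI variational inequality for an avoid problem,
        min( dV/dt + max_u <grad V, u>, g(x) - V(x, t) ) = 0,
    for single-integrator dynamics xdot = u, |u| <= u_max. The avoiding
    control maximizes <grad V, u>, i.e. steers away from the obstacle.
    Gradients are taken by central finite differences for simplicity."""
    dVdt = (V(x, t + eps) - V(x, t - eps)) / (2 * eps)
    grad = np.array([
        (V(x + eps * e, t) - V(x - eps * e, t)) / (2 * eps)
        for e in np.eye(len(x))
    ])
    ham = dVdt + u_max * np.linalg.norm(grad)
    return min(ham, g(x) - V(x, t))

def pinn_style_loss(V, samples):
    # Mean squared PDE residual over collocation points: the physics term
    # a PINN adds to its training loss.
    return float(np.mean([hji_residual(V, x, t) ** 2 for x, t in samples]))
```

For this toy system the exact value function is V(x, t) = g(x) (the avoiding control can always move radially outward), so its residual loss is near zero, while a perturbed candidate scores worse; that gap is what gradient descent on the loss exploits.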
In Ukraine's 'kill-zone', robots are a lifeline to troops trapped on perilous eastern front
"The toy is delivered," a Ukrainian soldier whispers into the radio. In the dead of night, he and his partner move quickly to roll out their cargo from a van. Speed is crucial as they are within the range of deadly Russian drones. The fifth brigade's new toy is an unmanned ground vehicle (UGV), a robot that provides a lifeline for Ukrainian troops at the front in Pokrovsk and Myrnograd, a strategic hub in eastern Ukraine. Russian forces are relentlessly trying to cut off Ukraine's supply routes in the area.
Long Duration Inspection of GNSS-Denied Environments with a Tethered UAV-UGV Marsupial System
Martínez-Rozas, Simón, Alejo, David, Carpio, José Javier, Caballero, Fernando, Merino, Luis
Unmanned Aerial Vehicles (UAVs) have become essential tools in inspection and emergency response operations due to their high maneuverability and ability to access hard-to-reach areas. However, their limited battery life significantly restricts their use in long-duration missions. This paper presents a tethered marsupial robotic system composed of a UAV and an Unmanned Ground Vehicle (UGV), specifically designed for autonomous, long-duration inspection tasks in Global Navigation Satellite System (GNSS)-denied environments. The system extends the UAV's operational time by supplying power through a tether connected to high-capacity battery packs carried by the UGV. Our work details the hardware architecture based on off-the-shelf components to ensure replicability and describes our full-stack software framework used by the system, which is composed of open-source components and built upon the Robot Operating System (ROS). The proposed software architecture enables precise localization using a Direct LiDAR Localization (DLL) method and ensures safe path planning and coordinated trajectory tracking for the integrated UGV-tether-UAV system. We validate the system through three sets of field experiments involving (i) three manual flight endurance tests to estimate the operational duration, (ii) three experiments for validating the localization and the trajectory tracking systems, and (iii) three executions of an inspection mission to demonstrate autonomous inspection capabilities. The results of the experiments confirm the robustness and autonomy of the system in GNSS-denied environments. Finally, all experimental data have been made publicly available to support reproducibility and to serve as a common open dataset for benchmarking.
BIM-Discrepancy-Driven Active Sensing for Risk-Aware UAV-UGV Navigation
Mojtahedi, Hesam, Akhavian, Reza
This paper presents a BIM-discrepancy-driven active sensing framework for cooperative navigation between unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) in dynamic construction environments. Traditional navigation approaches rely on static Building Information Modeling (BIM) priors or limited onboard perception. In contrast, our framework continuously fuses real-time LiDAR data from aerial and ground robots with BIM priors to maintain an evolving 2D occupancy map. We quantify navigation safety through a unified corridor-risk metric integrating occupancy uncertainty, BIM-map discrepancy, and clearance. When risk exceeds safety thresholds, the UAV autonomously re-scans affected regions to reduce uncertainty and enable safe replanning. Compared to frontier-based exploration, our approach achieves similar uncertainty reduction in half the mission time. These results demonstrate that integrating BIM priors with risk-adaptive aerial sensing enables scalable, uncertainty-aware autonomy for construction robotics.
Introduction
Construction sites are among the most dynamic, unstructured, and safety-critical environments for autonomous robots. Unlike factory floors or structured indoor spaces, these environments are marked by continual change. New buildings are erected, materials are relocated, and the movement of heavy machinery and workers can be unpredictable. Such conditions make autonomous navigation particularly challenging. Construction 4.0 [1], emphasizing automation and digitalization, is moving robotics from trial phases to regular use on construction sites.
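The abstract names three ingredients of the corridor-risk metric (occupancy uncertainty, BIM-map discrepancy, clearance) without giving its form. A plausible sketch, assuming a simple weighted combination with binary-entropy uncertainty and an absolute-difference discrepancy, could look as follows; the weights, thresholds, and term definitions are illustrative assumptions, not the paper's formula.

```python
import math

def corridor_risk(p_occ, bim_occ, clearance_m,
                  w_unc=0.4, w_disc=0.4, w_clear=0.2, d_safe=2.0):
    """Hypothetical unified risk for one corridor cell.
    p_occ       -- occupancy probability from the fused live LiDAR map (0..1)
    bim_occ     -- occupancy predicted by the BIM prior (0 or 1)
    clearance_m -- distance to the nearest occupied cell, in metres
    The three terms mirror the abstract's ingredients; their exact forms
    and the weights are assumptions for illustration."""
    # Occupancy uncertainty: binary entropy, maximal at p = 0.5.
    p = min(max(p_occ, 1e-9), 1 - 1e-9)
    uncertainty = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    # BIM-map discrepancy: how far the live map departs from the prior.
    discrepancy = abs(p_occ - bim_occ)
    # Clearance penalty: rises as free space shrinks below d_safe metres.
    clearance_pen = max(0.0, 1.0 - clearance_m / d_safe)
    return w_unc * uncertainty + w_disc * discrepancy + w_clear * clearance_pen

def needs_rescan(corridor_risks, threshold=0.5):
    # The UAV re-scans when any corridor cell exceeds the risk threshold.
    return max(corridor_risks) > threshold
```

A cell that is highly uncertain, contradicts the BIM prior, and sits in a tight passage scores high and triggers an aerial re-scan; a confidently free, BIM-consistent cell with ample clearance scores near zero.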
Semi-distributed Cross-modal Air-Ground Relative Localization
Lu, Weining, Bin, Deer, Ma, Lian, Ma, Ming, Ma, Zhihao, Chen, Xiangyang, Wang, Longfei, Feng, Yixiao, Jiang, Zhouxian, Shi, Yongliang, Liang, Bin
Efficient, accurate, and flexible relative localization is crucial in air-ground collaborative tasks. However, current approaches for robot relative localization are primarily realized in the form of distributed multi-robot SLAM systems with the same sensor configuration, which are tightly coupled with the state estimation of all robots, limiting both flexibility and accuracy. To this end, we fully leverage the high capacity of the Unmanned Ground Vehicle (UGV) to integrate multiple sensors, enabling a semi-distributed cross-modal air-ground relative localization framework. In this work, both the UGV and the Unmanned Aerial Vehicle (UAV) independently perform SLAM while extracting deep learning-based keypoints and global descriptors, which decouples the relative localization from the state estimation of all agents. The UGV employs a local Bundle Adjustment (BA) with LiDAR, camera, and an IMU to rapidly obtain accurate relative pose estimates. The BA process adopts sparse keypoint optimization and is divided into two stages: first, optimizing camera poses interpolated from LiDAR-Inertial Odometry (LIO), followed by estimating the relative camera poses between the UGV and UAV. Additionally, we implement an incremental loop closure detection algorithm using deep learning-based descriptors to maintain and retrieve keyframes efficiently. Experimental results demonstrate that our method achieves outstanding performance in both accuracy and efficiency. Unlike traditional multi-robot SLAM approaches that transmit images or point clouds, our method only transmits keypoint pixels and their descriptors, effectively constraining the communication bandwidth under 0.3 Mbps. Code and data will be made publicly available at https://github.com/Ascbpiac/cross-model-relative-localization.git.
GLIDE: A Coordinated Aerial-Ground Framework for Search and Rescue in Unknown Environments
Farrell, Seth, Li, Chenghao, Yu, Hongzhan, Mojtahedi, Hesam, Gao, Sicun, Christensen, Henrik I.
Abstract-- We present a cooperative aerial-ground search-and-rescue (SAR) framework that pairs two unmanned aerial vehicles (UAVs) with an unmanned ground vehicle (UGV) to achieve rapid victim localization and obstacle-aware navigation in unknown environments. In our framework, a goal-searching UAV executes real-time onboard victim detection and georeferencing to nominate goals for the ground platform, while a terrain-scouting UAV flies ahead of the UGV's planned route to provide mid-level traversability updates. The UGV fuses aerial cues with local sensing to perform time-efficient A* planning and continuous replanning as information arrives. Additionally, we present a hardware demonstration (using a GEM e6 golf cart as the UGV and two X500 UAVs) to evaluate end-to-end SAR mission performance and include simulation ablations to assess the planning stack in isolation from detection. Empirical results demonstrate that explicit role separation across UAVs, coupled with terrain scouting and guided planning, improves reach time and navigation safety in time-critical SAR missions.
Search and rescue (SAR) operations stand to benefit from recent advances in autonomous aerial and ground robotics. Unmanned Aerial Vehicles (UAVs) enable rapid, large-area coverage due to their agility and mobility. The adoption of drones across civilian and military applications has highlighted advantages in speed and perspective.
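The abstract's UGV re-runs A* planning as aerial traversability updates arrive. As a minimal sketch of that planning core (a textbook A* on a 4-connected occupancy grid, not the GLIDE codebase), the planner below can simply be invoked again whenever the scouting UAV flips cells between free and occupied.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Cells are (row, col) tuples; returns the path from start to goal
    as a list of cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:        # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:             # reconstruct path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None
```

Continuous replanning then amounts to updating `grid` from the fused aerial and local sensing and calling `astar` again from the UGV's current cell.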
An Adaptive Coverage Control Approach for Multiple Autonomous Off-road Vehicles in Dynamic Agricultural Fields
Ahmadi, Sajad, Davoodi, Mohammadreza, Velni, Javad Mohammadpour
This paper presents an adaptive coverage control method for a fleet of off-road and Unmanned Ground Vehicles (UGVs) operating in dynamic (time-varying) agricultural environments. Traditional coverage control approaches often assume static conditions, making them unsuitable for real-world farming scenarios where obstacles, such as moving machinery and uneven terrains, create continuous challenges. To address this, we propose a real-time path planning framework that integrates Unmanned Aerial Vehicles (UAVs) for obstacle detection and terrain assessment, allowing UGVs to dynamically adjust their coverage paths. The environment is modeled as a weighted directed graph, where the edge weights are continuously updated based on the UAV observations to reflect obstacle motion and terrain variations. The proposed approach incorporates Voronoi-based partitioning, adaptive edge weight assignment, and cost-based path optimization to enhance navigation efficiency. Simulation results demonstrate the effectiveness of the proposed method in improving path planning, reducing traversal costs, and maintaining robust coverage in the presence of dynamic obstacles and muddy terrains.
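The Voronoi-based partitioning step mentioned above has a compact discrete form: assign every field cell to its nearest UGV, then move each UGV toward the importance-weighted centroid of its cell (a Lloyd-style update). The sketch below illustrates that step only, under the assumption of Euclidean distances and a given weight field; the paper's graph-based edge weights and cost-based path optimization are not modeled here.

```python
import numpy as np

def voronoi_assign(points, agents):
    """Discrete Voronoi partition: label each field cell (row of `points`)
    with the index of the nearest UGV (row of `agents`)."""
    d = np.linalg.norm(points[:, None, :] - agents[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def weighted_centroids(points, weights, labels, n_agents):
    """Lloyd-style update target: the importance-weighted centroid of each
    UGV's Voronoi cell. `weights` could encode UAV-observed terrain cost or
    coverage importance; every agent must own at least one point."""
    cents = []
    for k in range(n_agents):
        m = labels == k
        w = weights[m]
        cents.append((points[m] * w[:, None]).sum(axis=0) / w.sum())
    return np.array(cents)
```

Iterating assign-then-recenter drives the fleet toward a balanced coverage configuration; in a dynamic field, re-running the assignment with updated weights lets the partition track moving obstacles and terrain changes.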
UAV See, UGV Do: Aerial Imagery and Virtual Teach Enabling Zero-Shot Ground Vehicle Repeat
Fisker, Desiree, Krawciw, Alexander, Lilge, Sven, Greeff, Melissa, Barfoot, Timothy D.
This paper presents Virtual Teach and Repeat (VirT&R): an extension of the Teach and Repeat (T&R) framework that enables GPS-denied, zero-shot autonomous ground vehicle navigation in untraversed environments. VirT&R leverages aerial imagery captured for a target environment to train a Neural Radiance Field (NeRF) model so that dense point clouds and photo-textured meshes can be extracted. The NeRF mesh is used to create a high-fidelity simulation of the environment for piloting an unmanned ground vehicle (UGV) to virtually define a desired path. The mission can then be executed in the actual target environment by using NeRF-generated point cloud submaps associated along the path and an existing LiDAR Teach and Repeat (LT&R) framework. We benchmark the repeatability of VirT&R on over 12 km of autonomous driving data using physical markings that allow a sim-to-real lateral path-tracking error to be obtained and compared with LT&R. VirT&R achieved measured root mean squared errors (RMSE) of 19.5 cm and 18.4 cm in two different environments, which are slightly less than one tire width (24 cm) on the robot used for testing, and respective maximum errors were 39.4 cm and 47.6 cm. This was done using only the NeRF-derived teach map, demonstrating that VirT&R has similar closed-loop path-tracking performance to LT&R but does not require a human to manually teach the path to the UGV in the actual environment.
I. INTRODUCTION
Enabling a higher level of autonomous navigation in remote, harsh, and potentially hazardous environments is a critical objective for many Unmanned Ground Vehicle (UGV) operations, as minimizing human presence in such scenarios reduces risk and lowers costs. Visual Teach and Repeat (VT&R) [1] is a complete autonomy stack that enables long-range navigation along previously taught routes, demonstrated on a UGV with 3D-LiDAR [2]-[4], Radar [5], and RGB vision sensors [1], as well as on a UAV with an RGB vision sensor [6], [7].
While Teach and Repeat (T&R) has demonstrated considerable success, it currently requires a human operator to manually guide the vehicle in the environment during the teaching phase to create a map and ensure traversability.
RaGNNarok: A Light-Weight Graph Neural Network for Enhancing Radar Point Clouds on Unmanned Ground Vehicles
Hunt, David, Luo, Shaocheng, Hallyburton, Spencer, Nillongo, Shafii, Li, Yi, Chen, Tingjun, Pajic, Miroslav
Low-cost indoor mobile robots have gained popularity with the increasing adoption of automation in homes and commercial spaces. However, existing lidar and camera-based solutions have limitations such as poor performance in visually obscured environments, high computational overhead for data processing, and high costs for lidars. In contrast, mmWave radar sensors offer a cost-effective and lightweight alternative, providing accurate ranging regardless of visibility. However, existing radar-based localization suffers from sparse point cloud generation, noise, and false detections. Thus, in this work, we introduce RaGNNarok, a real-time, lightweight, and generalizable graph neural network (GNN)-based framework to enhance radar point clouds, even in complex and dynamic environments. With an inference time of just 7.3 ms on the low-cost Raspberry Pi 5, RaGNNarok runs efficiently even on such resource-constrained devices, requiring no additional computational resources. We evaluate its performance across key tasks, including localization, SLAM, and autonomous navigation, in three different environments. Our results demonstrate strong reliability and generalizability, making RaGNNarok a robust solution for low-cost indoor mobile robots.
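The abstract describes a GNN that enhances sparse radar point clouds. As a minimal illustration of the underlying mechanism (not the trained RaGNNarok model), the sketch below builds a k-nearest-neighbour graph over radar detections and applies one mean-aggregation message-passing layer, so each point's feature is refined using its neighbours; the weight matrices and feature choice are illustrative assumptions.

```python
import numpy as np

def knn_graph(points, k=2):
    """Index array of each point's k nearest neighbours (excluding itself),
    the graph over which messages are passed."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)        # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]

def gnn_layer(feats, neighbors, W_self, W_nbr):
    """One mean-aggregation message-passing layer (GraphSAGE-style):
    new_feat_i = ReLU( feat_i @ W_self + mean_j feat_j @ W_nbr ).
    A trained stack of such layers could, e.g., denoise point positions
    or score detections as real vs. spurious."""
    agg = feats[neighbors].mean(axis=1)            # mean over neighbours
    return np.maximum(0.0, feats @ W_self + agg @ W_nbr)
```

The appeal for a Raspberry-Pi-class platform is that each layer is a handful of small matrix products over a sparse neighbourhood graph, which keeps inference cheap regardless of scene complexity.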