lidar
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area (0.46)
- Health & Medicine > Consumer Health (0.46)
Forget About the LiDAR: Self-Supervised Depth Estimators with MED Probability Volumes
Self-supervised depth estimators have recently shown results comparable to the supervised methods on the challenging single image depth estimation (SIDE) task, by exploiting the geometrical relations between target and reference views in the training data. However, previous methods usually learn forward or backward image synthesis, but not depth estimation, as they cannot effectively neglect occlusions between the target and the reference images. Previous works rely on rigid photometric assumptions or on the SIDE network to infer depth and occlusions, resulting in limited performance. On the other hand, we propose a method to Forget About the LiDAR (FAL), with Mirrored Exponential Disparity (MED) probability volumes for the training of monocular depth estimators from stereo images. Our MED representation allows us to obtain geometrically inspired occlusion maps with our novel Mirrored Occlusion Module (MOM), which does not impose a learning burden on our FAL-net.
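The abstract's core idea is a probability volume over exponentially spaced disparity hypotheses that is collapsed into a disparity map. As a rough, self-contained illustration of that representation (the spacing formula and function names below are our own sketch, not FAL-net's exact definition):

```python
import numpy as np

def exp_disparity_levels(d_min, d_max, n):
    """n disparity hypotheses spaced geometrically between d_min and d_max,
    so near-range disparities are sampled more densely than far-range ones."""
    return d_min * (d_max / d_min) ** (np.arange(n) / (n - 1))

def expected_disparity(logits, levels):
    """Collapse an (n, H, W) score volume into an (H, W) disparity map:
    softmax over the hypothesis axis, then a probability-weighted sum."""
    p = np.exp(logits - logits.max(axis=0, keepdims=True))
    p /= p.sum(axis=0, keepdims=True)
    return np.tensordot(levels, p, axes=1)
```

The mirrored volume of the MED representation would reuse the same levels for the flipped reference view; the occlusion reasoning of the MOM is beyond this sketch.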
RLCNet: An end-to-end deep learning framework for simultaneous online calibration of LiDAR, RADAR, and Camera
Cholakkal, Hafeez Husain, Arrigoni, Stefano, Braghin, Francesco
AUTONOMOUS vehicles are poised to revolutionize transportation by improving road safety, reducing traffic congestion, and increasing mobility convenience [1]. To perceive and interact with their environment accurately, these vehicles rely on a combination of complementary sensors, including LiDAR, RADAR, and cameras. Each sensor offers unique advantages: cameras capture rich visual detail, LiDAR provides precise 3D spatial measurements, and RADAR performs robustly under adverse weather conditions [2]. Sensor fusion leverages the strengths of these modalities to ensure redundancy and resilience, allowing the vehicle to maintain accurate perception in diverse and dynamic environments [3]. A critical component of sensor fusion is extrinsic calibration, which involves the determination of the relative positions and orientations of sensors in a common coordinate frame. However, maintaining precise calibration over time is a persistent challenge. Factors such as mechanical vibrations, temperature changes, and minor collisions can lead to sensor drift, where even small misalignments in sensor orientation or position can result in substantial perception errors, potentially compromising vehicle safety.
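The extrinsic calibration described here is a rigid transform between sensor frames. A minimal sketch of applying such a transform, assuming a known 4x4 homogeneous matrix `T_cam_lidar` (a hypothetical name) that maps LiDAR-frame points into the camera frame:

```python
import numpy as np

def transform_points(T_cam_lidar, points_lidar):
    """Map (N, 3) LiDAR-frame points into the camera frame with a 4x4
    homogeneous extrinsic (rotation + translation)."""
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    return (homo @ T_cam_lidar.T)[:, :3]

# A drifted extrinsic is this same object with a slightly wrong rotation or
# translation; online calibration estimates a correction to T_cam_lidar
# from the sensor streams themselves.
T = np.eye(4)
T[:3, 3] = [0.1, -0.2, 1.5]   # example lever arm, made up for illustration
pts_cam = transform_points(T, np.zeros((1, 3)))
```

With the identity rotation above, the LiDAR origin simply lands at the translation offset in the camera frame.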
TEMPO-VINE: A Multi-Temporal Sensor Fusion Dataset for Localization and Mapping in Vineyards
Martini, Mauro, Ambrosio, Marco, Vilella-Cantos, Judith, Navone, Alessandro, Chiaberge, Marcello
In recent years, precision agriculture has been introducing groundbreaking innovations in the field, with a strong focus on automation. However, research studies in robotics and autonomous navigation often rely on controlled simulations or isolated field trials. The absence of a realistic common benchmark represents a significant limitation for the diffusion of robust autonomous systems under real complex agricultural conditions. Vineyards pose significant challenges due to their dynamic nature, and they are increasingly drawing attention from both academic and industrial stakeholders interested in automation. In this context, we introduce the TEMPO-VINE dataset, a large-scale multi-temporal dataset specifically designed for evaluating sensor fusion, simultaneous localization and mapping (SLAM), and place recognition techniques within operational vineyard environments. TEMPO-VINE is the first multi-modal public dataset that brings together data from heterogeneous LiDARs of different price levels, AHRS, RTK-GPS, and cameras in real trellis and pergola vineyards, with multiple rows exceeding 100 m in length. In this work, we address a critical gap in the landscape of agricultural datasets by providing researchers with a comprehensive data collection and ground truth trajectories in different seasons, vegetation growth stages, terrain and weather conditions. The sequence paths with multiple runs and revisits will foster the development of sensor fusion, localization, mapping and place recognition solutions for agricultural fields. The dataset, the processing tools and the benchmarking results will be available at the dedicated webpage upon acceptance.
nuScenes Revisited: Progress and Challenges in Autonomous Driving
Fong, Whye Kit, Liong, Venice Erin, Tan, Kok Seang, Caesar, Holger
Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS) have been revolutionized by Deep Learning. As a data-driven approach, Deep Learning relies on vast amounts of driving data, typically labeled in great detail. As a result, datasets, alongside hardware and algorithms, are foundational building blocks for the development of AVs. In this work we revisit one of the most widely used autonomous driving datasets: the nuScenes dataset. nuScenes exemplifies key trends in AV development, being the first dataset to include radar data, to feature diverse urban driving scenes from two continents, and to be collected using a fully autonomous vehicle operating on public roads, while also promoting multi-modal sensor fusion, standardized benchmarks, and a broad range of tasks including perception, localization \& mapping, prediction and planning. We provide an unprecedented look into the creation of nuScenes, as well as its extensions nuImages and Panoptic nuScenes, summarizing many technical details that have hitherto not been revealed in academic publications. Furthermore, we trace how the influence of nuScenes impacted a large number of other datasets that were released later and how it defined numerous standards that are used by the community to this day. Finally, we present an overview of both official and unofficial tasks using the nuScenes dataset and review major methodological developments, thereby offering a comprehensive survey of the autonomous driving literature, with a particular focus on nuScenes.
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
- Automobiles & Trucks (1.00)
Visibility-aware Cooperative Aerial Tracking with Decentralized LiDAR-based Swarms
Yin, Longji, Ren, Yunfan, Zhu, Fangcheng, Shi, Liuyu, Kong, Fanze, Tang, Benxu, Liu, Wenyi, Lyu, Ximin, Zhang, Fu
Abstract--Autonomous aerial tracking with drones offers vast potential for surveillance, cinematography, and industrial inspection applications. While single-drone tracking systems have been extensively studied, swarm-based target tracking remains underexplored, despite its unique advantages of distributed perception, fault-tolerant redundancy, and multidirectional target coverage. To bridge this gap, we propose a novel decentralized LiDAR-based swarm tracking framework that enables visibility-aware, cooperative target tracking in complex environments, while fully harnessing the unique capabilities of swarm systems. To address visibility, we introduce a novel Spherical Signed Distance Field (SSDF)-based metric for 3-D environmental occlusion representation, coupled with an efficient algorithm that enables real-time onboard SSDF updating. A general Field-of-View (FOV) alignment cost supporting heterogeneous LiDAR configurations is proposed for consistent target observation. These innovations are integrated into a hierarchical planner, combining a kinodynamic front-end searcher with a spatiotemporal SE(3) back-end optimizer to generate collision-free, visibility-optimized trajectories. The proposed approach undergoes thorough evaluation through comprehensive benchmark comparisons and ablation studies. Deployed on heterogeneous LiDAR swarms, our fully decentralized implementation features collaborative perception, distributed planning, and dynamic swarm reconfigurability. Validated through rigorous real-world experiments in cluttered outdoor environments, the proposed system demonstrates robust cooperative tracking of agile targets (drones, humans) while achieving superior visibility maintenance. This work establishes a systematic solution for swarm-based target tracking, and its source code will be released to benefit the community.
Recent studies highlight the unique suitability of UAVs for tracking dynamic targets in complex environments, owing to their highly agile three-dimensional (3-D) maneuverability. While substantial progress has been made in single-UAV tracking, swarm-based aerial tracking remains underexplored. The authors are with the Department of Mechanical Engineering, The University of Hong Kong, Hong Kong. X. Lyu is with the School of Intelligent System Engineering, Sun Yat-sen University, Shenzhen, China. A swarm of four autonomous drones cooperatively tracks a human runner using heterogeneous LiDAR configurations: one upward-facing Mid360 LiDAR (marked by blue dashed lines), one downward-facing Mid360 LiDAR (green dashed lines), and two Avia LiDARs (red dashed lines). The swarm forms a 3-D distribution around the target, with each tracker positioned optimally to suit its FOV settings. Effective agile aerial tracking with autonomous swarms primarily relies on three criteria: visibility, coordination, and portability.
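The SSDF metric itself encodes occlusion in spherical coordinates around each drone; the underlying question it answers, however, is simply whether the line from a tracker to the target is blocked. A minimal line-of-sight sketch, with a hypothetical occlusion oracle standing in for the paper's SSDF query:

```python
import numpy as np

def line_of_sight(occluded, start, end, step=0.25):
    """Sample points along the segment start -> end every `step` metres and
    query the occlusion oracle at each one. `occluded(p) -> bool` is a
    stand-in for the SSDF query; any map representation works here."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    n = max(int(np.linalg.norm(end - start) / step), 1)
    return not any(occluded(start + t * (end - start))
                   for t in np.linspace(0.0, 1.0, n + 1))

# A wall occupying 1 m < x < 2 m blocks the view from the origin to (3, 0, 0).
wall = lambda p: 1.0 < p[0] < 2.0
```

A visibility-aware planner would penalize candidate tracker positions for which this check fails, which is roughly the role the SSDF-based cost plays inside the hierarchical planner described above.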
- Transportation (0.67)
- Aerospace & Defense (0.67)
- Information Technology > Robotics & Automation (0.46)