
 solid-state lidar


SE-LIO: Semantics-enhanced Solid-State-LiDAR-Inertial Odometry for Tree-rich Environments

Zhang, Tisheng, Wei, Linfu, Tang, Hailiang, Wang, Liqiang, Yuan, Man, Niu, Xiaoji

arXiv.org Artificial Intelligence

In this letter, we propose SE-LIO, a semantics-enhanced solid-state-LiDAR-inertial odometry for tree-rich environments. Multiple LiDAR frames are first merged and motion-compensated with the inertial navigation system (INS) to increase point-cloud coverage, thus improving the accuracy of semantic segmentation. Unstructured point clouds, such as tree leaves and dynamic objects, are then removed using the semantic information. Furthermore, pole-like point clouds, primarily tree trunks, are modeled as cylinders to improve positioning accuracy. An adaptive piecewise cylinder-fitting method is proposed to accommodate environments with a high prevalence of curved tree trunks. Finally, an iterated error-state Kalman filter (IESKF) is employed for state estimation. Point-to-cylinder and point-to-plane constraints are tightly coupled with the prior constraints provided by the INS to obtain the maximum a posteriori estimate. Targeted experiments are conducted in complex campus and park environments to evaluate the performance of SE-LIO. The proposed methods, including unstructured-point-cloud removal and adaptive cylinder fitting, yield improved accuracy. Specifically, the positioning accuracy of SE-LIO is improved by 43.1% compared to the plane-based LIO.
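The point-to-cylinder constraint mentioned above can be illustrated with a minimal residual computation. The function below is an illustrative sketch, not the authors' implementation, assuming a cylinder parameterized by a point on its axis, a unit axis direction, and a radius:

```python
import numpy as np

def point_to_cylinder_residual(p, c, a, r):
    """Signed distance from point p to an infinite cylinder with a point c
    on its axis, axis direction a, and radius r (>0 outside, <0 inside)."""
    a = a / np.linalg.norm(a)          # ensure the axis is a unit vector
    d = p - c
    radial = d - np.dot(d, a) * a      # component of d orthogonal to the axis
    return np.linalg.norm(radial) - r

# A point 2 m from the axis of a 0.3 m-radius trunk model:
res = point_to_cylinder_residual(np.array([2.0, 0.0, 1.0]),
                                 np.array([0.0, 0.0, 0.0]),
                                 np.array([0.0, 0.0, 1.0]),
                                 0.3)
# residual = 2.0 - 0.3 = 1.7
```

In a filter such as the IESKF, residuals of this form (one per trunk point) would be stacked into the measurement update alongside point-to-plane residuals.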


Towards Robust UAV Tracking in GNSS-Denied Environments: A Multi-LiDAR Multi-UAV Dataset

Catalano, Iacopo, Yu, Xianjia, Queralta, Jorge Pena

arXiv.org Artificial Intelligence

With the increasing prevalence of drones in various industries, the navigation and tracking of unmanned aerial vehicles (UAVs) in challenging environments, particularly GNSS-denied areas, have become crucial concerns. To address this need, we present a novel multi-LiDAR dataset specifically designed for UAV tracking. Our dataset includes data from a spinning LiDAR, two solid-state LiDARs with different Field of View (FoV) and scan patterns, and an RGB-D camera. This diverse sensor suite allows for research on new challenges in the field, including limited FoV adaptability and multi-modality data processing. The dataset facilitates the evaluation of existing algorithms and the development of new ones, paving the way for advances in UAV tracking techniques. Notably, we provide data in both indoor and outdoor environments. We also consider variable UAV sizes, from micro-aerial vehicles to more standard commercial UAV platforms. The outdoor trajectories are selected with close proximity to buildings, targeting research in UAV detection in urban areas, e.g., within counter-UAV systems or docking for UAV logistics. In addition to the dataset, we provide a baseline comparison with recent LiDAR-based UAV tracking algorithms, benchmarking the performance with different sensors, UAVs, and algorithms. Importantly, our dataset shows that current methods have shortcomings and are unable to track UAVs consistently across different scenarios.


Robust Multi-Modal Multi-LiDAR-Inertial Odometry and Mapping for Indoor Environments

Qingqing, Li, Xianjia, Yu, Queralta, Jorge Peña, Westerlund, Tomi

arXiv.org Artificial Intelligence

Integrating multiple LiDAR sensors can significantly enhance a robot's perception of the environment, enabling it to capture adequate measurements for simultaneous localization and mapping (SLAM). Indeed, solid-state LiDARs can add high resolution at low cost to complement traditional spinning LiDARs in robotic applications. However, their reduced field of view (FoV) limits performance, particularly indoors. In this paper, we propose a tightly-coupled multi-modal multi-LiDAR-inertial SLAM system for surveying and mapping tasks. By taking advantage of both solid-state and spinning LiDARs, along with their built-in inertial measurement units (IMUs), we achieve robust, low-drift ego-estimation as well as high-resolution maps in diverse challenging indoor environments (e.g., small, featureless rooms). First, we use spatial-temporal calibration modules to align timestamps and calibrate extrinsic parameters between sensors. Then, we extract two groups of feature points, edge and plane points, from the LiDAR data. Next, with pre-integrated IMU data, an undistortion module is applied to the LiDAR point clouds. Finally, the undistorted point clouds are merged into one point cloud and processed with a sliding-window-based optimization module. Extensive experimental results show that our method performs competitively with state-of-the-art spinning-LiDAR-only or solid-state-LiDAR-only SLAM systems in diverse environments. More results, code, and dataset can be found at https://github.com/TIERS/multi-modal-loam.
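The undistortion step described above can be sketched in a few lines. The code below is a simplified, hypothetical illustration assuming a constant-twist motion model over one sweep and per-point timestamps; the paper's actual module uses pre-integrated IMU data:

```python
import numpy as np

def rodrigues(axis_angle):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def undistort(points, stamps, t0, t1, rot_vec, trans):
    """Re-express each point in the scan-end frame, assuming the sensor
    moved with a constant twist (rot_vec, trans) over the sweep [t0, t1].
    A point captured at time t still has fraction s of the motion ahead
    of it, so we apply that remaining fraction of the transform."""
    out = []
    for p, t in zip(points, stamps):
        s = (t1 - t) / (t1 - t0)       # remaining fraction of the sweep
        R = rodrigues(s * rot_vec)     # interpolated rotation
        out.append(R @ p + s * trans)  # interpolated transform applied
    return np.array(out)

# Two identical returns: one at sweep start, one at sweep end.
pts = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
stamps = np.array([0.0, 0.1])
corrected = undistort(pts, stamps, 0.0, 0.1,
                      rot_vec=np.array([0.0, 0.0, np.pi / 2]),
                      trans=np.array([0.5, 0.0, 0.0]))
# The point at the sweep end is unchanged; the earlier point is
# rotated 90 degrees about z and shifted by the full translation.
```

The same interpolation idea carries over when per-point poses come from IMU pre-integration instead of a constant-twist assumption.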


Robust Extrinsic Self-Calibration of Camera and Solid State LiDAR

Liu, Jiahui, Zhan, Xingqun, Chi, Cheng, Zhang, Xin, Zhai, Chuanrun

arXiv.org Artificial Intelligence

This letter proposes an extrinsic calibration approach for a monocular camera paired with a prism-spinning solid-state LiDAR. We first disclose a unique characteristic of the point cloud produced by its flower-like scanning pattern: vacant points, a type of outlier between foreground targets and background objects. Unlike existing methods that use only depth-continuous measurements, we use depth-discontinuous measurements to retain more valid features and efficiently remove vacant points. The larger number of detected 3D corners thus contains more robust a priori information than usual, which, together with the 2D corners detected by overlapping cameras and constrained by the proposed circularity and rectangularity rules, produces accurate extrinsic estimates. The algorithm is evaluated in real field experiments using both qualitative and quantitative performance criteria and is found to be superior to existing algorithms. The code is available on GitHub.
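The depth-discontinuity idea can be illustrated on a one-dimensional range scan. The helper below is a hypothetical sketch, with the jump threshold `thresh` chosen arbitrarily; the paper operates on the full flower-pattern point cloud:

```python
import numpy as np

def depth_discontinuities(ranges, thresh=0.3):
    """Return indices i where consecutive range returns ranges[i] and
    ranges[i+1] jump by more than thresh, marking foreground/background
    boundaries where vacant points tend to cluster."""
    jumps = np.abs(np.diff(ranges))
    return np.where(jumps > thresh)[0]

# A wall at ~5 m with a foreground target at ~2.1 m in the middle:
scan = np.array([5.0, 5.02, 5.01, 2.1, 2.08, 2.1, 5.03])
idx = depth_discontinuities(scan)
# detects the two boundaries, between indices 2-3 and 5-6
```

Points lying between such boundaries, at ranges inconsistent with either side, would be candidate vacant points to discard before corner extraction.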


An Integrated LiDAR-SLAM System for Complex Environment with Noisy Point Clouds

Liu, Kangcheng

arXiv.org Artificial Intelligence

Current LiDAR SLAM (Simultaneous Localization and Mapping) systems suffer from low accuracy and limited robustness when faced with complicated circumstances. From our experiments, we find that their performance degrades when the noise level in the acquired point clouds is high. Therefore, in this work we propose a general framework for denoising and loop closure in LiDAR SLAM for complex environments with heavy noise and outliers caused by reflective materials. Existing point-cloud denoising approaches are designed mainly for small-scale point clouds and cannot be extended to large-scale scenes. In this work, we first propose a lightweight network for large-scale point-cloud denoising. We then design an efficient loop-closure network for place recognition in global optimization to improve the localization accuracy of the whole system. Finally, we demonstrate through extensive experiments and benchmark studies that our method significantly boosts the localization accuracy of a LiDAR SLAM system faced with noisy point clouds, at a marginal increase in computational cost.
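For context, the classical non-learned baseline for point-cloud denoising is statistical outlier removal (SOR). The sketch below shows that baseline, not the authors' lightweight network, and uses a brute-force neighbor search suitable only for small demos:

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=1.0):
    """Classical SOR baseline: drop points whose mean distance to their
    k nearest neighbors exceeds (global mean + std_ratio * global std)."""
    # Pairwise distances: O(n^2), fine for a demo; use a KD-tree at scale.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Mean distance to the k nearest neighbors (column 0 is self, distance 0).
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]

# A tight 3x3 grid of points plus one far-away reflective-noise outlier:
pts = np.array([[x, y, 0.0] for x in range(3) for y in range(3)]
               + [[50.0, 50.0, 50.0]])
inliers = statistical_outlier_removal(pts, k=4, std_ratio=1.0)
# the grid survives, the outlier is removed
```

Learned denoisers aim to do better than this global-threshold heuristic, particularly on structured noise from reflective surfaces that SOR cannot separate by distance statistics alone.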


CES 2018: Waiting for the $100 Lidar

#artificialintelligence

For the past decade, the easiest way to spot a self-driving car was to look for the distinctive spinning bucket mounted to its roof. The classic lidar design pioneered by Velodyne spins 64 lasers through 360 degrees, producing a three-dimensional view of the car's surroundings from the reflected laser beams. That complicated and bulky set-up has traditionally also been expensive. Velodyne's US $75,000 lidar famously cost several times the sticker price of the Toyota Priuses that formed the nucleus of Google's original self-driving car fleet. Those days are long gone.


Quanergy Announces $250 Solid-State LIDAR for Cars, Robots, and More

IEEE Spectrum Robotics

Yesterday at CES, Quanergy, an automotive startup based in Sunnyvale, Calif., held a press conference to announce the S3, a solid-state LIDAR system designed primarily to bring versatile, comprehensive, and affordable sensing to autonomous cars. The S3 is small, has no moving parts, and in production volume will be US $250 or less. According to Quanergy, the S3 is better than traditional LIDAR systems in every single way, and will make it easier and cheaper for robots of all kinds to sense what's going on in the world around them. LIDAR systems work by firing laser pulses out into the world and then watching to see if the light reflects off of something. By starting a timer when the pulse goes out and then stopping the timer when the sensor sees a reflection, the LIDAR can do some math to figure out how far away the source of the reflection is. And by keeping careful track of where it's pointing the laser, the LIDAR gets all of the data that it needs to place the point in 3D space.
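The "some math" in the time-of-flight description above is just the speed of light times the round-trip time, halved because the pulse travels out and back:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """Range implied by a LIDAR pulse's round-trip time: the light covers
    the distance twice (out and back), so divide the total path by two."""
    return C * round_trip_seconds / 2.0

# A reflection arriving 200 ns after the pulse fired:
d = tof_distance(200e-9)
# d is roughly 30 m
```

Combining this range with the known laser pointing direction (azimuth and elevation) gives the 3D position of each return.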