millimeter-wave radar
Depth-aware Fusion Method based on Image and 4D Radar Spectrum for 3D Object Detection
Sun, Yue, Qian, Yeqiang, Wang, Chunxiang, Yang, Ming
Safety and reliability are crucial for the public acceptance of autonomous driving. To ensure accurate and reliable environmental perception, intelligent vehicles must exhibit accuracy and robustness in various environments. Millimeter-wave radar, known for its high penetration capability, can operate effectively in adverse weather conditions such as rain, snow, and fog. Traditional 3D millimeter-wave radars can only provide range, Doppler, and azimuth information for objects. Although the recent emergence of 4D millimeter-wave radars has added elevation resolution, the radar point clouds remain sparse due to Constant False Alarm Rate (CFAR) operations. In contrast, cameras offer rich semantic details but are sensitive to lighting and weather conditions. Hence, this paper leverages these two highly complementary and cost-effective sensors, 4D millimeter-wave radar and camera. By integrating 4D radar spectra with depth-aware camera images and employing attention mechanisms, we fuse texture-rich images with depth-rich radar data in the Bird's Eye View (BEV) perspective, enhancing 3D object detection. Additionally, we propose using GAN-based networks to generate depth images from radar spectra in the absence of depth sensors, further improving detection accuracy.
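The attention-based BEV fusion this abstract describes can be illustrated with a minimal sketch: image-derived BEV features attend over radar-derived BEV features on the same grid. This is an assumption-laden toy (single-head attention, flat `(cells, channels)` tensors, residual fusion), not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bev_cross_attention(img_bev, radar_bev):
    """Fuse camera and radar BEV features with single-head cross-attention.

    img_bev:   (N, C) texture-rich image features, one row per BEV cell
    radar_bev: (N, C) depth-rich radar-spectrum features, same grid
    Returns fused (N, C) features.
    """
    C = img_bev.shape[1]
    q, k, v = img_bev, radar_bev, radar_bev      # queries from image, keys/values from radar
    attn = softmax(q @ k.T / np.sqrt(C))         # (N, N) attention weights over radar cells
    return img_bev + attn @ v                    # residual fusion keeps image detail

# Toy grid: 4 BEV cells with 8-dim features
rng = np.random.default_rng(0)
fused = bev_cross_attention(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(fused.shape)
```

In a real detector the queries, keys, and values would each pass through learned projections, and the BEV grid would be two-dimensional; the sketch only shows where the two modalities meet.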
- Asia > China > Shanghai > Shanghai (0.05)
- Europe > Netherlands > South Holland > Delft (0.04)
- North America > United States (0.04)
- Information Technology (0.49)
- Automobiles & Trucks (0.49)
- Transportation > Ground > Road (0.35)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (0.70)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Information Fusion (0.64)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
KAN-RCBEVDepth: A multi-modal fusion algorithm in object detection for autonomous driving
Lai, Zhihao, Liu, Chuanhao, Sheng, Shihui, Zhang, Zhiqiang
Abstract-- Accurate 3D object detection in autonomous driving is critical yet challenging due to occlusions, varying object sizes, and complex urban environments. This paper introduces the KAN-RCBEVDepth method, an innovative approach aimed at enhancing 3D object detection by fusing multimodal sensor data from cameras, LiDAR, and millimeter-wave radar. Our unique Bird's Eye View-based approach significantly improves detection accuracy and efficiency by seamlessly integrating diverse sensor inputs, refining spatial relationship understanding, and optimizing computational procedures. Experimental results show that the proposed method outperforms existing techniques across multiple detection metrics, achieving a higher Mean Distance AP (0.389, 23% improvement), a better ND Score (0.485, 17.1% improvement), and faster evaluation. Accurate 3D object detection is a critical component of autonomous driving systems, enabling vehicles to perceive their environment in three dimensions and precisely identify and localize surrounding objects such as vehicles. As illustrated in Figure 1, these sensors are complementary: LiDAR delivers high-precision 3D point cloud data crucial for accurate depth perception. By leveraging the strengths of each sensor type, sensor fusion mitigates their weaknesses, thereby enhancing the overall performance of 3D object detection systems.
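The Mean Distance AP and ND Score quoted above are nuScenes-style metrics, where detections are matched to ground truth by BEV center distance rather than IoU. A minimal sketch of that matching step follows; the function name, the 2.0 m threshold, and the greedy score-ordered matching are illustrative assumptions, not the paper's exact evaluation code.

```python
import numpy as np

def match_by_center_distance(pred_centers, pred_scores, gt_centers, thresh=2.0):
    """Greedy matching of detections to ground truth by BEV center distance.

    Predictions are visited in descending score order; each ground-truth
    box can be claimed at most once. Returns a boolean TP flag per prediction.
    """
    order = np.argsort(-np.asarray(pred_scores))
    matched = set()
    tp = np.zeros(len(pred_centers), dtype=bool)
    for i in order:
        d = np.linalg.norm(np.asarray(gt_centers) - np.asarray(pred_centers)[i], axis=1)
        d[list(matched)] = np.inf       # already-claimed ground truths are out
        j = int(np.argmin(d))
        if d[j] <= thresh:
            tp[i] = True
            matched.add(j)
    return tp

preds = [(0.3, 0.1), (5.0, 5.0), (10.2, 0.0)]
scores = [0.9, 0.8, 0.7]
gts = [(0.0, 0.0), (10.0, 0.0)]
print(match_by_center_distance(preds, scores, gts))  # [ True False  True]
```

Averaging precision over recall from these TP flags, across several distance thresholds and object classes, yields the mean distance AP figure the abstract reports.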
- Asia > China > Shaanxi Province > Xi'an (0.05)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- North America > United States > Texas (0.04)
- Transportation > Ground > Road (0.91)
- Information Technology > Robotics & Automation (0.81)
- Automobiles & Trucks (0.81)
DIDLM:A Comprehensive Multi-Sensor Dataset with Infrared Cameras, Depth Cameras, LiDAR, and 4D Millimeter-Wave Radar in Challenging Scenarios for 3D Mapping
Gong, WeiSheng, He, Chen, Su, KaiJie, Li, QingYong
This study presents a comprehensive multi-sensor dataset designed for 3D mapping in challenging indoor and outdoor environments. The dataset comprises data from infrared cameras, depth cameras, LiDAR, and 4D millimeter-wave radar, facilitating exploration of advanced perception and mapping techniques. Integration of diverse sensor data enhances perceptual capabilities in extreme conditions such as rain, snow, and uneven road surfaces. The dataset also includes interactive robot data at different speeds indoors and outdoors, providing a realistic background environment. SLAM comparisons between similar routes are conducted, analyzing the influence of different complex scenes on various sensors. Various SLAM algorithms are employed to process the dataset, revealing performance differences among algorithms in different scenarios. In summary, this dataset addresses the problem of data scarcity in special environments, fostering the development of perception and mapping algorithms for extreme conditions. Leveraging multi-sensor data including infrared, depth cameras, LiDAR, 4D millimeter-wave radar, and robot interactions, the dataset advances intelligent mapping and perception capabilities. Our dataset is available at https://github.com/GongWeiSheng/DIDLM.
- Europe > Czechia > South Moravian Region > Brno (0.05)
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
Is LiDAR the Future of the Self-Driving Industry?
Unless you share Musk's skepticism, autonomous driving need not be split into competing technical routes. But by standing in opposition to LiDAR, Tesla may have missed the best window to develop fully autonomous driving. LiDAR is not meant to replace millimeter-wave radar and vision; it is paired with the other sensors as part of a heterogeneous suite. Fusing these three different sensors ensures overall perception safety and improves sensitivity and accuracy.
How Data Labeling Services Empower Self-Driving Industry 2021? -- Part4
Unless you share Musk's skepticism, autonomous driving need not be split into competing technical routes; the technology only needs to be optimized. But by standing in opposition to lidar, Tesla may have missed the best window to develop fully autonomous driving. Lidar is not meant to replace millimeter-wave radar and vision; it is paired with the other sensors as part of a heterogeneous suite. Fusing these three different sensors ensures overall perception safety and improves sensitivity and accuracy. Unlike traditional mechanical rotating lidar, Suteng, a Chinese company, mainly adopts MEMS technology, which offers small size, easy integration, low energy consumption, and low cost.
- Transportation > Ground > Road (0.52)
- Automobiles & Trucks (0.52)
- Information Technology > Robotics & Automation (0.37)