Manboformer: Learning Gaussian Representations via Spatial-temporal Attention Mechanism
Zhao, Ziyue, Qi, Qining, Ma, Jianfa
–arXiv.org Artificial Intelligence
In the field of 3D semantic occupancy prediction for autonomous driving, GaussianFormer proposed an object-centric alternative to voxel-based grid prediction: it describes scenes with sparse 3D semantic Gaussians, which lowers memory requirements. Each 3D Gaussian represents a flexible region of interest together with its semantic features, and both are iteratively refined by an attention mechanism. In experiments, we found that the number of Gaussians this method requires exceeds the query resolution of the original dense grid network, which impairs performance. We therefore optimize GaussianFormer by exploiting previously unused temporal information: we adapt the spatial-temporal self-attention mechanism from earlier grid-based occupancy networks and integrate it into GaussianFormer. Experiments are conducted on the NuScenes dataset and are currently underway.
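The core idea the abstract describes, letting each Gaussian's feature attend over Gaussians from both the current and the previous frame, can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: it uses a single head, omits the learned query/key/value projections and positional encodings a real model would have, and the function names are my own.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(curr_feats, prev_feats):
    """One refinement step for N Gaussian features of dimension d.

    Each current-frame Gaussian query attends over the pooled set of
    current-frame and previous-frame Gaussian features, then a residual
    connection keeps the original feature. (Simplified sketch: no learned
    projections, single head.)
    """
    d = curr_feats.shape[-1]
    keys = np.concatenate([curr_feats, prev_feats], axis=0)   # (2N, d)
    scores = curr_feats @ keys.T / np.sqrt(d)                 # (N, 2N)
    attn = softmax(scores, axis=-1)                           # attention weights
    return curr_feats + attn @ keys                           # residual update

rng = np.random.default_rng(0)
curr = rng.normal(size=(8, 16))   # 8 Gaussians, 16-dim semantic features
prev = rng.normal(size=(8, 16))   # same Gaussians at the previous timestep
refined = temporal_self_attention(curr, prev)
print(refined.shape)  # (8, 16)
```

In the actual method, such a step would be stacked and interleaved with updates to each Gaussian's mean, scale, and semantics, as in GaussianFormer's iterative refinement.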
Mar-6-2025
- Country:
- Asia (0.14)
- Genre:
- Research Report (0.64)
- Industry:
- Information Technology (0.35)
- Transportation > Ground
- Road (0.35)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning
- Neural Networks (1.00)
- Statistical Learning (1.00)
- Representation & Reasoning > Spatial Reasoning (0.70)
- Robots (1.00)
- Vision (1.00)