3DRot: 3D Rotation Augmentation for RGB-Based 3D Tasks
Shitian Yang, Deyu Li, Xiaoke Jiang, Lei Zhang
arXiv.org Artificial Intelligence
RGB-based 3D tasks, e.g., 3D detection, depth estimation, and 3D keypoint estimation, still suffer from scarce, expensive annotations and a thin augmentation toolbox, since most image transforms, including resize and rotation, disrupt geometric consistency. In this paper, we introduce 3DRot, a plug-and-play augmentation that rotates and mirrors images about the camera's optical center while synchronously updating the RGB image, camera intrinsics, object poses, and 3D annotations, preserving projective geometry and achieving geometry-consistent rotations and reflections without relying on any scene depth. We validate 3DRot on a classical 3D task, monocular 3D detection. On the SUN RGB-D dataset, 3DRot raises $IoU_{3D}$ from 43.21 to 44.51, cuts rotation error (ROT) from 22.91$^\circ$ to 20.93$^\circ$, and boosts $mAP_{0.5}$ from 35.70 to 38.11. For comparison, Cube R-CNN, which trains on three additional datasets alongside SUN RGB-D with a similar mechanism and the same test set, increases $IoU_{3D}$ from 36.2 to 37.8 and boosts $mAP_{0.5}$ from 34.7 to 35.4. Because it operates purely through camera-space transforms, 3DRot is readily transferable to other 3D tasks.
Aug-6-2025
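The abstract's key idea, rotating about the camera's optical center while keeping projective geometry consistent, can be sketched with the standard pinhole relation: a pure camera rotation R induces the image homography H = K R K^{-1}, and camera-space points and object poses rotate by the same R, so no scene depth is needed. The snippet below is a minimal illustration of that consistency, not the paper's implementation; the function names and the choice of an in-plane (roll) rotation are assumptions for demonstration.

```python
import numpy as np

def rotation_homography(K, R):
    # Homography induced on the image plane by a pure camera rotation R
    # about the optical center: pixels map as x' ~ K R K^{-1} x.
    return K @ R @ np.linalg.inv(K)

def rotate_about_optical_center(K, R_obj, points_cam, theta):
    # Illustrative in-plane rotation by angle theta (radians).
    # Hypothetical helper, not from the paper.
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    H = rotation_homography(K, Rz)     # warp for the RGB image
    new_points = points_cam @ Rz.T     # 3D annotations rotate with the camera
    new_R_obj = Rz @ R_obj             # object pose updated synchronously
    return H, new_points, new_R_obj

def project(K, p):
    # Pinhole projection of a camera-space point to pixel coordinates.
    x = K @ p
    return x[:2] / x[2]
```

Usage: warping a pixel by H and projecting the rotated 3D point give the same location, which is the geometry-consistency property the augmentation relies on; depth never enters, because H depends only on K and R.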
- Country:
- Asia > China
- Guangdong Province > Shenzhen (0.04)
- Europe
- Switzerland (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.14)
- North America > Canada (0.04)
- South America > Brazil (0.05)
- Genre:
- Research Report (0.50)
- Industry:
- Transportation (0.46)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Robots (0.94)
- Vision > Image Understanding (0.34)