DOCTR: Disentangled Object-Centric Transformer for Point Scene Understanding
Xiaoxuan Yu, Hao Wang, Weiming Li, Qiang Wang, Soonyong Cho, Younghun Sung
arXiv.org Artificial Intelligence
Point scene understanding is a challenging task that processes a real-world scene point cloud, aiming to segment each object, estimate its pose, and reconstruct its mesh simultaneously. The recent state-of-the-art method first segments each object and then processes the objects independently in multiple stages for the different sub-tasks. This leads to a complex pipeline that is hard to optimize and makes it difficult to leverage relationship constraints among multiple objects. In this work, we propose a novel Disentangled Object-Centric TRansformer (DOCTR) that explores an object-centric representation to facilitate learning with multiple objects for the multiple sub-tasks in a unified manner. Each object is represented as a query, and a Transformer decoder is adapted to iteratively optimize all the queries while modeling their relationships. In particular, we introduce a semantic-geometry disentangled query (SGDQ) design that enables the query features to attend separately to the semantic and geometric information relevant to the corresponding sub-tasks. A hybrid bipartite matching module is employed to make full use of the supervision from all the sub-tasks during training. Qualitative and quantitative experimental results demonstrate that our method achieves state-of-the-art performance on the challenging ScanNet dataset. Code is available at https://github.com/SAITPublic/DOCTR.
Mar-25-2024
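The abstract outlines the core architectural idea: one query per object, a Transformer decoder that jointly refines all queries, and a semantic-geometry disentangled query whose two parts attend to different information. Below is a minimal, hypothetical PyTorch sketch of one such decoder layer; it is not the authors' implementation, and all module names, dimensions, the way the point features are split, and the toy usage at the end are assumptions made purely for illustration.

```python
# Hypothetical sketch of a decoder layer with semantic-geometry disentangled
# queries (SGDQ), as described in the DOCTR abstract. Not the official code:
# dimensions, feature splits, and module names are illustrative assumptions.

import torch
import torch.nn as nn


class SGDQDecoderLayer(nn.Module):
    """One decoder layer: joint self-attention over object queries, then
    separate cross-attention for the semantic and geometric query halves."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        half = dim // 2
        # Self-attention over full queries models inter-object relationships.
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Disentangled cross-attention: semantic half and geometric half each
        # attend to their own set of scene point features.
        self.sem_cross = nn.MultiheadAttention(half, heads // 2, batch_first=True)
        self.geo_cross = nn.MultiheadAttention(half, heads // 2, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))

    def forward(self, queries, sem_feats, geo_feats):
        # queries:   (B, num_objects, dim)    one object-centric query per object
        # sem_feats: (B, num_points, dim//2)  semantic point features (assumed split)
        # geo_feats: (B, num_points, dim//2)  geometric point features (assumed split)
        q = self.norm1(queries + self.self_attn(queries, queries, queries)[0])
        q_sem, q_geo = q.chunk(2, dim=-1)
        q_sem = q_sem + self.sem_cross(q_sem, sem_feats, sem_feats)[0]
        q_geo = q_geo + self.geo_cross(q_geo, geo_feats, geo_feats)[0]
        q = self.norm2(torch.cat([q_sem, q_geo], dim=-1))
        return self.norm3(q + self.ffn(q))


if __name__ == "__main__":
    layer = SGDQDecoderLayer()
    queries = torch.randn(2, 20, 256)      # 20 object queries
    sem = torch.randn(2, 1024, 128)        # per-point semantic features
    geo = torch.randn(2, 1024, 128)        # per-point geometric features
    print(layer(queries, sem, geo).shape)  # torch.Size([2, 20, 256])
```

In a full model along the lines the abstract describes, several such layers would presumably be stacked so the queries are refined iteratively, with each final query fed to per-object heads for segmentation, pose estimation, and mesh reconstruction; those heads and the hybrid bipartite matching used for training are not sketched here.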