APR-Transformer: Initial Pose Estimation for Localization in Complex Environments through Absolute Pose Regression

Srinivas Ravuri, Yuan Xu, Martin Ludwig Zehetner, Ketan Motlag, Sahin Albayrak


Afterwards, we remove the last propagation layer and the classification head and use the remaining components as the backbone of our APR-Transformer. We use the output of the last remaining propagation layer as feature vectors F_x and F_q with a resolution of (N, 128, 1024), where each of the 128 vectors corresponds to a reduced point of the original 4096-point input. The Transformer-compatible input embeddings and the associated learned encodings, which preserve the spatial information of the backbone outputs, are then computed by first separating the 128 vectors into eight groups based on the absolute z-coordinates, i.e., heights, of their corresponding reduced points. Subsequently, the 16 feature vectors per group are sorted into a 4×4 grid based on the x- and y-coordinates of their reduced points. Afterward, we adapt the procedure used in the image-based APR-Transformer case by computing the learned positional encodings along the three axes to generate the final Transformer inputs; a sketch of this step is given at the end of this section.

C. Pose Regression and Loss Function

L_p(x) = \sum_{i=1}^{D} |x_i^p - x_i^t|   (1)

L_o(q) = \sum_{i=1}^{D} |q_i^p - q_i^t|   (2)

L_pose = L_p \exp(-s_x) + s_x + L_o \exp(-s_q) + s_q   (3)

We train the model variants to minimize the position loss L_p (see Equation 1) and the orientation loss L_o (see Equation 2) with respect to the ground-truth pose, where L_p and L_o are L1 losses. We combine the position and orientation losses using the formulation by Kendall et al. [28] shown in Equation 3, where s_x and s_q are learned parameters that control the balance between the position loss and the orientation loss.
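As referenced above, the following is a minimal sketch of the height-grouping, grid-sorting, and positional-encoding step, assuming a PyTorch implementation. The class name PointGroupingEncoder, the equal-size chunking after a sort by |z|, and the summation of per-axis learned embeddings are our own illustrative assumptions; the text only specifies eight height groups, a 4×4 x/y grid per group, and learned encodings along the three axes.

```python
import torch
import torch.nn as nn


class PointGroupingEncoder(nn.Module):
    """Sketch of the grouping/encoding step: 128 backbone feature vectors
    are split into 8 height groups, each group's 16 vectors are sorted
    into a 4x4 grid by x/y, and learned encodings for the three axes are
    added. Names and details are illustrative, not the paper's code."""

    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        # One learned embedding per discrete slot along each axis
        # (8 height groups, 4 grid rows, 4 grid columns), summed per token.
        self.z_embed = nn.Embedding(8, feat_dim)
        self.y_embed = nn.Embedding(4, feat_dim)
        self.x_embed = nn.Embedding(4, feat_dim)

    def forward(self, feats: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # feats:  (N, 128, 1024) features of the 128 reduced points
        # points: (N, 128, 3)    xyz coordinates of the reduced points
        N, P, C = feats.shape

        # 1) Sort tokens by absolute z so that each consecutive block of
        #    16 tokens forms one of the 8 height groups (assumption: equal
        #    group sizes via chunking, as the paper states 16 per group).
        z_order = points[..., 2].abs().argsort(dim=1)
        feats = torch.gather(feats, 1, z_order.unsqueeze(-1).expand(-1, -1, C))
        points = torch.gather(points, 1, z_order.unsqueeze(-1).expand(-1, -1, 3))

        # 2) Within each group of 16, order tokens by y then x so the
        #    flattened order corresponds to a row-major 4x4 grid.
        g_feats = feats.view(N, 8, 16, C)
        g_points = points.view(N, 8, 16, 3)
        # Lexicographic order: stable sort by x, then stable sort by y.
        x_order = g_points[..., 0].argsort(dim=2, stable=True)
        g_points = torch.gather(g_points, 2, x_order.unsqueeze(-1).expand(-1, -1, -1, 3))
        g_feats = torch.gather(g_feats, 2, x_order.unsqueeze(-1).expand(-1, -1, -1, C))
        y_order = g_points[..., 1].argsort(dim=2, stable=True)
        g_feats = torch.gather(g_feats, 2, y_order.unsqueeze(-1).expand(-1, -1, -1, C))

        # 3) Add learned positional encodings along the three axes.
        z_idx = torch.arange(8, device=feats.device)          # height group
        row_idx = torch.arange(16, device=feats.device) // 4  # grid row (y)
        col_idx = torch.arange(16, device=feats.device) % 4   # grid column (x)
        pos = (self.z_embed(z_idx)[:, None, :]
               + self.y_embed(row_idx)[None, :, :]
               + self.x_embed(col_idx)[None, :, :])           # (8, 16, C)
        return (g_feats + pos).view(N, P, C)                  # Transformer input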
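```

A corresponding sketch of the combined loss of Equations 1-3 follows, again in PyTorch. The learned weighting itself is the formulation of Kendall et al. [28]; the initialization of s_x and s_q and the mean reduction over the batch are our assumptions.

```python
import torch
import torch.nn as nn


class PoseLoss(nn.Module):
    """Sketch of Equations 1-3: L1 position and orientation losses
    combined via the learned weighting of Kendall et al. [28]."""

    def __init__(self, s_x_init: float = 0.0, s_q_init: float = -3.0):
        super().__init__()
        # s_x, s_q are trained jointly with the network weights; the
        # initial values here are assumptions, not taken from the paper.
        self.s_x = nn.Parameter(torch.tensor(s_x_init))
        self.s_q = nn.Parameter(torch.tensor(s_q_init))

    def forward(self, x_pred, x_true, q_pred, q_true):
        l_p = torch.abs(x_pred - x_true).sum(dim=-1).mean()  # Eq. 1
        l_o = torch.abs(q_pred - q_true).sum(dim=-1).mean()  # Eq. 2
        # Eq. 3: exp(-s) weights each term; the +s terms keep the
        # learned scales from growing without bound.
        return (l_p * torch.exp(-self.s_x) + self.s_x
                + l_o * torch.exp(-self.s_q) + self.s_q)
```

Because s_x and s_q are nn.Parameters, registering PoseLoss as a module and passing its parameters to the optimizer alongside the network weights is sufficient to learn the balance during training.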