CTIN: Robust Contextual Transformer Network for Inertial Navigation
Bingbing Rao, Ehsan Kazemi, Yifan Ding, Devu M. Shila, Frank M. Tucker, Liqiang Wang
–arXiv.org Artificial Intelligence
Recently, data-driven inertial navigation approaches have demonstrated that well-trained neural networks can obtain accurate position estimates from inertial measurement unit (IMU) measurements. In this paper, we propose a novel robust Contextual Transformer-based network for Inertial Navigation (CTIN) to accurately predict velocity and trajectory. To this end, we first design a ResNet-based encoder enhanced by local and global multi-head self-attention to capture spatial contextual information from IMU measurements. Then we fuse these spatial representations with temporal knowledge by leveraging multi-head attention in the Transformer decoder. Finally, multi-task learning with uncertainty reduction is leveraged to improve learning efficiency and the prediction accuracy of velocity and trajectory. Through extensive experiments over a wide range of inertial datasets (e.g., RIDI, OxIOD, RoNIN, IDOL, and our own), CTIN proves very robust and outperforms state-of-the-art models.
Dec-20-2021
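
For a concrete picture of the pipeline the abstract outlines, below is a minimal PyTorch sketch of a CTIN-style model: a ResNet-style 1D convolutional encoder with multi-head self-attention over an IMU window, a Transformer decoder that fuses the resulting representations, and velocity plus uncertainty heads trained with an uncertainty-weighted multi-task loss. All module names, layer sizes, the 2-D velocity target, and the Kendall-style loss form are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a CTIN-style network (assumptions throughout; not the
# authors' code): conv encoder + self-attention -> Transformer decoder ->
# velocity and uncertainty heads with an uncertainty-weighted loss.
import torch
import torch.nn as nn


class ResidualConvBlock(nn.Module):
    """ResNet-style 1D block applied along the time axis of an IMU window."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()

    def forward(self, x):                           # x: (batch, channels, time)
        return self.act(x + self.norm(self.conv2(self.act(self.conv1(x)))))


class CTINSketch(nn.Module):
    """Spatial encoder with self-attention, fused by a Transformer decoder."""
    def __init__(self, imu_dim=6, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Conv1d(imu_dim, d_model, kernel_size=1)
        self.encoder = nn.Sequential(ResidualConvBlock(d_model),
                                     ResidualConvBlock(d_model))
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads,
                                                  batch_first=True)
        decoder_layer = nn.TransformerDecoderLayer(d_model, n_heads,
                                                   batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=n_layers)
        self.vel_head = nn.Linear(d_model, 2)       # mean 2-D velocity (assumed)
        self.cov_head = nn.Linear(d_model, 2)       # log-variance per axis

    def forward(self, imu):                         # imu: (batch, time, 6)
        x = self.embed(imu.transpose(1, 2))         # -> (batch, d_model, time)
        spatial = self.encoder(x).transpose(1, 2)   # -> (batch, time, d_model)
        spatial, _ = self.spatial_attn(spatial, spatial, spatial)
        fused = self.decoder(tgt=spatial, memory=spatial)
        return self.vel_head(fused), self.cov_head(fused)


def multitask_loss(vel_pred, log_var, vel_true):
    """Uncertainty-weighted regression loss (Kendall-style; an assumption)."""
    inv_var = torch.exp(-log_var)
    return (inv_var * (vel_pred - vel_true) ** 2 + log_var).mean()


if __name__ == "__main__":
    model = CTINSketch()
    imu = torch.randn(8, 200, 6)                    # 8 windows of 200 IMU samples
    vel, log_var = model(imu)
    loss = multitask_loss(vel, log_var, torch.zeros_like(vel))
    print(vel.shape, loss.item())                   # torch.Size([8, 200, 2])
```

Predicted per-sample velocities would then be integrated over time to recover a trajectory; the uncertainty head lets the loss down-weight axes the network is unsure about, which is one common way to realize the multi-task learning with uncertainty reduction mentioned in the abstract.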