Robust Navigation with Cross-Modal Fusion and Knowledge Transfer
Wenzhe Cai, Guangran Cheng, Lingyue Kong, Lu Dong, Changyin Sun
–arXiv.org Artificial Intelligence
Recently, learning-based approaches have shown promising results in navigation tasks. However, poor generalization and the simulation-to-reality gap prevent their wide application. We consider the problem of improving the generalization of mobile robots and achieving sim-to-real transfer of navigation skills. To that end, we propose a cross-modal fusion method and a knowledge transfer framework for better generalization, realized by a teacher-student distillation architecture. The teacher learns a discriminative representation and a near-perfect policy in an ideal environment. By imitating the teacher's behavior and representation, the student learns to align features from noisy multi-modal input and to reduce the influence of input variations on the navigation policy. We evaluate our method in simulated and real-world environments. Experiments show that it outperforms the baselines by a large margin and achieves robust navigation under varying working conditions.
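The abstract describes a student that imitates both the teacher's actions and its intermediate representation. A minimal sketch of such a combined distillation objective is below; the function names, loss form, and weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a teacher-student distillation objective:
# the student imitates the teacher's action (behavior cloning) and
# its intermediate features (cross-modal alignment). All names and
# weights here are illustrative assumptions.

def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(student_action, teacher_action,
                      student_feat, teacher_feat, align_weight=0.5):
    """Combined loss: action imitation plus feature alignment."""
    behavior_loss = mse(student_action, teacher_action)
    align_loss = mse(student_feat, teacher_feat)
    return behavior_loss + align_weight * align_loss

# Toy example: the student's action and features are slightly off
# from the teacher's, so both loss terms contribute.
loss = distillation_loss(
    student_action=[0.9, 0.1], teacher_action=[1.0, 0.0],
    student_feat=[0.4, 0.6, 0.5], teacher_feat=[0.5, 0.5, 0.5],
)
```

In practice the alignment term would be computed on hidden activations of the two networks, with the teacher trained on clean, privileged observations and the student on noisy multi-modal input.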
Sep-23-2023