Chen, Xuechao
A 3-Step Optimization Framework with Hybrid Models for a Humanoid Robot's Jump Motion
Qi, Haoxiang, Yu, Zhangguo, Chen, Xuechao, Liu, Yaliang, Yi, Chuanku, Dong, Chencheng, Meng, Fei, Huang, Qiang
Highly dynamic jump motions are challenging for humanoid robots, yet they are key to environment adaptation and obstacle crossing. Trajectory optimization is a practical method for achieving highly dynamic, explosive jumping. This paper proposes a 3-step trajectory optimization framework for generating a jump motion for a humanoid robot. To improve iteration speed and achieve the desired performance, the framework comprises three sub-optimizations. The first optimization incorporates momentum, inertia, and the center of pressure (CoP), treating the robot as a static reaction momentum pendulum (SRMP) model to generate the corresponding trajectories. The second optimization maps these trajectories to joint space using efficient Quadratic Programming (QP) solvers. Finally, the third optimization generates whole-body joint trajectories from the trajectories produced in the previous steps. By jointly considering momentum and inertia, the robot achieves agile forward jump motions. Simulations and experiments of a forward jump with a distance of 1.0 m and a height of 0.5 m are presented, validating the applicability of the proposed framework.
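The second optimization maps reference trajectories to joint space via a QP. Below is a minimal Python sketch of that idea, assuming a simple damped least-squares tracking objective and a hypothetical Jacobian J; the paper's actual formulation and its constraints (joint limits, contact forces, etc.) are not reproduced here.

```python
import numpy as np

def map_task_to_joint_velocity(J, v_ref, damping=1e-3):
    # Damped least-squares: argmin_qd ||J qd - v_ref||^2 + damping * ||qd||^2.
    # This is the unconstrained core of a QP mapping step.
    n = J.shape[1]
    H = J.T @ J + damping * np.eye(n)  # QP Hessian
    g = J.T @ v_ref                    # linear term
    return np.linalg.solve(H, g)

# Hypothetical usage: a 3-DoF planar leg tracking a 2D CoM velocity reference.
J = np.array([[0.4, 0.3, 0.1],
              [0.0, 0.2, 0.1]])
v_ref = np.array([0.8, 1.2])
qd = map_task_to_joint_velocity(J, v_ref)
print(qd)
```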
LIKO: LiDAR, Inertial, and Kinematic Odometry for Bipedal Robots
Zhao, Qingrui, Li, Mingyuan, Shi, Yongliang, Chen, Xuechao, Yu, Zhangguo, Han, Lianqiang, Fu, Zhenyuan, Zhang, Jintao, Li, Chao, Zhang, Yuanxi, Huang, Qiang
High-frequency and accurate state estimation is crucial for biped robots. This paper presents a tightly coupled LiDAR-Inertial-Kinematic Odometry (LIKO) for biped robot state estimation based on an iterated extended Kalman filter. Beyond the robot state, the foot contact position is also modeled and estimated, which allows both position and velocity updates from the kinematic measurements. In addition, using kinematic measurements raises the output state frequency to about 1 kHz, ensuring temporal continuity of the estimated state and making it practical for the control of biped robots. We also release a biped robot dataset consisting of LiDAR, inertial measurement unit (IMU), joint encoder, force/torque (F/T) sensor, and motion-capture ground-truth data for evaluating the proposed method. The dataset is collected during robot locomotion, and our approach achieves the best quantitative results compared with other LiDAR-inertial odometry (LIO) methods and biped robot state estimation algorithms. The dataset and source code will be available at https://github.com/Mr-Zqr/LIKO.
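As a rough illustration of fusing a leg-kinematics measurement in a Kalman-filter framework, here is a minimal Python sketch. It shows a single, non-iterated position update with hypothetical state x, measurement Jacobian H, and noise covariance R; the actual LIKO filter, its state definition, and its velocity update differ.

```python
import numpy as np

def kinematic_update(x, P, z, h_x, H, R):
    # One EKF measurement update using a foot-position measurement z obtained
    # from forward kinematics (joint encoders), with h_x the same quantity
    # predicted from the current state estimate. LIKO uses an iterated EKF
    # and also fuses a velocity residual; this is only a single-step sketch.
    y = z - h_x                          # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```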
SVQNet: Sparse Voxel-Adjacent Query Network for 4D Spatio-Temporal LiDAR Semantic Segmentation
Chen, Xuechao, Xu, Shuangjie, Zou, Xiaoyi, Cao, Tongyi, Yeung, Dit-Yan, Fang, Lu
LiDAR-based semantic perception tasks are critical yet challenging for autonomous driving. Because of object motion and static/dynamic occlusion, temporal information plays an essential role in reinforcing perception by enhancing and completing single-frame knowledge. Previous approaches either directly stack historical frames onto the current frame or build a 4D spatio-temporal neighborhood using KNN, which duplicates computation and hinders real-time performance. Based on our observation that stacking all the historical points damages performance due to a large amount of redundant and misleading information, we propose the Sparse Voxel-Adjacent Query Network (SVQNet) for 4D LiDAR semantic segmentation. To exploit the historical frames efficiently, we shunt the historical points into two groups with reference to the current points: the Voxel-Adjacent Neighborhood, which carries local enhancing knowledge, and the Historical Context, which completes the global knowledge. We then propose new modules to select and extract instructive features from the two groups. SVQNet achieves state-of-the-art performance in LiDAR semantic segmentation on the SemanticKITTI benchmark and the nuScenes dataset.
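To illustrate the idea of shunting historical points into a voxel-adjacent neighborhood versus a historical context, here is a minimal NumPy sketch. The voxel size, the 26-neighbor definition of "adjacent", and the function names are assumptions, and the learned query and feature-extraction modules of SVQNet are omitted.

```python
import numpy as np

def split_history(current_pts, history_pts, voxel_size=0.2):
    # Voxelize the current frame and mark the 26-neighborhood (plus the voxel
    # itself) of every occupied voxel as "adjacent".
    cur_vox = np.unique(np.floor(current_pts / voxel_size).astype(int), axis=0)
    offsets = np.array([[i, j, k] for i in (-1, 0, 1)
                                  for j in (-1, 0, 1)
                                  for k in (-1, 0, 1)])
    adjacent = {tuple(v + o) for v in cur_vox for o in offsets}
    # Historical points falling into adjacent voxels form the Voxel-Adjacent
    # Neighborhood; the remaining points form the Historical Context.
    hist_vox = np.floor(history_pts / voxel_size).astype(int)
    mask = np.array([tuple(v) in adjacent for v in hist_vox])
    return history_pts[mask], history_pts[~mask]
```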