SemanticBoost: Elevating Motion Generation with Augmented Textual Cues
Xin He, Shaoli Huang, Xiaohang Zhan, Chao Weng, Ying Shan
–arXiv.org Artificial Intelligence
Current techniques struggle to generate motions from intricate semantic descriptions, primarily due to insufficient semantic annotations in datasets and weak contextual understanding. To address these issues, we present SemanticBoost, a novel framework that tackles both challenges simultaneously. Our framework comprises a Semantic Enhancement module and a Context-Attuned Motion Denoiser (CAMD). CAMD provides an all-encompassing solution for generating high-quality, semantically consistent motion sequences by effectively capturing context information and aligning the generated motion with the given textual descriptions. Distinct from existing methods, our approach can synthesize accurate orientational movements, combined motions based on specific body-part descriptions, and motions generated from complex, extended sentences. Our experimental results demonstrate that SemanticBoost, as a diffusion-based method, outperforms auto-regressive techniques, achieving cutting-edge performance on the HumanML3D dataset while maintaining realistic and smooth motion quality.

Over recent years, motion generation from textual descriptions has made significant progress (Zhang et al., 2023a; Chen et al., 2022; Jiang et al., 2023; Zhang et al., 2023b), enhancing creativity and realism in applications like animation, robotics, and virtual reality. However, generating motion from complex semantic descriptions remains challenging due to the lack of comprehensive semantic annotations in datasets like HumanML3D (Guo et al., 2022a) and the limited contextual understanding of existing techniques.
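The abstract describes CAMD as a diffusion-based denoiser for motion sequences. As an illustrative sketch only — the paper's actual network architecture, noise schedule, and motion-feature dimensions are not given here, so the schedule choice and the zero-noise placeholder denoiser below are assumptions — the generic reverse-diffusion sampling step such a model runs looks like this:

```python
import math
import numpy as np

def cosine_beta_schedule(timesteps, s=0.008):
    """Cosine noise schedule; a common default for diffusion models
    (assumed here, not taken from the paper)."""
    steps = np.arange(timesteps + 1)
    alphas_cumprod = np.cos(((steps / timesteps) + s) / (1 + s) * math.pi / 2) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return np.clip(betas, 0.0, 0.999)

def ddpm_denoise_step(x_t, t, eps_pred, betas, rng):
    """One standard DDPM reverse step: given the model's predicted noise
    eps_pred for a noisy motion sequence x_t at timestep t, estimate x_{t-1}.
    In SemanticBoost, eps_pred would come from CAMD conditioned on the
    (augmented) text embedding; here the caller supplies it."""
    alphas = 1.0 - betas
    alphas_cumprod = np.cumprod(alphas)
    # Posterior mean: (x_t - beta_t / sqrt(1 - alphabar_t) * eps) / sqrt(alpha_t)
    coef = betas[t] / np.sqrt(1.0 - alphas_cumprod[t])
    mean = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:
        # Add stochastic noise except at the final step.
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

# Toy sampling loop: 60 frames of a hypothetical 66-dim pose representation,
# with a zero-noise placeholder standing in for the learned denoiser.
rng = np.random.default_rng(0)
T = 50
betas = cosine_beta_schedule(T)
x = rng.standard_normal((60, 66))
for t in range(T - 1, -1, -1):
    eps_pred = np.zeros_like(x)  # placeholder for the text-conditioned model
    x = ddpm_denoise_step(x, t, eps_pred, betas, rng)
```

The loop iterates from pure noise down to a motion sequence; swapping the placeholder for a trained, text-conditioned denoiser is what turns this generic sampler into a text-to-motion generator.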
Nov-28-2023