Ma, Yueen
3D-MoE: A Mixture-of-Experts Multi-modal LLM for 3D Vision and Pose Diffusion via Rectified Flow
Ma, Yueen, Zhuang, Yuzheng, Hao, Jianye, King, Irwin
3D vision and spatial reasoning have long been recognized as preferable for accurately perceiving our three-dimensional world, especially when compared with traditional visual reasoning based on 2D images. Due to the difficulties in collecting high-quality 3D data, research in this area has only recently gained momentum. With the advent of powerful large language models (LLMs), multi-modal LLMs for 3D vision have been developed.

In recent years, 3D instruction-following data has become more common, and with the advent of large language models (LLMs), a range of multi-modal LLMs (MLLMs) has emerged. Following the success of LLaVA (Liu et al., 2023a) for 2D images, recent approaches (e.g., LEO (Huang et al., 2024) and ShapeLLM (Qi et al., 2024)) also integrate 3D encoders into LLMs through simple linear projection layers. Although these models handle tasks such as 3D question answering, 3D dialogue, and some embodied tasks, they devote relatively little attention to optimizing the LLM itself for multi-modal data.
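As an illustration of the linear-projection interface mentioned above, the sketch below shows how pooled features from a 3D encoder could be mapped into an LLM's token-embedding space. It is a minimal, hypothetical PyTorch example; the class name, dimensions, and token counts are placeholders and are not taken from 3D-MoE, LEO, or ShapeLLM.

import torch
import torch.nn as nn

class PointCloudProjector(nn.Module):
    """Maps 3D-encoder features into the LLM token-embedding space.

    All dimensions and names here are illustrative assumptions, not the
    configuration of any specific model.
    """

    def __init__(self, point_feat_dim: int = 384, llm_hidden_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(point_feat_dim, llm_hidden_dim)

    def forward(self, point_features: torch.Tensor) -> torch.Tensor:
        # point_features: (batch, num_tokens, point_feat_dim) from a 3D encoder
        # returns:        (batch, num_tokens, llm_hidden_dim) "3D tokens" that can
        #                 be prepended to the LLM's text embeddings
        return self.proj(point_features)

projector = PointCloudProjector()
fake_point_tokens = torch.randn(2, 256, 384)   # e.g., 256 patch tokens per point cloud
llm_tokens = projector(fake_point_tokens)
print(llm_tokens.shape)  # torch.Size([2, 256, 4096])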
A Survey on Vision-Language-Action Models for Embodied AI
Ma, Yueen, Song, Zixing, Zhuang, Yuzheng, Hao, Jianye, King, Irwin
Deep learning has demonstrated remarkable success across many domains, including computer vision, natural language processing, and reinforcement learning. Representative artificial neural networks in these fields span convolutional neural networks, Transformers, and deep Q-networks. Built upon unimodal neural networks, numerous multi-modal models have been introduced to address a range of tasks such as visual question answering, image captioning, and speech recognition. The rise of instruction-following robotic policies in embodied AI has spurred the development of a novel category of multi-modal models known as vision-language-action models (VLAs). Their multi-modality capability has become a foundational element in robot learning. Various methods have been proposed to enhance traits such as versatility, dexterity, and generalizability. Some models focus on refining specific components through pretraining. Others aim to develop control policies adept at predicting low-level actions. Certain VLAs serve as high-level task planners capable of decomposing long-horizon tasks into executable subtasks. Over the past few years, a myriad of VLAs have emerged, reflecting the rapid advancement of embodied AI. Therefore, it is imperative to capture the evolving landscape through a comprehensive survey.
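To make the term concrete, the toy sketch below shows the generic vision-language-action interface described in the survey: an image and a language instruction are encoded and fused, and a policy head emits a low-level action (here, a 7-dimensional command). Every module, vocabulary size, and dimension is a stand-in assumption, not a model from the surveyed literature.

import torch
import torch.nn as nn

class ToyVLA(nn.Module):
    """A schematic vision-language-action model: image + instruction -> action."""

    def __init__(self, vision_dim=512, text_dim=512, action_dim=7):
        super().__init__()
        self.vision_encoder = nn.Linear(3 * 32 * 32, vision_dim)   # stand-in for an image backbone
        self.text_encoder = nn.EmbeddingBag(1000, text_dim)        # stand-in for a language model
        self.policy_head = nn.Linear(vision_dim + text_dim, action_dim)  # low-level control head

    def forward(self, image, instruction_tokens):
        v = self.vision_encoder(image.flatten(1))
        t = self.text_encoder(instruction_tokens)
        # e.g., a 7-DoF end-effector command predicted from the fused features
        return self.policy_head(torch.cat([v, t], dim=-1))

vla = ToyVLA()
action = vla(torch.randn(1, 3, 32, 32), torch.randint(0, 1000, (1, 6)))
print(action.shape)  # torch.Size([1, 7])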
VOLTA: Diverse and Controllable Question-Answer Pair Generation with Variational Mutual Information Maximizing Autoencoder
Ma, Yueen, Chi, Dafeng, Li, Jingjing, Zhuang, Yuzheng, Hao, Jianye, King, Irwin
Previous question-answer pair generation methods aim to produce fluent and meaningful question-answer pairs but tend to lack diversity. Recent attempts to address this issue suffer from either low model capacity or overly complicated architectures. Furthermore, they overlook the fact that the controllability of their models depends heavily on the input. In this paper, we propose a model named VOLTA that enhances generative diversity by leveraging the Variational Autoencoder framework with a shared backbone network as its encoder and decoder. In addition, we propose adding InfoGAN-style latent codes to enable input-independent controllability over the generation process. We perform comprehensive experiments, and the results show that our approach significantly improves diversity and controllability over state-of-the-art models.
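A minimal sketch of the two ingredients described above, assuming a toy recurrent backbone rather than VOLTA's actual architecture: a single module is reused as both encoder and decoder, and an InfoGAN-style discrete latent code conditions generation independently of the input. All names and sizes are illustrative.

import torch
import torch.nn as nn

class ToyVoltaStyleVAE(nn.Module):
    """Schematic only: shared encoder/decoder backbone plus a discrete latent code."""

    def __init__(self, vocab=1000, hidden=256, z_dim=32, num_codes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.backbone = nn.GRU(hidden, hidden, batch_first=True)   # shared by encoder and decoder
        self.to_mu = nn.Linear(hidden, z_dim)
        self.to_logvar = nn.Linear(hidden, z_dim)
        self.code_embed = nn.Embedding(num_codes, z_dim)           # InfoGAN-style latent code
        self.z_to_h = nn.Linear(2 * z_dim, hidden)
        self.lm_head = nn.Linear(hidden, vocab)

    def forward(self, tokens, code_id):
        # Encode with the shared backbone.
        h, _ = self.backbone(self.embed(tokens))
        pooled = h.mean(dim=1)
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterization trick
        # Decode with the same backbone, conditioned on z and the input-independent code.
        cond = self.z_to_h(torch.cat([z, self.code_embed(code_id)], dim=-1)).unsqueeze(1)
        dec_h, _ = self.backbone(self.embed(tokens) + cond)
        logits = self.lm_head(dec_h)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return logits, kl

model = ToyVoltaStyleVAE()
logits, kl = model(torch.randint(0, 1000, (2, 10)), torch.tensor([0, 3]))
print(logits.shape, kl.item())  # torch.Size([2, 10, 1000]) and a scalar KL term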
Graph Component Contrastive Learning for Concept Relatedness Estimation
Ma, Yueen, Song, Zixing, Hu, Xuming, Li, Jingjing, Zhang, Yifei, King, Irwin
Concept relatedness estimation (CRE) aims to determine whether two given concepts are related. Existing methods consider only the pairwise relationship between concepts, overlooking the higher-order relationships that could be encoded in a concept-level graph structure. We discover that this underlying graph satisfies a set of intrinsic properties of CRE, including reflexivity, commutativity, and transitivity. In this paper, we formalize the CRE properties and introduce a graph structure named ConcreteGraph. To address the data scarcity issue in CRE, we introduce a novel data augmentation approach to sample new concept pairs from the graph. Because data augmentation alone cannot tractably capture the full structural information of the ConcreteGraph, owing to the large number of potential concept pairs, we further introduce a novel Graph Component Contrastive Learning framework to implicitly learn the complete structure of the ConcreteGraph. Empirical results on three datasets show significant improvement over the state-of-the-art model. Detailed ablation studies demonstrate that our proposed approach can effectively capture the higher-order relationships among concepts.
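The sketch below illustrates, under simplifying assumptions, how the three named properties could drive pair augmentation on a toy graph: related concepts form undirected edges (commutativity), each concept is related to itself (reflexivity), and any two concepts in the same connected component are treated as related (transitivity). The function name and example data are hypothetical; this is not the paper's actual sampling procedure.

from itertools import combinations

def augment_pairs(related_pairs):
    """Sample extra positive concept pairs from a toy concept graph (illustrative only)."""
    # Build adjacency from the labeled related pairs (commutativity: edges are undirected).
    adj = {}
    for a, b in related_pairs:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    # Find connected components; transitivity implies every pair inside one is related.
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        components.append(comp)

    augmented = set()
    for comp in components:
        augmented |= {(c, c) for c in comp}          # reflexivity
        for a, b in combinations(sorted(comp), 2):
            augmented |= {(a, b), (b, a)}            # commutativity + transitivity
    return augmented

pairs = [("dog", "canine"), ("canine", "wolf")]
print(sorted(augment_pairs(pairs)))
# ('dog', 'wolf') appears as a new pair implied by transitivity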