Li, Peiyan
MM-RLHF: The Next Step Forward in Multimodal LLM Alignment
Zhang, Yi-Fan, Yu, Tao, Tian, Haochen, Fu, Chaoyou, Li, Peiyan, Zeng, Jianshu, Xie, Wulin, Shi, Yang, Zhang, Huanyu, Wu, Junkang, Wang, Xue, Hu, Yibo, Wen, Bin, Yang, Fan, Zhang, Zhang, Gao, Tingting, Zhang, Di, Wang, Liang, Jin, Rong, Tan, Tieniu
Despite notable advancements in Multimodal Large Language Models (MLLMs), most state-of-the-art models have not undergone thorough alignment with human preferences. This gap exists because current alignment research has primarily achieved progress in specific areas (e.g., hallucination reduction), while the broader question of whether aligning models with human preferences can systematically enhance MLLM capability remains largely unexplored. To this end, we introduce MM-RLHF, a dataset containing $\mathbf{120k}$ fine-grained, human-annotated preference comparison pairs. This dataset represents a substantial advancement over existing resources, offering superior size, diversity, annotation granularity, and quality. Leveraging this dataset, we propose several key innovations to improve both the quality of reward models and the efficiency of alignment algorithms. Notably, we introduce a Critique-Based Reward Model, which generates critiques of model outputs before assigning scores, offering enhanced interpretability and more informative feedback compared to traditional scalar reward mechanisms. Additionally, we propose Dynamic Reward Scaling, a method that adjusts the loss weight of each sample according to the reward signal, thereby optimizing the use of high-quality comparison pairs. Our approach is rigorously evaluated across $\mathbf{10}$ distinct dimensions and $\mathbf{27}$ benchmarks, with results demonstrating significant and consistent improvements in model performance. Specifically, fine-tuning LLaVA-ov-7B with MM-RLHF and our alignment algorithm leads to a $\mathbf{19.5}$% increase in conversational abilities and a $\mathbf{60}$% improvement in safety. We have open-sourced the preference dataset, reward model, training and evaluation code, as well as reward modeling and safety benchmarks. For more details, please visit our project page: https://mm-rlhf.github.io.
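The abstract specifies only that Dynamic Reward Scaling adjusts each sample's loss weight according to the reward signal, not its exact form. Below is a minimal sketch of one plausible instantiation, assuming a DPO-style preference loss whose per-pair weight grows with the reward-model margin; the function names, the tanh weighting, and the hyperparameters are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def dpo_loss_with_dynamic_scaling(logp_chosen, logp_rejected,
                                  ref_logp_chosen, ref_logp_rejected,
                                  reward_margin, beta=0.1, k=1.0):
    # Standard DPO logits: beta * (policy log-ratio minus reference log-ratio).
    logits = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    per_pair_loss = -F.logsigmoid(logits)

    # Assumed Dynamic Reward Scaling: weight each comparison pair by a
    # bounded, increasing function of the reward-model margin
    # r(chosen) - r(rejected), so confidently ranked pairs contribute more.
    weights = 1.0 + k * torch.tanh(reward_margin.clamp(min=0.0))
    return (weights * per_pair_loss).mean()

# Toy usage with random sequence log-probabilities for 8 comparison pairs.
n = 8
logp_c = torch.randn(n, requires_grad=True)
loss = dpo_loss_with_dynamic_scaling(logp_c, torch.randn(n),
                                     torch.randn(n), torch.randn(n),
                                     reward_margin=torch.rand(n))
loss.backward()
```

Bounding the weight (here via tanh) keeps high-margin pairs from dominating the gradient, which is one natural reading of "optimizing the use of high-quality comparison pairs".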
Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models
Li, Xinghang, Li, Peiyan, Liu, Minghuan, Wang, Dong, Liu, Jirong, Kang, Bingyi, Ma, Xiao, Kong, Tao, Zhang, Hanbo, Liu, Huaping
Vision-Language-Action models (VLAs) can be naturally formed by injecting action components into VLMs, and they show promising performance. Existing work has demonstrated the effectiveness and generalization of VLAs across multiple scenarios and tasks. Nevertheless, the transfer from VLMs to VLAs is not trivial, since existing VLAs differ in their backbones, action-prediction formulations, data distributions, and training recipes, leaving a systematic understanding of VLA design choices missing. In this work, we identify the key factors that significantly influence VLA performance and focus on three essential design choices: which backbone to select, how to formulate the VLA architecture, and when to add cross-embodiment data. The obtained results firmly convince us of the value of VLAs and guide the development of a new family of VLAs, RoboVLMs, which requires very little manual design and achieves new state-of-the-art performance on three simulation benchmarks and in real-world experiments. Through extensive experiments spanning over 8 VLM backbones, 4 policy architectures, and over 600 distinct experimental configurations, we provide a detailed guidebook for the future design of VLAs. Beyond the study itself, the highly flexible RoboVLMs framework, which supports easy integration of new VLMs and free combinations of design choices, is made public to facilitate future research.
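As a concrete (and purely illustrative) picture of "injecting an action component into a VLM", here is a minimal sketch of one common VLA formulation: a small MLP head that decodes the backbone's final hidden state into a chunk of continuous actions. The hidden size, action parameterization, and class names are assumptions for illustration, not the RoboVLMs design.

```python
import torch
import torch.nn as nn

class SimpleVLAHead(nn.Module):
    """Hypothetical VLA formulation: attach a small MLP action head to a
    VLM backbone. `hidden_dim` is the backbone's hidden size; the head
    decodes a chunk of `horizon` continuous actions (e.g., 7-DoF
    end-effector deltas plus gripper)."""

    def __init__(self, hidden_dim=1024, action_dim=7, horizon=4):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, 512), nn.GELU(),
            nn.Linear(512, horizon * action_dim),
        )

    def forward(self, vlm_hidden_last_token):
        # vlm_hidden_last_token: (batch, hidden_dim), e.g. the hidden state
        # of the final token after the VLM reads the image + instruction.
        out = self.head(vlm_hidden_last_token)
        return out.view(-1, self.horizon, self.action_dim)

# Toy usage: pretend the VLM produced a (batch=2, hidden=1024) feature.
actions = SimpleVLAHead()(torch.randn(2, 1024))
print(actions.shape)  # torch.Size([2, 4, 7])
```

An alternative formulation discretizes actions into tokens decoded by the language head itself; which choice works best is exactly the kind of question the paper's 600-experiment study is designed to answer.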
Leveraging Large Language Model for Heterogeneous Ad Hoc Teamwork Collaboration
Liu, Xinzhu, Li, Peiyan, Yang, Wenju, Guo, Di, Liu, Huaping
Compared with the widely investigated homogeneous multi-robot collaboration, heterogeneous robots with different capabilities can provide more efficient and flexible collaboration for more complex tasks. In this paper, we consider a more challenging heterogeneous ad hoc teamwork collaboration problem, in which an ad hoc robot joins an existing heterogeneous team for a shared goal. Specifically, the ad hoc robot collaborates with unknown teammates without prior coordination and is expected to generate an appropriate cooperation policy that improves the efficiency of the whole team. To solve this challenging problem, we leverage the remarkable potential of the large language model (LLM) to establish a decentralized heterogeneous ad hoc teamwork collaboration framework focused on generating a reasonable policy for an ad hoc robot to collaborate with the original heterogeneous teammates. A training-free hierarchical dynamic planner is developed using the LLM, together with the newly proposed Interactive Reflection of Thoughts (IRoT) method, for the ad hoc agent to adapt to different teams. The new team then collaborates and finally finishes the task.
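The abstract outlines, but does not detail, the training-free hierarchical planner or the IRoT method. The sketch below shows the general shape such a plan-and-reflect loop could take, with a placeholder `llm()` standing in for any chat-completion call; every function name and prompt here is a hypothetical illustration, not the paper's implementation.

```python
def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (the abstract does not
    specify which LLM or API the real system uses)."""
    raise NotImplementedError

def ad_hoc_planner(task, observation, teammate_actions, max_reflections=3):
    """Hypothetical sketch of a training-free plan-then-reflect loop:
    the ad hoc robot infers teammates' roles from their observed actions,
    proposes a complementary subtask, then iteratively critiques and
    revises it (an IRoT-like interactive reflection)."""
    plan = llm(f"Task: {task}\nObservation: {observation}\n"
               f"Teammates did: {teammate_actions}\n"
               "Propose a subtask for me that complements the team.")
    for _ in range(max_reflections):
        critique = llm("Critique this subtask for conflicts or redundancy "
                       f"with the team: {plan}\nReply OK if none.")
        if critique.strip() == "OK":
            break
        plan = llm(f"Revise the subtask given this critique: {critique}")
    return plan  # handed to a low-level controller for execution
```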
Demonstrating HumanTHOR: A Simulation Platform and Benchmark for Human-Robot Collaboration in a Shared Workspace
Wang, Chenxu, Du, Boyuan, Xu, Jiaxin, Li, Peiyan, Guo, Di, Liu, Huaping
Human-robot collaboration (HRC) in a shared workspace has become a common pattern in real-world robot applications and has garnered significant research interest. However, most existing studies of human-in-the-loop (HITL) collaboration with robots in a shared workspace are evaluated either in simplified game environments or on physical platforms, suffering from limited realism or limited scalability, respectively. To support future studies, we build an embodied framework named HumanTHOR, which enables humans to act in the simulation environment through VR devices to support HITL collaboration in a shared workspace. To validate our system, we build a benchmark of everyday tasks and conduct a preliminary user study with two baseline algorithms. The results show that the robot can effectively assist humans in collaboration, demonstrating the significance of HRC. The comparison among baselines of different capability levels confirms that our system can adequately evaluate robot capabilities and serve as a benchmark for different robot algorithms. The experimental results also indicate that there is still substantial room for improvement in this area, and that our system can provide a preliminary foundation for future HRC research in a shared workspace. More information about the simulation environment, experiment videos, benchmark descriptions, and additional supplementary materials can be found on the website: https://sites.google.com/view/humanthor/.
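As a rough illustration of how such a platform can benchmark robot algorithms, here is a hypothetical HITL evaluation loop: a human (driven by VR input in the real system, scripted here) and a robot act in a shared workspace, and baselines are compared by success rate. None of these interfaces correspond to HumanTHOR's actual API.

```python
def run_episode(robot_policy, human_agent, env, max_steps=200):
    """Hypothetical HITL episode: a human and a robot act in a shared
    workspace until the task completes or the step budget runs out."""
    obs = env.reset()
    for t in range(max_steps):
        robot_action = robot_policy(obs["robot"])
        human_action = human_agent(obs["human"])  # VR input in practice
        obs, done = env.step(robot_action, human_action)
        if done:
            return {"success": True, "steps": t + 1}
    return {"success": False, "steps": max_steps}

def benchmark(policies, make_env, human_agent, n_episodes=20):
    """Compare baseline robot policies by success rate over repeated
    episodes, as a user study harness might."""
    results = {}
    for name, policy in policies.items():
        runs = [run_episode(policy, human_agent, make_env())
                for _ in range(n_episodes)]
        results[name] = sum(r["success"] for r in runs) / n_episodes
    return results
```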
A Quick Framework for Evaluating Worst Robustness of Complex Networks
Jiang, Wenjun, Li, Peiyan, Fan, Tianlong, Li, Ting, Zhang, Chuan-fu, Zhang, Tao, Luo, Zong-fu
Robustness is pivotal for comprehending, designing, optimizing, and rehabilitating networks, and simulation attacks are the prevailing evaluation method. Simulation attacks, however, are often time-consuming or even impractical; a more crucial yet persistently overlooked drawback is that any single attack strategy merely provides one potential paradigm of disintegration. The key concern is: in the worst-case scenario, facing the most severe attacks, what is the limit of robustness, referred to as ``Worst Robustness'', for a given system? Understanding a system's worst robustness is imperative for grasping its reliability limits, accurately evaluating protective capabilities, and determining associated design and security maintenance costs. To address these challenges, we introduce the Most Destruction Attack (MDA), based on the idea of knowledge stacking. MDA is employed to assess the worst robustness of networks, followed by the application of an adapted CNN algorithm for rapid worst-robustness prediction. We establish the logical validity of MDA and demonstrate the exceptional performance of the adapted CNN algorithm in predicting the worst robustness across diverse network topologies, encompassing both model and empirical networks.
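The abstract does not spell out how knowledge stacking combines attack strategies, so the following is only a plausible sketch. It adopts the common convention that a robustness curve tracks the largest connected component during sequential node removal, and stacks strategies by taking the pointwise minimum over their curves; the strategy set, the pointwise-minimum rule, and all function names are assumptions for illustration, not the paper's definition of MDA.

```python
import networkx as nx

def lcc_curve(G, order):
    """Fraction of nodes in the largest connected component after
    removing nodes in `order` one at a time (a standard robustness curve)."""
    H, n, curve = G.copy(), G.number_of_nodes(), []
    for v in order:
        H.remove_node(v)
        curve.append(max((len(c) for c in nx.connected_components(H)),
                         default=0) / n)
    return curve

def mda_worst_robustness(G, strategies):
    """Hypothetical 'knowledge stacking': evaluate every candidate attack
    strategy, take the pointwise minimum of their LCC curves, and report
    its mean as the worst robustness (the paper's exact rule may differ)."""
    curves = [lcc_curve(G, s(G)) for s in strategies]
    worst = [min(vals) for vals in zip(*curves)]
    return sum(worst) / len(worst)

# Two classic strategies: remove by static degree, or by betweenness.
by_degree = lambda G: sorted(G, key=G.degree, reverse=True)
by_betweenness = lambda G: sorted(
    G, key=nx.betweenness_centrality(G).get, reverse=True)

G = nx.barabasi_albert_graph(200, 3)
print(mda_worst_robustness(G, [by_degree, by_betweenness]))
```

Under this reading, the adapted CNN would then be trained to map a network's structure directly to the stacked worst-robustness value, bypassing the expensive simulation.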