Collaborating Authors: Chen, Jingxiao


RHINO: Learning Real-Time Humanoid-Human-Object Interaction from Human Demonstrations

arXiv.org Artificial Intelligence

Figure 1: RHINO has the capability of real-time interaction on diverse tasks.

Abstract--Humanoid robots have shown success in locomotion and manipulation, yet most existing systems cannot handle diverse tasks in real time: the robot cannot be interrupted once a task is in progress, and further human commands can only be handled after it completes. Some works focus on recognizing human intention amid the complexity of human interactions [33, 37], but they often suffer from high latency and are not suitable for real-time use. These limitations hinder robots from rapid interventions and robust, multi-step interactions in human-centered tasks. A human-robot interaction framework with real-time intention recognition and various skills is therefore urgently needed. We propose RHINO, a learning framework for Reactive Humanoid-human INteraction and Object manipulation. RHINO decouples real-time intention recognition from the downstream human-robot interaction and object manipulation skills, which are executed based on the predicted intentions. To ensure the scalability of RHINO across a wide range of skills, we design a pipeline for learning the interactions from human-object-human demonstrations. We evaluate the framework on a real humanoid robot and demonstrate its effectiveness, flexibility, and safety in various scenarios.

We summarize related works in each category and highlight the differences from our work. Humanoid robots need to estimate the human's physical and mental states to provide appropriate assistance [35]. Object information in the environment also plays an important role in predicting human intention when combined with human motion; interaction cues such as pointing gestures [14] and grabbed objects [24] provide a broader semantic space for inferring intent. Most works on human intention recognition treat the interaction as a two-stage process, where the robot first predicts the human intention and then executes the task; our work instead aims to react to human signals in real time, enabling downstream tasks to be interrupted at any time. Humanoid robots also present a unique opportunity to learn natural motion from retargeted human motion data [15], which can be collected from motion capture systems or network videos.
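
The core architectural idea, decoupling a real-time intention planner from interruptible low-level skills, can be sketched as follows. This is a minimal illustration with hypothetical class and method names, not the authors' implementation: intention prediction runs every control tick, so a new human signal can preempt the running skill immediately.

```python
# Minimal sketch of RHINO-style two-level decoupling. All names here
# (ReactivePlanner, intention_model, skills, .name, .act) are hypothetical.

class ReactivePlanner:
    def __init__(self, intention_model, skills):
        self.intention_model = intention_model  # maps human signals -> intention label
        self.skills = skills                    # intention label -> low-level skill policy
        self.active = None                      # currently running skill, if any

    def step(self, human_signals, robot_state):
        # High level: re-predict human intention every control tick (real time).
        intention = self.intention_model(human_signals)
        if self.active is None or intention != self.active.name:
            # A new human command arrived mid-task: interrupt and switch skills
            # instead of waiting for the current task to finish.
            self.active = self.skills[intention]
        # Low level: the selected skill produces the motor command.
        return self.active.act(robot_state)
```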


Looking Ahead to Avoid Being Late: Solving Hard-Constrained Traveling Salesman Problem

arXiv.org Artificial Intelligence

Many real-world problems can be formulated as a constrained Traveling Salesman Problem (TSP). However, the constraints are often complex and numerous, making such TSPs challenging to solve. As the number of complicated constraints grows, traditional heuristic algorithms spend increasing time avoiding infeasible outcomes. Learning-based methods provide an alternative that handles constraints in a soft manner and supports GPU acceleration to generate solutions quickly. Nevertheless, the soft manner makes it difficult for learning algorithms to solve hard-constrained problems, and the conflict between legality and optimality may substantially degrade solution quality. To overcome this problem and handle hard constraints effectively, we propose MUSLA, a novel learning-based method that uses looking-ahead information as a feature to improve the legality of TSP with Time Windows (TSPTW) solutions. In addition, we construct TSPTW datasets with hard constraints to accurately evaluate and benchmark the statistical performance of various approaches, which can serve the community in future research. In comprehensive experiments on diverse datasets, MUSLA outperforms existing baselines and shows potential to generalize.
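
The looking-ahead idea can be illustrated with a small feature extractor for TSPTW. This is a simplified sketch under assumed data structures (a travel-time matrix and (earliest, latest) windows), not the paper's MUSLA implementation; the point is that one-step look-ahead slack warns the policy about choices that are legal now but make the remaining tour infeasible.

```python
import numpy as np

def lookahead_features(current_time, current_city, candidates, travel_time, windows):
    """Slack features for TSPTW candidates, a sketch of the look-ahead idea.

    windows[j] = (earliest_j, latest_j): arriving before earliest_j means
    waiting; arriving after latest_j makes the tour infeasible.
    """
    feats = []
    for j in candidates:
        arrival = current_time + travel_time[current_city][j]
        start = max(arrival, windows[j][0])   # wait if we arrive too early
        slack = windows[j][1] - arrival       # negative => j is already too late
        # One-step look-ahead: tightest slack among j's possible successors.
        # A choice can be legal now yet doom every remaining continuation.
        future_slack = min(
            windows[k][1] - (start + travel_time[j][k])
            for k in candidates if k != j
        ) if len(candidates) > 1 else 0.0
        feats.append((slack, future_slack))
    return np.array(feats, dtype=np.float32)
```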


Offline Fictitious Self-Play for Competitive Games

arXiv.org Artificial Intelligence

Offline Reinforcement Learning (RL) has received significant interest due to its ability to improve policies from previously collected datasets without online interaction. Despite its success in the single-agent setting, offline multi-agent RL remains a challenge, especially in competitive games. First, without online access to the game, it is impossible to interact with opponents and thus to run self-play, the major learning paradigm for competitive games. Second, real-world datasets cannot cover the entire state and action space of the game, which creates barriers to identifying a Nash equilibrium (NE). To address these issues, this paper introduces Off-FSP, the first practical model-free offline RL algorithm for competitive games. We start by simulating interactions with various opponents by adjusting the weights of the fixed dataset with importance sampling. This technique allows us to learn best responses to different opponents and employ the Offline Self-Play learning framework. In this framework, we further implement Fictitious Self-Play (FSP) to approximate an NE. On partially covered real-world datasets, our method shows the potential to approach an NE while incorporating any single-agent offline RL method. Experimental results in Leduc Hold'em Poker show that our method significantly improves performance compared with state-of-the-art baselines.
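
The dataset-reweighting step can be sketched in a few lines. Field names and the exact estimator below are illustrative assumptions, not the authors' code; the idea is that importance ratios between a target opponent policy and the behavior policy turn one fixed dataset into approximate samples from play against that opponent, so any single-agent offline RL method can then learn a best response. Iterating this over a growing mixture of past policies gives the fictitious-self-play loop.

```python
import numpy as np

def reweight_dataset(transitions, opponent_policy):
    """Re-weight a fixed dataset so it resembles play against a chosen opponent.

    Each transition is assumed to store the opponent's state, action, and the
    behavior policy's probability of that action at collection time.
    """
    weights = []
    for t in transitions:
        # Importance ratio: target opponent vs. the behavior policy that
        # actually generated the data.
        w = opponent_policy(t["opp_state"], t["opp_action"]) / t["behavior_prob"]
        weights.append(w)
    weights = np.asarray(weights)
    # Normalized sample weights, ready to plug into an offline RL loss.
    return weights / weights.sum()
```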


Quantifying Zero-shot Coordination Capability with Behavior Preferring Partners

arXiv.org Artificial Intelligence

Zero-shot coordination (ZSC) is a new challenge focusing on generalizing learned coordination skills to unseen partners. Existing methods train the ego agent with partners from pre-trained or evolving populations. The agent's ZSC capability is typically evaluated with a few evaluation partners, including humans and agents, and reported as mean returns. Current evaluation methods still fall short in constructing diverse evaluation partners and in comprehensively measuring ZSC capability. In this paper, we aim to create a reliable, comprehensive, and efficient evaluation method for ZSC capability. We formally define the ideal 'diversity-complete' evaluation partners and propose best response (BR) diversity, the population diversity of the BRs to the partners, to approximate the ideal evaluation partners. We propose an evaluation workflow that includes 'diversity-complete' evaluation partner construction and a multidimensional metric, Best Response Proximity (BR-Prox). We re-evaluate strong ZSC methods in the Overcooked environment using the proposed evaluation workflow. Surprisingly, the results in some of the most commonly used layouts fail to distinguish the performance of different ZSC methods. Moreover, the evaluated ZSC methods lack the ability to produce sufficiently diverse and high-performing training partners. Our proposed evaluation workflow calls for a change in how we efficiently evaluate ZSC methods, as a supplement to human evaluation. Zero-shot coordination was introduced as the challenge of training an ego agent to coordinate with unseen partners in cooperative AI (Hu et al., 2020).
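
A sketch of the metric's spirit: for each evaluation partner, compare the ego agent's return with that partner against the return achieved by the partner's approximate best response. The exact normalization and aggregation in the paper may differ, and the function and dictionary names here are illustrative.

```python
def br_prox(ego_return, br_return, eps=1e-8):
    """How close the ego agent comes to the partner's (approximate) best
    response: 1.0 means the ego coordinates as well as the BR does."""
    return ego_return / (br_return + eps)

def zsc_profile(ego, partners, best_responses, evaluate):
    # evaluate(a, b) -> mean episode return when agents a and b play together.
    # Reporting the full per-partner profile (not just a mean) is the point:
    # it exposes which behavior types the ego agent fails to coordinate with.
    return {
        name: br_prox(evaluate(ego, p), evaluate(best_responses[name], p))
        for name, p in partners.items()
    }
```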


On Realization of Intelligent Decision-Making in the Real World: A Foundation Decision Model Perspective

arXiv.org Artificial Intelligence

The pervasive uncertainty and dynamic nature of real-world environments present significant challenges for the widespread implementation of machine-driven Intelligent Decision-Making (IDM) systems. Consequently, IDM should possess the ability to continuously acquire new skills and effectively generalize across a broad range of applications. The advancement of Artificial General Intelligence (AGI) that transcends task and application boundaries is critical for enhancing IDM. Recent studies have extensively investigated the Transformer neural architecture as a foundational model for various tasks, including computer vision, natural language processing, and reinforcement learning. We propose that a Foundation Decision Model (FDM) can be developed by formulating diverse decision-making tasks as sequence decoding tasks using the Transformer architecture, offering a promising solution for expanding IDM applications in complex real-world situations. In this paper, we discuss the efficiency and generalization improvements offered by a foundation decision model for IDM and explore its potential applications in multi-agent game AI, production scheduling, and robotics tasks. Lastly, we present a case study demonstrating our FDM implementation, DigitalBrain (DB1), with 1.3 billion parameters, which achieves human-level performance in 870 tasks, such as text generation, image captioning, video game playing, robotic control, and traveling salesman problems. As a foundation decision model, DB1 represents an initial step toward more autonomous and efficient real-world IDM applications.
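
The sequence-decoding formulation reduces acting to next-token prediction. Below is a minimal sketch assuming a causal Transformer that returns per-position vocabulary logits and a contiguous range of action tokens; both are assumptions for illustration, not the DB1 codebase.

```python
import torch

def decode_action(model, token_history, action_token_range, device="cpu"):
    """Decode the next action as a token, the FDM view of decision-making.

    `model` is assumed to be a causal LM returning logits of shape
    [batch, seq_len, vocab]; `token_history` is the tokenized interaction
    history (observations, past actions, returns, etc.).
    """
    x = torch.tensor([token_history], dtype=torch.long, device=device)
    with torch.no_grad():
        logits = model(x)[0, -1]              # next-token distribution
    lo, hi = action_token_range              # restrict to valid action tokens
    masked = torch.full_like(logits, float("-inf"))
    masked[lo:hi] = logits[lo:hi]
    return int(torch.argmax(masked).item())  # greedy decoding of the action
```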


Honor of Kings Arena: an Environment for Generalization in Competitive Reinforcement Learning

arXiv.org Artificial Intelligence

This paper introduces Honor of Kings Arena, a reinforcement learning (RL) environment based on Honor of Kings, one of the world's most popular games at present. Compared to other environments studied in most previous work, ours presents new generalization challenges for competitive reinforcement learning. It is a multi-agent problem with one agent competing against its opponent, and it requires generalization ability because there are diverse targets to control and diverse opponents to compete against. We describe the observation, action, and reward specifications for the Honor of Kings domain and provide an open-source Python-based interface for communicating with the game engine. We provide twenty target heroes with a variety of tasks in Honor of Kings Arena and present initial baseline results for RL-based methods with feasible computing resources. Finally, we showcase the generalization challenges imposed by Honor of Kings Arena and possible remedies to them. All of the software, including the environment class, is publicly available at https://github.com/tencent-ailab/hok_env . The documentation is available at https://aiarena.tencent.com/hok/doc/ .
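
Since the environment ships with a Python interface, a rollout has the familiar shape below. This is a gym-style sketch only; the actual hok_env classes and method signatures may differ, so consult the linked documentation for the real interface.

```python
def rollout(env, policy):
    """Run one episode against the environment.

    Assumes a gym-style wrapper around the game engine (reset/step), which is
    an illustrative assumption rather than hok_env's actual API. `policy`
    maps an observation to an action drawn from the action specification.
    """
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = policy(obs)                        # agent picks an action
        obs, reward, done, info = env.step(action)  # engine advances the game
        total_reward += reward
    return total_reward
```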


Efficient Policy Space Response Oracles

arXiv.org Artificial Intelligence

The Policy Space Response Oracles (PSRO) method provides a general solution to Nash equilibrium in two-player zero-sum games but suffers from two problems: (1) computational inefficiency, due to constantly evaluating the current population by simulation; and (2) exploration inefficiency, due to learning best responses against a fixed meta-strategy at each iteration. In this work, we propose Efficient PSRO (EPSRO), which largely improves the efficiency of both steps. Central to our development is the newly introduced subroutine of minimax optimization on unrestricted-restricted (URR) games. By solving a URR game at each step, one can evaluate the current game and compute the best response in one forward pass, with no need for game simulations. Theoretically, we prove that the solution procedure of EPSRO offers a monotonic improvement in exploitability. Moreover, EPSRO is parallelizable, which allows for efficient exploration in the policy space and induces behavioral diversity. We test EPSRO on three classes of games and report a 50x speedup in wall time, 10x data efficiency, and exploitability similar to existing PSRO methods on Kuhn and Leduc Poker.
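
The URR subroutine can be sketched as alternating updates: an unrestricted oracle policy is trained against the current mixture, while a no-regret step moves the restricted meta-strategy toward minimax. The callables and the specific multiplicative-weights update below are simplifying assumptions, not the authors' exact procedure.

```python
import numpy as np

def urr_minimax(payoff_vs_population, train_oracle_step, num_steps, n_pop, lr=0.1):
    """Sketch of minimax optimization on an unrestricted-restricted game.

    One side is restricted to a meta-strategy `sigma` over a fixed population
    of n_pop policies; the other side is an unrestricted oracle trained
    against that mixture. payoff_vs_population(sigma) -> vector u where u[i]
    is the oracle's current payoff against population policy i;
    train_oracle_step(sigma) improves the oracle. Both are caller-supplied.
    """
    sigma = np.ones(n_pop) / n_pop
    for _ in range(num_steps):
        train_oracle_step(sigma)          # max step: oracle exploits the mixture
        u = payoff_vs_population(sigma)   # oracle payoff vs. each population member
        # Min step: multiplicative weights shift mass toward population
        # policies the oracle exploits least, approaching a minimax solution.
        sigma *= np.exp(-lr * u)
        sigma /= sigma.sum()
    return sigma
```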


Generating Strategic Dialogue for Negotiation with Theory of Mind

arXiv.org Artificial Intelligence

We propose a framework that integrates the concept of Theory of Mind (ToM) into generating utterances for task-oriented dialogue. Our approach models and infers the personality types of opponents, predicts their responses, and uses this information to adapt the agent's high-level strategy in negotiation tasks. We introduce a probabilistic formulation of first-order theory of mind and test our approach on the CraigslistBargain dataset. Experiments show that our method using ToM inference achieves a 40% higher dialogue agreement rate than baselines on a mixed population of opponents. We also show that our model displays diverse negotiation behavior with different types of opponents.
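
The first-order ToM formulation amounts to Bayesian belief tracking over opponent personality types plus expected-utility strategy selection. Below is a minimal sketch under assumed interfaces (a learned response likelihood and a utility estimate); function names are illustrative, not the paper's exact model.

```python
import numpy as np

def update_type_belief(belief, utterance, response, likelihood):
    """Bayes update of the belief over opponent personality types z, given the
    opponent's observed response to our utterance. `likelihood(r, u, z)` is
    an assumed learned model of P(response | utterance, type z)."""
    posterior = np.array([
        belief[z] * likelihood(response, utterance, z)
        for z in range(len(belief))
    ])
    return posterior / posterior.sum()

def choose_strategy(belief, strategies, expected_utility):
    # Pick the high-level negotiation strategy with the best expected utility
    # under the current belief about the opponent's type.
    return max(
        strategies,
        key=lambda s: sum(belief[z] * expected_utility(s, z)
                          for z in range(len(belief)))
    )
```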