
Collaborating Authors

 Li, Anqi


HAMSTER: Hierarchical Action Models For Open-World Robot Manipulation

arXiv.org Artificial Intelligence

Large foundation models have shown strong open-world generalization to complex problems in vision and language, but similar levels of generalization have yet to be achieved in robotics. One fundamental challenge is the lack of robotic data, which is typically obtained through expensive on-robot operation. A promising remedy is to leverage cheaper, off-domain data such as action-free videos, hand-drawn sketches, or simulation data. In this work, we posit that hierarchical vision-language-action (VLA) models can be more effective in utilizing off-domain data than standard monolithic VLA models that directly finetune vision-language models (VLMs) to predict actions. In particular, we study a class of hierarchical VLA models, where the high-level VLM is finetuned to produce a coarse 2D path indicating the desired robot end-effector trajectory given an RGB image and a task description. The intermediate 2D path prediction then serves as guidance for the low-level, 3D-aware control policy capable of precise manipulation. Doing so relieves the high-level VLM of fine-grained action prediction while reducing the low-level policy's burden of complex task-level reasoning. We show that, with the hierarchical design, the high-level VLM can transfer across significant domain gaps between the off-domain finetuning data and real-robot testing scenarios, including differences in embodiment, dynamics, visual appearance, and task semantics. In the real-robot experiments, we observe an average 20% improvement in success rate over OpenVLA across seven different axes of generalization, representing a 50% relative gain. Visual results are provided at: https://hamster-robot.github.io/
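
To make the interface between the two levels concrete, here is a minimal Python sketch of the hierarchical split the abstract describes. The class names, the pixel-waypoint path format, and the placeholder bodies are illustrative assumptions, not HAMSTER's actual implementation.

from typing import List, Tuple

import numpy as np

Waypoint = Tuple[int, int]  # (u, v) pixel coordinates in the RGB image

class HighLevelVLM:
    """Stands in for a VLM finetuned to sketch a coarse 2D end-effector path."""
    def predict_path(self, rgb: np.ndarray, task: str, n_points: int = 8) -> List[Waypoint]:
        # Placeholder: a real model would decode waypoints from (image, text).
        h, w = rgb.shape[:2]
        return [(int(w * (i + 1) / (n_points + 1)), h // 2) for i in range(n_points)]

class LowLevelPolicy:
    """Stands in for a 3D-aware controller conditioned on the predicted path."""
    def act(self, obs: dict, path: List[Waypoint]) -> np.ndarray:
        # Placeholder: a real policy would fuse depth/proprioception with the path guidance.
        return np.zeros(7)  # e.g., a 6-DoF end-effector delta plus a gripper command

rgb = np.zeros((480, 640, 3), dtype=np.uint8)
vlm, policy = HighLevelVLM(), LowLevelPolicy()
path = vlm.predict_path(rgb, "put the mug on the shelf")
action = policy.act({"rgb": rgb}, path)
print(len(path), action.shape)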


Cosmos World Foundation Model Platform for Physical AI

arXiv.org Artificial Intelligence

Physical AI needs to be trained digitally first. It needs a digital twin of itself, the policy model, and a digital twin of the world, the world model. In this paper, we present the Cosmos World Foundation Model Platform to help developers build customized world models for their Physical AI setups. We position a world foundation model as a general-purpose world model that can be fine-tuned into customized world models for downstream applications. Our platform covers a video curation pipeline, pre-trained world foundation models, examples of post-training of pre-trained world foundation models, and video tokenizers. To help Physical AI builders solve the most critical problems of our society, we make our platform open-source and our models open-weight with permissive licenses available via https://github.com/NVIDIA/Cosmos.


Understanding the Therapeutic Relationship between Counselors and Clients in Online Text-based Counseling using LLMs

arXiv.org Artificial Intelligence

Robust therapeutic relationships between counselors and clients are fundamental to counseling effectiveness. The assessment of therapeutic alliance is well-established in traditional face-to-face therapy but may not directly translate to text-based settings. With millions of individuals seeking support through online text-based counseling, understanding the relationship in such contexts is crucial. In this paper, we present an automatic approach using large language models (LLMs) to understand the development of therapeutic alliance in text-based counseling. We adapt a theoretically grounded framework specifically to the context of online text-based counseling and develop comprehensive guidelines for characterizing the alliance. We collect a comprehensive counseling dataset and conduct multiple expert evaluations on a subset based on this framework. Our LLM-based approach, combined with guidelines and simultaneous extraction of supportive evidence underlying its predictions, demonstrates effectiveness in identifying the therapeutic alliance. Through further LLM-based evaluations on additional conversations, our findings underscore the challenges counselors face in cultivating strong online relationships with clients. Furthermore, we demonstrate the potential of LLM-based feedback mechanisms to enhance counselors' ability to build relationships, supported by a small-scale proof-of-concept.
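
As a rough illustration of how such an LLM-based assessment can be wired up, the sketch below prompts a stubbed model with scoring guidelines and asks for dimension ratings plus quoted supporting evidence. The call_llm function, the dimension names, and the JSON schema are hypothetical stand-ins, not the paper's framework, prompts, or guidelines.

import json

GUIDELINES = """Rate the counselor-client alliance on each dimension (1-5) and quote
the client or counselor messages that support each rating.
Dimensions: bond, agreement_on_goals, agreement_on_tasks."""

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for any chat-completion API; returns a canned reply here.
    return json.dumps({"bond": 4, "agreement_on_goals": 3, "agreement_on_tasks": 3,
                       "evidence": {"bond": ["Client: 'I feel like you really get me.'"]}})

def assess_alliance(transcript: str) -> dict:
    prompt = f"{GUIDELINES}\n\nConversation:\n{transcript}\n\nReturn a JSON object."
    return json.loads(call_llm(prompt))

print(assess_alliance("Counselor: How have things felt since we last spoke?\nClient: ..."))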


SGRU: A High-Performance Structured Gated Recurrent Unit for Traffic Flow Prediction

arXiv.org Artificial Intelligence

The scenario of multivariate time series occurs in various domains of life. Researchers utilize historical weather data from different regions to predict future rainfall intensity [1]. Taxi data is used to help cities predict travel resources and reduce traffic congestion [2]. Exception monitoring is performed on various states in manufacturing systems and internet services [3]. Since the introduction of Graph Convolutional Networks (GCN) in 2017 [4], this method has been widely used in the field of Multivariate Time Series (MTS) for spatial semi-supervised and self-supervised learning. By interleaving one-dimensional convolution with gated linear units (GLU) and graph convolution, and appending an output layer after this "sandwich" structure [5], accurate prediction of traffic flow speed can be achieved. By incorporating attention mechanisms into GCN, it is possible to distinguish the importance of different nodes and utilize Gated Recurrent Units (GRU) for frequency-domain feature extraction on long time series, leading to a substantial improvement in prediction accuracy [7]. The aforementioned work demonstrates the effectiveness of GCN in learning spatial features and its strong adaptability to non-Euclidean structured data.

A. Problem 1: Dilated convolutions break adjacent time steps

Focusing on the direction of traffic flow prediction (Figures 1 and 2), recent Temporal Convolutional Networks [8] and Graph WaveNet (GWNet) [9] have adopted the mechanism of GCN. However, in the frequency domain, they use dilated convolutions.
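
Problem 1 above can be illustrated with a few lines of NumPy: with dilation greater than one, each output of a 1D convolution only mixes samples that sit a full dilation apart, so strictly adjacent time steps never meet inside a single kernel window. This is a toy sketch, not the SGRU or Graph WaveNet implementation.

import numpy as np

def dilated_conv1d(x: np.ndarray, w: np.ndarray, dilation: int) -> np.ndarray:
    k = len(w)
    span = (k - 1) * dilation + 1          # receptive field of one output
    out = np.empty(len(x) - span + 1)
    for t in range(len(out)):
        taps = x[t : t + span : dilation]  # samples used: t, t+d, t+2d, ...
        out[t] = taps @ w
    return out

x = np.arange(12, dtype=float)             # a toy time series
w = np.ones(3) / 3                         # simple averaging kernel
print(dilated_conv1d(x, w, dilation=1))    # uses t, t+1, t+2 (adjacent steps)
print(dilated_conv1d(x, w, dilation=2))    # uses t, t+2, t+4 (skips t+1 and t+3)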


Adversarially-Robust Inference on Trees via Belief Propagation

arXiv.org Machine Learning

We introduce and study the problem of posterior inference on tree-structured graphical models in the presence of a malicious adversary who can corrupt some observed nodes. In the well-studied broadcasting on trees model, corresponding to the ferromagnetic Ising model on a $d$-regular tree with zero external field, when a natural signal-to-noise ratio exceeds one (the celebrated Kesten-Stigum threshold), the posterior distribution of the root given the leaves is bounded away from $\mathrm{Ber}(1/2)$, and carries nontrivial information about the sign of the root. This posterior distribution can be computed exactly via dynamic programming, also known as belief propagation. We first confirm a folklore belief that a malicious adversary who can corrupt an inverse-polynomial fraction of the leaves of their choosing makes this inference impossible. Our main result is that accurate posterior inference about the root vertex given the leaves is possible when the adversary is constrained to make corruptions at a $\rho$-fraction of randomly-chosen leaf vertices, so long as the signal-to-noise ratio exceeds $O(\log d)$ and $\rho \leq c \varepsilon$ for some universal $c > 0$. Since inference becomes information-theoretically impossible when $\rho \gg \varepsilon$, this amounts to an information-theoretically optimal fraction of corruptions, up to a constant multiplicative factor. Furthermore, we show that the canonical belief propagation algorithm performs this inference.
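
The exact dynamic-programming computation mentioned above is short enough to sketch. The toy below broadcasts a +/-1 spin down a complete d-ary tree (each edge flips the parent spin with probability eps) and runs the standard two-message belief-propagation recursion upward to recover the root posterior from the leaves. The d-ary tree shape, the parameter values, and the absence of any adversarial corruption are simplifying assumptions for illustration.

import random

def subtree_likelihood(depth, d, eps, leaves, pos):
    """Return (L(+1), L(-1)): likelihoods of this subtree's leaves given its root spin."""
    if depth == 0:                            # a leaf: its spin is observed exactly
        s = leaves[pos[0]]
        pos[0] += 1
        return (1.0, 0.0) if s == +1 else (0.0, 1.0)
    lp = lm = 1.0
    for _ in range(d):
        cp, cm = subtree_likelihood(depth - 1, d, eps, leaves, pos)
        lp *= (1 - eps) * cp + eps * cm       # child message seen from a +1 parent
        lm *= (1 - eps) * cm + eps * cp       # child message seen from a -1 parent
    z = lp + lm                               # normalize; only the ratio matters
    return lp / z, lm / z

def broadcast(depth, d, eps, spin):
    """Sample the leaf spins produced by broadcasting the given spin down the tree."""
    if depth == 0:
        return [spin]
    out = []
    for _ in range(d):
        child = spin if random.random() > eps else -spin
        out += broadcast(depth - 1, d, eps, child)
    return out

random.seed(0)
d, depth, eps = 3, 6, 0.15                    # d * (1 - 2*eps)**2 = 1.47 > 1: above Kesten-Stigum
root = random.choice([-1, +1])
leaves = broadcast(depth, d, eps, root)
lp, lm = subtree_likelihood(depth, d, eps, leaves, [0])
print(f"true root: {root:+d}   P(root = +1 | leaves) = {lp / (lp + lm):.3f}")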


General Method for Solving Four Types of SAT Problems

arXiv.org Artificial Intelligence

Existing methods provide varying algorithms for different types of Boolean satisfiability (SAT) problems but lack a general solution framework. Accordingly, this study proposes DCSAT, a unified framework based on integer programming and a reinforcement learning (RL) algorithm, to solve different types of SAT problems such as MaxSAT, Weighted MaxSAT, PMS, and WPMS. Specifically, we first construct a consolidated integer programming representation for the four types of SAT problems by adjusting objective function coefficients. Second, we construct an appropriate reinforcement learning model based on the 0-1 integer programming formulation of SAT problems. Exploiting the binary tree search structure, we apply the Monte Carlo tree search (MCTS) method to SAT problems. Finally, we prove that this method can find all optimal Boolean assignments based on the Wiener-Khinchin law of large numbers. We experimentally verify that this paradigm can prune the unnecessary search space to find the optimal Boolean assignments for the problem. Furthermore, the proposed method can provide diverse labels for supervised learning methods for SAT problems.
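
A tiny sketch of the kind of consolidated encoding the abstract describes: hard clauses are kept as constraints, soft clauses contribute their weights to the objective, and choosing the clause sets and weight vector differently recovers MaxSAT, Weighted MaxSAT, PMS, or WPMS. Brute-force enumeration stands in here for the paper's MCTS-based search, and the instance and weights are made up.

from itertools import product

# A clause is a list of literals; literal +i means x_i, -i means NOT x_i.
hard = [[1, 2]]                       # must hold (PMS / WPMS); empty list => plain (Weighted) MaxSAT
soft = [[-1], [-2], [2, 3]]
weights = [3, 1, 2]                   # all-ones weights => the unweighted variants

def satisfied(clause, assign):
    return any(assign[abs(l)] == (l > 0) for l in clause)

def solve(n_vars):
    best = (-1, None)
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if not all(satisfied(c, assign) for c in hard):
            continue                  # hard clauses act as constraints, not objective terms
        score = sum(w for c, w in zip(soft, weights) if satisfied(c, assign))
        if score > best[0]:
            best = (score, assign)
    return best

print(solve(3))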


PsyChat: A Client-Centric Dialogue System for Mental Health Support

arXiv.org Artificial Intelligence

Dialogue systems are increasingly integrated into mental health support to help clients facilitate exploration, gain insight, take action, and ultimately heal themselves. For a dialogue system to be practical and user-friendly, it should be client-centric, focusing on the client's behaviors. However, existing dialogue systems publicly available for mental health support often concentrate solely on the counselor's strategies rather than the behaviors expressed by clients. This can lead to the implementation of unreasonable or inappropriate counseling strategies and corresponding responses from the dialogue system. To address this issue, we propose PsyChat, a client-centric dialogue system that provides psychological support through online chat. The client-centric dialogue system comprises five modules: client behavior recognition, counselor strategy selection, input packer, a fine-tuned response generator, and response selection. Both automatic and human evaluations demonstrate the effectiveness and practicality of our proposed dialogue system for real-life mental health support. Furthermore, we employ our proposed dialogue system to simulate a real-world client-virtual-counselor interaction scenario. The system is capable of predicting the client's behaviors, selecting appropriate counselor strategies, and generating accurate and suitable responses, as demonstrated in the scenario.
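
A schematic of how the five modules could fit together, with every body stubbed out. The function signatures, behavior labels, and strategy names are illustrative assumptions, not PsyChat's actual models or taxonomy.

def recognize_client_behavior(client_msg: str) -> str:
    return "expressing_distress"                  # e.g., a label from a behavior classifier

def select_counselor_strategy(behavior: str, history: list) -> str:
    return "reflection_of_feelings"               # strategy chosen given the client's behavior

def pack_input(client_msg: str, behavior: str, strategy: str, history: list) -> str:
    return f"[behavior={behavior}][strategy={strategy}] {client_msg}"

def generate_responses(packed: str, n: int = 3) -> list:
    return [f"candidate response {i} for: {packed}" for i in range(n)]  # stands in for the fine-tuned generator

def select_response(candidates: list) -> str:
    return candidates[0]                          # e.g., rank candidates with a learned scorer

history = []
msg = "I haven't been able to sleep and everything feels overwhelming."
behavior = recognize_client_behavior(msg)
strategy = select_counselor_strategy(behavior, history)
reply = select_response(generate_responses(pack_input(msg, behavior, strategy, history)))
print(reply)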


Model Predictive Control for Aggressive Driving Over Uneven Terrain

arXiv.org Artificial Intelligence

Terrain traversability in off-road autonomy has traditionally relied on semantic classification or resource-intensive dynamics models to capture vehicle-terrain interactions. However, our experiences in the development of a high-speed off-road platform have revealed several critical challenges that are not adequately addressed by current methods at our operating speeds of 7-10 m/s. This study focuses particularly on uneven terrain geometries such as hills, banks, and ditches. These common high-risk geometries are capable of disabling the vehicle and causing severe passenger injuries if poorly traversed. We introduce a physics-based framework for identifying traversability constraints on terrain dynamics. Using this framework, we then derive two fundamental constraints, with a primary focus on mitigating rollover and ditch-crossing failures. In addition, we present the design of our planning and control system, which uses Model Predictive Control (MPC) and a low-level controller to enable the fast and efficient computation of these constraints to meet the demands of our aggressive driving. Through real-world experimentation and traversal of hills and ditches, our approach is tested and benchmarked against a human expert. These results demonstrate that our approach captures fundamental elements of safe and aggressive control on these terrain features.
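
A deliberately simplified, sampling-style sketch of the constrained planning idea: candidate steering commands are rolled out with a kinematic bicycle model and discarded if the predicted lateral acceleration exceeds a static rollover proxy (g * track_width / (2 * cg_height)); the best surviving rollout is kept. The vehicle parameters, model, constraint form, and sampling scheme are illustrative assumptions, not the paper's MPC formulation or rollover constraint.

import math
import random

G, TRACK, CG_H, WHEELBASE = 9.81, 1.6, 0.8, 2.9       # rough vehicle parameters (assumed)
DT, HORIZON, V = 0.1, 20, 10.0                         # 10 m/s over a 2 s horizon
A_LAT_MAX = G * TRACK / (2 * CG_H)                     # static rollover proxy, ~9.8 m/s^2

def rollout(steer):
    """Roll out a kinematic bicycle at constant speed and steering; reject rollover risk."""
    x = y = yaw = 0.0
    for _ in range(HORIZON):
        a_lat = V * V * math.tan(steer) / WHEELBASE    # steady-state lateral acceleration
        if abs(a_lat) > A_LAT_MAX:
            return False, x                            # violates the rollover constraint
        yaw += V * math.tan(steer) / WHEELBASE * DT
        x += V * math.cos(yaw) * DT
        y += V * math.sin(yaw) * DT
    return True, x                                     # feasible; x = forward progress made

random.seed(0)
candidates = [random.uniform(-0.4, 0.4) for _ in range(200)]
feasible = [(x, s) for s in candidates for ok, x in [rollout(s)] if ok]
best_x, best_steer = max(feasible)
print(f"best feasible steering {best_steer:+.3f} rad, progress {best_x:.2f} m")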


PsyBench: a balanced and in-depth Psychological Chinese Evaluation Benchmark for Foundation Models

arXiv.org Artificial Intelligence

As Large Language Models (LLMs) become prevalent in various fields, there is an urgent need for improved NLP benchmarks that encompass all the necessary knowledge of individual disciplines. Many contemporary benchmarks for foundation models emphasize a broad range of subjects but often fall short of covering all the critical subjects and the professional knowledge they require. This shortfall leads to skewed results, given that LLMs exhibit varying performance across different subjects and knowledge areas. To address this issue, we present PsyBench, the first comprehensive Chinese evaluation suite that covers all the knowledge required for graduate entrance exams in psychology. PsyBench offers a deep evaluation of a model's strengths and weaknesses in psychology through multiple-choice questions. Our findings show significant differences in performance across different sections of a subject, highlighting the risk of skewed results when the knowledge in test sets is not balanced. Notably, only the ChatGPT model reaches an average accuracy above $70\%$, indicating that there is still plenty of room for improvement. We expect that PsyBench will help conduct thorough evaluations of base models' strengths and weaknesses and assist in practical applications in the field of psychology.
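
The skew the abstract warns about is easy to see with made-up numbers: when one section dominates the test set, a micro-averaged accuracy can look healthy while under-represented sections stay weak, whereas a per-section (macro) average exposes the gap. The section names and counts below are invented for illustration and are not PsyBench data.

sections = {            # (questions in test set, questions answered correctly)
    "developmental":  (400, 340),
    "statistics":     (50, 20),
    "psychometrics":  (50, 22),
}

micro = sum(c for _, c in sections.values()) / sum(n for n, _ in sections.values())
macro = sum(c / n for n, c in sections.values()) / len(sections)
print(f"micro-averaged accuracy: {micro:.2%}")   # ~76%, dominated by the big section
print(f"macro-averaged accuracy: {macro:.2%}")   # ~56%, reveals the weak sections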


Survival Instinct in Offline Reinforcement Learning

arXiv.org Artificial Intelligence

We present a novel observation about the behavior of offline reinforcement learning (RL) algorithms: on many benchmark datasets, offline RL can produce well-performing and safe policies even when trained with "wrong" reward labels, such as those that are zero everywhere or are negatives of the true rewards. This phenomenon cannot be easily explained by offline RL's return maximization objective. Moreover, it gives offline RL a degree of robustness that is uncharacteristic of its online RL counterparts, which are known to be sensitive to reward design. We demonstrate that this surprising robustness property is attributable to an interplay between the notion of pessimism in offline RL algorithms and certain implicit biases in common data collection practices. As we prove in this work, pessimism endows the agent with a "survival instinct", i.e., an incentive to stay within the data support in the long term, while the limited and biased data coverage further constrains the set of survival policies. Formally, given a reward class -- which may not even contain the true reward -- we identify conditions on the training data distribution that enable offline RL to learn a near-optimal and safe policy from any reward within the class. We argue that the survival instinct should be taken into account when interpreting results from existing offline RL benchmarks and when creating future ones. Our empirical and theoretical results suggest a new paradigm for RL, whereby an agent is nudged to learn a desirable behavior with imperfect reward but purposely biased data coverage.
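
A toy, tabular illustration of the mechanism the abstract describes: on a 5-state chain trained with all-zero reward labels, a fixed penalty on state-action pairs absent from the (right-moving) dataset is enough to make the greedy policy track the data support. The chain, the penalty form, and the numbers are illustrative assumptions, not the paper's construction.

import numpy as np

n_states, gamma, penalty = 5, 0.95, 1.0

# Deterministic chain: action 0 = stay, action 1 = move right (absorbing at the end).
def step(s, a):
    return min(s + a, n_states - 1)

# The offline dataset only ever moves right (the implicit bias of the collection policy).
in_data = np.zeros((n_states, 2), dtype=bool)
in_data[:, 1] = True

reward = np.zeros((n_states, 2))          # the "wrong" all-zero reward labels
pess = reward - penalty * (~in_data)      # subtract a penalty off the data support
Q = np.zeros((n_states, 2))
for _ in range(200):                      # pessimistic value iteration
    Q = pess + gamma * np.array([[Q[step(s, a)].max() for a in (0, 1)]
                                 for s in range(n_states)])

policy = Q.argmax(axis=1)
print("greedy actions per state:", policy)  # all 1s: the agent keeps to the covered behavior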