
Collaborating Authors

Chen, Zhenghan


Natural Humanoid Robot Locomotion with Generative Motion Prior

arXiv.org Artificial Intelligence

Natural and lifelike locomotion remains a fundamental challenge for humanoid robots intended to interact with human society. Previous methods either neglect motion naturalness or rely on unstable and ambiguous style rewards. In this paper, we propose a novel Generative Motion Prior (GMP) that provides fine-grained, motion-level supervision for natural humanoid robot locomotion. To leverage natural human motions, we first employ whole-body motion retargeting to transfer them effectively to the robot. We then train a generative model offline, based on a conditional variational auto-encoder, to predict future natural reference motions for the robot. During policy training, the generative motion prior serves as a frozen online motion generator, delivering precise and comprehensive supervision at the trajectory level, including joint angles and keypoint positions. It also significantly enhances training stability and improves interpretability by offering detailed, dense guidance throughout the learning process. Experimental results in both simulation and real-world environments demonstrate that our method achieves superior motion naturalness compared to existing approaches. The project page can be found at https://sites.google.com/view/humanoid-gmp
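As a rough illustration of the conditional variational auto-encoder at the core of this kind of motion prior, the sketch below shows how such a model might be trained offline on retargeted reference motions and then used as a frozen generator of future reference trajectories during policy training. The module names, dimensions, and loss weighting are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of a conditional VAE motion prior (assumed architecture).
import torch
import torch.nn as nn

class ConditionalMotionVAE(nn.Module):
    def __init__(self, cond_dim=64, motion_dim=32, horizon=10, latent_dim=16):
        super().__init__()
        self.horizon, self.motion_dim, self.latent_dim = horizon, motion_dim, latent_dim
        # Encoder: (robot state history, flattened future motion) -> latent Gaussian
        self.encoder = nn.Sequential(
            nn.Linear(cond_dim + horizon * motion_dim, 256), nn.ELU(),
            nn.Linear(256, 2 * latent_dim),
        )
        # Decoder: (robot state history, latent) -> flattened future reference motion
        self.decoder = nn.Sequential(
            nn.Linear(cond_dim + latent_dim, 256), nn.ELU(),
            nn.Linear(256, horizon * motion_dim),
        )

    def forward(self, cond, future_motion):
        h = self.encoder(torch.cat([cond, future_motion.flatten(1)], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.decoder(torch.cat([cond, z], dim=-1))
        recon = recon.view(-1, self.horizon, self.motion_dim)
        # ELBO: reconstruction term plus KL against the standard normal prior
        rec_loss = (recon - future_motion).pow(2).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return rec_loss + 1e-3 * kl                               # KL weight is an assumption

    @torch.no_grad()
    def generate(self, cond):
        # Frozen online use: sample from the prior and decode a reference
        # trajectory that can supervise joint angles / keypoints during RL.
        z = torch.randn(cond.shape[0], self.latent_dim, device=cond.device)
        out = self.decoder(torch.cat([cond, z], dim=-1))
        return out.view(-1, self.horizon, self.motion_dim)

# Example: condition on a 64-dim state history, supervise a 10-step, 32-dim window.
vae = ConditionalMotionVAE()
loss = vae(torch.randn(4, 64), torch.randn(4, 10, 32))
ref = vae.generate(torch.randn(4, 64))    # (4, 10, 32) reference trajectory
```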


Diffusion-Driven Generative Framework for Molecular Conformation Prediction

arXiv.org Artificial Intelligence

The task of deducing three-dimensional molecular configurations from their two-dimensional graph representations holds paramount importance in the fields of computational chemistry and pharmaceutical development. The rapid advancement of machine learning, particularly within the domain of deep generative networks, has revolutionized the precision of predictive modeling in this context. Traditional approaches often adopt a two-step strategy: initially estimating interatomic distances and subsequently refining the spatial molecular structure by solving a distance geometry problem. However, this sequential approach occasionally falls short in accurately capturing the intricacies of local atomic arrangements, thereby compromising the fidelity of the resulting structural models. Addressing these limitations, this research introduces a cutting-edge generative framework named \method{}. This framework is grounded in the principles of diffusion observed in classical non-equilibrium thermodynamics. \method{} views atoms as discrete entities and excels in guiding the reversal of diffusion, transforming a distribution of stochastic noise back into coherent molecular structures through a process akin to a Markov chain. This transformation commences with the initial representation of a molecular graph in an abstract latent space, culminating in the realization of three-dimensional structures via a sophisticated bilevel optimization scheme meticulously tailored to meet the specific requirements of the task. One of the formidable challenges in this modeling endeavor involves preserving roto-translational invariance to ensure that the generated molecular conformations adhere to the laws of physics. Extensive experimental evaluations confirm the efficacy of the proposed \method{} in comparison to state-of-the-art methods.
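To make the sampling idea concrete, the sketch below shows a generic DDPM-style reverse chain over atom coordinates in which the center of mass is removed at every step, one common way to keep generated conformations translation-invariant. The noise schedule, the `score_net` interface, and the omission of the paper's bilevel optimization are all simplifying assumptions rather than the proposed method.

```python
# Hedged sketch of reverse diffusion over 3D atom coordinates.
import torch

def remove_mean(x):
    """Project coordinates onto the zero-center-of-mass subspace."""
    return x - x.mean(dim=-2, keepdim=True)

@torch.no_grad()
def reverse_diffusion(score_net, graph_cond, num_atoms, T=1000, betas=None):
    betas = betas if betas is not None else torch.linspace(1e-4, 2e-2, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = remove_mean(torch.randn(num_atoms, 3))          # start from pure noise
    for t in reversed(range(T)):                         # Markov chain of denoising steps
        eps_hat = score_net(x, graph_cond, t)            # predicted noise (assumed interface)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / torch.sqrt(alphas[t])
        noise = remove_mean(torch.randn_like(x)) if t > 0 else 0.0
        x = remove_mean(mean) + torch.sqrt(betas[t]) * noise
    return x                                             # generated 3D conformation

# Example with a dummy noise predictor standing in for a graph-conditioned model.
dummy_net = lambda x, cond, t: torch.zeros_like(x)
coords = reverse_diffusion(dummy_net, graph_cond=None, num_atoms=12)
```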


Domain Adaptive Graph Classification

arXiv.org Artificial Intelligence

Despite the remarkable accomplishments of graph neural networks (GNNs), they typically rely on task-specific labels, which can be challenging to acquire. Existing work addresses this issue through the lens of unsupervised domain adaptation, wherein labeled source graphs are utilized to enhance the learning process for target data. However, simultaneously exploring graph topology and reducing domain disparities remains a substantial hurdle. In this paper, we introduce Dual Adversarial Graph Representation Learning (DAGRL), which explores the graph topology from dual branches and mitigates domain discrepancies via dual adversarial learning. Our method encompasses a dual-pronged structure, consisting of a graph convolutional network branch and a graph kernel branch, which enables us to capture graph semantics from both implicit and explicit perspectives. Moreover, our approach incorporates adaptive perturbations into the dual branches, which align the source and target distributions to address domain discrepancies. Extensive experiments on a wide range of graph classification datasets demonstrate the effectiveness of our proposed method.
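For intuition about the dual-branch and adversarial ingredients, the sketch below pairs a GCN-style aggregation branch with a simple feature-statistic branch and aligns domains through a gradient-reversal discriminator. The layer sizes, pooling, and the stand-in for the graph kernel branch are assumptions for illustration, not the DAGRL architecture.

```python
# Illustrative dual-branch graph classifier with domain-adversarial alignment.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None   # reverse gradients for the domain head

class DualBranchClassifier(nn.Module):
    def __init__(self, in_dim, hid=64, n_classes=2):
        super().__init__()
        self.gcn_lin = nn.Linear(in_dim, hid)       # implicit (topology) branch
        self.kernel_lin = nn.Linear(in_dim, hid)    # explicit (kernel-style) branch, simplified
        self.cls = nn.Linear(2 * hid, n_classes)    # label classifier
        self.domain_cls = nn.Linear(2 * hid, 2)     # domain discriminator

    def embed(self, adj, x):
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h_gcn = torch.relu(self.gcn_lin(adj @ x / deg)).mean(0)   # mean-pool nodes
        h_ker = torch.relu(self.kernel_lin(x)).mean(0)            # structure-free statistic
        return torch.cat([h_gcn, h_ker], dim=-1)

    def forward(self, adj, x, lam=1.0):
        g = self.embed(adj, x)
        return self.cls(g), self.domain_cls(GradReverse.apply(g, lam))

# Example: a 5-node graph with 16-dim node features; class loss is computed on
# labeled source graphs, domain loss on both source and target graphs.
adj = (torch.rand(5, 5) > 0.5).float()
class_logits, domain_logits = DualBranchClassifier(in_dim=16)(adj, torch.randn(5, 16), lam=0.5)
```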


Dynamic Spiking Graph Neural Networks

arXiv.org Artificial Intelligence

The integration of Spiking Neural Networks (SNNs) and Graph Neural Networks (GNNs) is gradually attracting attention due to their low power consumption and high efficiency in processing the non-Euclidean data represented by graphs. However, as a common problem, dynamic graph representation learning faces challenges such as high complexity and large memory overheads. Current work often replaces Recurrent Neural Networks (RNNs) with SNNs, using binary features instead of continuous ones for efficient training, which overlooks graph structure information and leads to the loss of details during propagation. Additionally, optimizing dynamic spiking models typically requires propagating information across time steps, which increases memory requirements. To address these challenges, we present a framework named \underline{Dy}namic \underline{S}p\underline{i}king \underline{G}raph \underline{N}eural Networks (\method{}). To mitigate the information loss problem, \method{} propagates early-layer information directly to the last layer for information compensation. To reduce the memory requirements, we apply implicit differentiation on the equilibrium state, which does not rely on the exact reverse of the forward computation. While traditional implicit differentiation methods are usually applied to static settings, \method{} extends them to the dynamic graph setting. Extensive experiments on three large-scale real-world dynamic graph datasets validate the effectiveness of \method{} on dynamic node classification tasks with lower computational costs.
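The sketch below illustrates the general idea of implicit differentiation at an equilibrium state in the style of deep equilibrium models: the forward pass iterates to a fixed point without storing intermediate states, and the backward pass solves a small linear system with vector-Jacobian products instead of unrolling through time. The layer `f`, the solver, and the tolerances are illustrative assumptions, not the paper's dynamic spiking formulation.

```python
# Minimal equilibrium layer with implicit differentiation (DEQ-style sketch).
import torch
import torch.nn as nn

class DEQFixedPoint(nn.Module):
    def __init__(self, f, tol=1e-4, max_iter=50):
        super().__init__()
        self.f, self.tol, self.max_iter = f, tol, max_iter

    def forward(self, x):
        # Forward: iterate to the equilibrium z* = f(z*, x) outside autograd,
        # so intermediate iterates never have to be stored.
        with torch.no_grad():
            z = torch.zeros_like(x)
            for _ in range(self.max_iter):
                z_next = self.f(z, x)
                if (z_next - z).norm() < self.tol:
                    z = z_next
                    break
                z = z_next
        # One differentiable application re-attaches z* to the graph.
        z = self.f(z, x)

        # Backward: solve u = J_f(z*)^T u + grad by fixed-point iteration on
        # vector-Jacobian products, i.e. differentiate implicitly at z*.
        z0 = z.clone().detach().requires_grad_()
        f0 = self.f(z0, x)

        def backward_hook(grad):
            u = grad.clone()
            for _ in range(self.max_iter):
                u_next = torch.autograd.grad(f0, z0, u, retain_graph=True)[0] + grad
                if (u_next - u).norm() < self.tol:
                    return u_next
                u = u_next
            return u

        if z.requires_grad:
            z.register_hook(backward_hook)
        return z

# Example usage with a simple contraction standing in for a spiking graph layer.
lin = nn.Linear(8, 8)
with torch.no_grad():
    lin.weight.mul_(0.5)          # keep f contractive so the iteration converges
deq = DEQFixedPoint(lambda z, x: torch.tanh(lin(z) + x))
out = deq(torch.randn(4, 8))
out.sum().backward()              # gradients flow through the implicit backward solve
```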


Just-in-Time Security Patch Detection -- LLM At the Rescue for Data Augmentation

arXiv.org Artificial Intelligence

In the face of growing vulnerabilities found in open-source software, the need to identify discreet security patches has become paramount. The lack of consistency in how software providers handle maintenance often leads to the release of security patches without comprehensive advisories, leaving users vulnerable to unaddressed security risks. To address this pressing issue, we introduce a novel security patch detection system, LLMDA, which capitalizes on Large Language Models (LLMs) and code-text alignment methodologies for patch review, data enhancement, and feature combination. Within LLMDA, we initially utilize LLMs to examine patches and augment the data of PatchDB and SPI-DB, two security patch datasets from recent literature. We then use labeled instructions to direct LLMDA, differentiating patches based on security relevance. Following this, we apply a PTFormer to merge patches with code, formulating hybrid attributes that encompass both the innate details of, and the interconnections between, the patches and the code. This distinctive combination method allows our system to capture more insights from the joint context of patches and code, hence improving detection precision. Finally, we devise a probabilistic batch contrastive learning mechanism to augment the capability of LLMDA in discerning security patches. The results reveal that LLMDA significantly surpasses state-of-the-art techniques in detecting security patches, underscoring its promise in fortifying software maintenance.
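For intuition about the contrastive component, the sketch below shows a plain symmetric in-batch InfoNCE loss over paired patch and code embeddings: matching pairs are pulled together while every other pairing in the batch serves as a negative. The temperature and the assumption that embeddings arrive already pooled are illustrative, and the probabilistic weighting described in the abstract is not reproduced here.

```python
# Hedged sketch of an in-batch contrastive objective for patch/code pairs.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(patch_emb, code_emb, temperature=0.07):
    patch = F.normalize(patch_emb, dim=-1)
    code = F.normalize(code_emb, dim=-1)
    logits = patch @ code.t() / temperature                  # (B, B) similarity matrix
    targets = torch.arange(patch.size(0), device=patch.device)
    # Symmetric InfoNCE: patch->code and code->patch directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example: random stand-in embeddings for a batch of 8 patch/code pairs.
loss = in_batch_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```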