
Collaborating Authors

 Rong, Can


DiffMove: Group Mobility Tendency Enhanced Trajectory Recovery via Diffusion Model

arXiv.org Artificial Intelligence

In the real world, trajectory data is often sparse and incomplete due to low collection frequencies or limited device coverage. Trajectory recovery aims to restore the missing trajectory points, making trajectories denser and more complete. However, this task faces two key challenges: 1) the excessive sparsity of individual trajectories makes it difficult to effectively leverage historical information for recovery; 2) sparse trajectories make it harder to capture complex individual mobility preferences. To address these challenges, we propose a novel method called DiffMove. First, we harness crowd wisdom for trajectory recovery: we construct a group tendency graph from the collective trajectories of all users and integrate group mobility trends into the location representations via graph embedding. This addresses the first challenge, in which sparse trajectories cannot rely on an individual's own history for recovery. Second, we capture individual mobility preferences from both historical and current perspectives. Finally, we integrate group mobility tendencies and individual preferences into the spatiotemporal distribution of the trajectory to recover high-quality trajectories. Extensive experiments on two real-world datasets demonstrate that DiffMove outperforms existing state-of-the-art methods, and further analysis validates its robustness.
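
As a rough illustration of the group tendency idea, the sketch below (plain PyTorch; the helper names build_group_graph and GroupTendencyEmbedding are ours, not the paper's) pools transition counts from all users into a graph and mixes each location's embedding with the group-level tendencies of its neighbors.

```python
# Hypothetical sketch: build a group tendency graph from all users' trajectories
# and fold group mobility trends into location embeddings. Illustrative only.
import torch
import torch.nn as nn

def build_group_graph(trajectories, num_locations):
    """Count location-to-location transitions pooled over all users."""
    adj = torch.zeros(num_locations, num_locations)
    for traj in trajectories:                      # traj: list of location ids
        for src, dst in zip(traj[:-1], traj[1:]):
            adj[src, dst] += 1.0
    # Row-normalize so each row becomes a transition-tendency distribution.
    return adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)

class GroupTendencyEmbedding(nn.Module):
    """Mix each location's embedding with its graph neighbors' embeddings."""
    def __init__(self, num_locations, dim):
        super().__init__()
        self.loc_emb = nn.Embedding(num_locations, dim)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, adj):
        e = self.loc_emb.weight                    # (L, d) base embeddings
        group = adj @ e                            # aggregate group tendencies
        return self.mix(torch.cat([e, group], dim=-1))

# Toy usage: three users' sparse trajectories over 5 locations.
trajs = [[0, 2, 3], [1, 2, 4], [0, 2, 4]]
adj = build_group_graph(trajs, num_locations=5)
emb = GroupTendencyEmbedding(num_locations=5, dim=16)(adj)
print(emb.shape)   # torch.Size([5, 16])
```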


One Fits All: General Mobility Trajectory Modeling via Masked Conditional Diffusion

arXiv.org Artificial Intelligence

Trajectory data play a crucial role in many applications, ranging from network optimization to urban planning. Existing studies on trajectory data are task-specific: their applicability is limited to the tasks on which they were trained, such as generation, recovery, or prediction. The potential of a unified model, however, has not yet been fully explored in trajectory modeling. Although various trajectory tasks differ in inputs, outputs, objectives, and conditions, they share common mobility patterns, and based on these common patterns we can construct a general framework that enables a single model to address different tasks. Building such a task-general framework faces two critical challenges: 1) the diversity in the formats of different tasks and 2) the complexity of the conditions imposed on different tasks. In this work, we propose a general trajectory modeling framework via masked conditional diffusion (named GenMove). Specifically, we use mask conditions to unify diverse task formats. To adapt to the complex conditions associated with different tasks, we leverage historical trajectory data to obtain contextual trajectory embeddings, which encode rich context such as spatiotemporal characteristics and user preferences. Integrating these contextual embeddings into the diffusion model through classifier-free guidance allows the model to flexibly adjust its outputs to different conditions. Extensive experiments on mainstream tasks demonstrate that our model significantly outperforms state-of-the-art baselines, with the highest performance improvement exceeding 13% on generation tasks.
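
The two mechanisms named in the abstract, mask conditions and classifier-free guidance, can be sketched as follows. This is a minimal illustration under our own assumptions about the data format; task_mask, cfg_noise, and the guidance weight w are placeholders rather than GenMove's implementation.

```python
# Hypothetical sketch: (1) mask conditions that unify generation / recovery /
# prediction into one format, and (2) a classifier-free-guidance step mixing
# conditional and unconditional noise predictions.
import torch

def task_mask(seq_len, task, observed_ratio=0.5):
    """1 = observed (kept as condition), 0 = to be denoised by the model."""
    if task == "generation":                     # nothing observed
        return torch.zeros(seq_len)
    if task == "prediction":                     # past observed, future masked
        cut = int(seq_len * observed_ratio)
        return torch.cat([torch.ones(cut), torch.zeros(seq_len - cut)])
    if task == "recovery":                       # random points missing
        return (torch.rand(seq_len) < observed_ratio).float()
    raise ValueError(task)

def cfg_noise(model, x_t, t, context, w=2.0):
    """Classifier-free guidance: push the conditional prediction away from
    the unconditional one by guidance weight w."""
    eps_cond = model(x_t, t, context)
    eps_uncond = model(x_t, t, None)             # context dropped
    return eps_uncond + w * (eps_cond - eps_uncond)

print(task_mask(8, "prediction"))   # tensor([1., 1., 1., 1., 0., 0., 0., 0.])
```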


A Diffusive Data Augmentation Framework for Reconstruction of Complex Network Evolutionary History

arXiv.org Artificial Intelligence

The evolutionary processes of complex systems contain critical information about their functional characteristics. The generation times of edges provide insight into the historical evolution of networked complex systems such as protein-protein interaction networks, ecosystems, and social networks. Recovering these evolutionary processes holds significant scientific value, for example in interpreting the evolution of protein-protein interaction networks. However, existing methods, while capable of predicting the generation times of the remaining edges given a partial temporal network, often perform poorly in cross-network prediction and frequently fail to recover edge generation times for static networks that lack timestamps. In this work, we adopt a comparative paradigm-based framework that fuses multiple networks for training, enabling cross-network learning of the relationship between network structure and edge generation times. Compared to separate training, this approach yields an average accuracy improvement of 16.98%. Furthermore, given the difficulty of collecting temporal networks, we propose a novel diffusion-model-based generation method to produce a large number of temporal networks. By combining real temporal networks with generated ones, we achieve an additional average accuracy improvement of 5.46% through joint training.
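
A minimal sketch of the comparative paradigm, assuming edges are described by structural feature vectors: rather than regressing absolute generation times, a shared scorer is trained on pairs of edges, possibly drawn from different networks, to decide which edge appeared first. The names EdgeAgeScorer and pairwise_loss are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of comparison-based training for edge generation times.
import torch
import torch.nn as nn

class EdgeAgeScorer(nn.Module):
    """Maps structural edge features to a scalar 'age' score."""
    def __init__(self, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, feats):
        return self.net(feats).squeeze(-1)

def pairwise_loss(scorer, feats_a, feats_b, a_is_older):
    """Binary cross-entropy on 'edge a was generated before edge b'."""
    logits = scorer(feats_a) - scorer(feats_b)
    return nn.functional.binary_cross_entropy_with_logits(
        logits, a_is_older.float())

# Toy usage: 32 edge pairs with 8 structural features each.
scorer = EdgeAgeScorer(feat_dim=8)
fa, fb = torch.randn(32, 8), torch.randn(32, 8)
labels = torch.randint(0, 2, (32,))
loss = pairwise_loss(scorer, fa, fb, labels)
loss.backward()
```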


Complexity-aware Large Scale Origin-Destination Network Generation via Diffusion Model

arXiv.org Artificial Intelligence

Origin-Destination (OD) networks estimate the flow of people from every region of a city to every other region, an important research topic in transportation, urban simulation, and related fields. Given structural urban features at the region level, generating the OD network has become increasingly appealing to researchers from diverse domains. However, existing works are limited to generating each OD pair (i.e., the flow of people from one region to another) independently, overlooking relations within the overall network. In this paper, we instead propose to generate the OD network as a whole, and design a graph denoising diffusion method to learn the conditional joint probability distribution of the nodes and edges of the OD network given region-level city characteristics. To overcome the difficulty of learning OD networks that cover thousands of regions, we decompose the diffusion model's one-shot generative modeling into two cascaded stages, corresponding to the generation of the network topology and of the edge weights, respectively. To further reproduce important network properties of the city-wide OD network, we design an elaborate graph denoising network comprising a node-property augmentation module and a graph transformer backbone. Empirical experiments on data collected in three large US cities verify that our method can generate OD matrices for new cities whose network statistics are remarkably similar to the ground truth, and that it substantially outperforms competitive baselines in terms of generation realism.
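
A schematic of the cascaded two-stage factorization, with placeholder samplers standing in for the trained denoising networks (the StageSampler class and generate_od_network function are ours, not the paper's architecture): stage one samples the topology given regional features, stage two samples edge weights conditioned on that topology.

```python
# Hypothetical sketch of cascaded topology-then-weights OD network generation.
import torch

class StageSampler:
    """Stand-in for a trained denoising diffusion sampler."""
    def __init__(self, out_shape):
        self.out_shape = out_shape
    def sample(self, condition):
        return torch.rand(self.out_shape)        # placeholder for reverse diffusion

def generate_od_network(region_feats, topo_sampler, weight_sampler):
    # Stage 1: sample the binary topology A in {0,1}^{N x N}.
    topo = (topo_sampler.sample(region_feats) > 0.5).float()
    # Stage 2: sample non-negative edge weights conditioned on the topology.
    weights = weight_sampler.sample((region_feats, topo)).relu()
    return topo * weights                        # weights only where edges exist

n_regions, feat_dim = 50, 16
feats = torch.randn(n_regions, feat_dim)
od = generate_od_network(feats,
                         StageSampler((n_regions, n_regions)),
                         StageSampler((n_regions, n_regions)))
print(od.shape)   # torch.Size([50, 50])
```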


Origin-Destination Network Generation via Gravity-Guided GAN

arXiv.org Artificial Intelligence

Origin-destination (OD) flow, which contains valuable population mobility information including direction and volume, is critical to many urban applications such as urban planning and transportation management. However, OD data is not always easy to access due to high costs or privacy concerns, so OD flows must often be generated with mathematical models. Existing works use physical laws or machine learning (ML) models to build the association between urban structures and OD flows, but these two kinds of methods suffer from over-simplification and poor generalization, respectively. In this paper, we adopt a physics-informed ML paradigm, which couples scientific knowledge with data-driven ML methods, to construct a model named Origin-Destination Generation Networks (ODGN) that models population mobility by leveraging the complementary strengths of physics and ML. Specifically, we first build a Multi-view Graph Attention Network (MGAT) to capture the urban features of every region and then use a gravity-guided predictor to obtain the OD flow between every pair of regions. Furthermore, we use a conditional GAN training strategy and design a sequence-based discriminator that considers the overall topological features of the OD network. Extensive experiments on real-world datasets demonstrate the superiority of our proposed method over baselines.
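
The gravity-guided predictor can be sketched roughly as below: region embeddings (obtained in the paper from the MGAT encoder, replaced here by raw features and a small MLP) are mapped to learned masses, and the flow between two regions follows the gravity form mass_i * mass_j / distance^beta. GravityPredictor and its layer sizes are illustrative assumptions; the conditional GAN training loop is omitted.

```python
# Hypothetical sketch of a gravity-guided OD flow predictor.
import torch
import torch.nn as nn

class GravityPredictor(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        # Learned "mass" per region from its (embedded) urban features.
        self.mass_net = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                                      nn.Linear(32, 1), nn.Softplus())
        self.log_beta = nn.Parameter(torch.zeros(1))   # distance-decay exponent

    def forward(self, region_feats, distances):
        mass = self.mass_net(region_feats).squeeze(-1)          # (N,)
        beta = self.log_beta.exp()
        decay = distances.clamp(min=1e-3).pow(-beta)            # (N, N)
        return mass.unsqueeze(1) * mass.unsqueeze(0) * decay    # predicted OD flows

# Toy usage: 10 regions with 8 urban features and pairwise distances in km.
feats = torch.randn(10, 8)
dists = torch.rand(10, 10) * 5 + 0.1
flows = GravityPredictor(feat_dim=8)(feats, dists)
print(flows.shape)   # torch.Size([10, 10])
```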