 mobility data


GPS-MTM: Capturing Pattern of Normalcy in GPS-Trajectories with self-supervised learning

Garg, Umang, Zhang, Bowen, Subrahmanya, Anantajit, Gudavalli, Chandrakanth, Manjunath, BS

arXiv.org Artificial Intelligence

Foundation models have driven remarkable progress in text, vision, and video understanding, and are now poised to unlock similar breakthroughs in trajectory modeling. We introduce the GPS-Masked Trajectory Transformer (GPS-MTM), a foundation model for large-scale mobility data that captures patterns of normalcy in human movement. Unlike prior approaches that flatten trajectories into coordinate streams, GPS-MTM decomposes mobility into two complementary modalities: states (point-of-interest categories) and actions (agent transitions). Leveraging a bi-directional Transformer with a self-supervised masked modeling objective, the model reconstructs missing segments across modalities, enabling it to learn rich semantic correlations without manual labels. Across benchmark datasets, including Numosim-LA, Urban Anomalies, and Geolife, GPS-MTM consistently outperforms baselines on downstream tasks such as trajectory infilling and next-stop prediction. Its advantages are most pronounced in dynamic tasks (inverse and forward dynamics), where contextual reasoning is critical. These results establish GPS-MTM as a robust foundation model for trajectory analytics, positioning mobility data as a first-class modality for large-scale representation learning. Code is released for reference.
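The masked-reconstruction objective the abstract describes can be illustrated with a minimal sketch. This is plain Python over a toy POI-category vocabulary, not the paper's model or tokenizer: mask a contiguous span of a trajectory and keep the hidden tokens as reconstruction targets.

```python
import random

MASK = "<MASK>"  # hypothetical mask token, for illustration only

def mask_span(tokens, span_len, rng):
    """Replace one random contiguous span with MASK tokens.

    Returns (masked_sequence, targets), where targets maps each masked
    position back to its original token -- the supervision signal a
    masked-modeling objective would train the model to reconstruct.
    """
    start = rng.randrange(0, len(tokens) - span_len + 1)
    masked = list(tokens)
    targets = {}
    for i in range(start, start + span_len):
        targets[i] = masked[i]
        masked[i] = MASK
    return masked, targets

# Toy trajectory in the "states" modality: POI categories visited in order.
states = ["home", "cafe", "office", "gym", "restaurant", "home"]
masked, targets = mask_span(states, span_len=2, rng=random.Random(0))
```

In the actual model the same masking would be applied across both modalities (states and actions), so the Transformer must use context from one modality to fill gaps in the other.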


GSTM-HMU: Generative Spatio-Temporal Modeling for Human Mobility Understanding

Luo, Wenying, Lin, Zhiyuan, Xu, Wenhao, Liu, Minghao, Li, Zhi

arXiv.org Artificial Intelligence

Human mobility traces, often recorded as sequences of check-ins, provide a unique window into both short-term visiting patterns and persistent lifestyle regularities. In this work we introduce GSTM-HMU, a generative spatio-temporal framework designed to advance mobility analysis by explicitly modeling the semantic and temporal complexity of human movement. The framework consists of four key innovations. First, a Spatio-Temporal Concept Encoder (STCE) integrates geographic location, POI category semantics, and periodic temporal rhythms into unified vector representations. Second, a Cognitive Trajectory Memory (CTM) adaptively filters historical visits, emphasizing recent and behaviorally salient events in order to capture user intent more effectively. Third, a Lifestyle Concept Bank (LCB) contributes structured human preference cues, such as activity types and lifestyle patterns, to enhance interpretability and personalization. Finally, task-oriented generative heads transform the learned representations into predictions for multiple downstream tasks. We conduct extensive experiments on four widely used real-world datasets, including Gowalla, WeePlace, Brightkite, and FourSquare, and evaluate performance on three benchmark tasks: next-location prediction, trajectory-user identification, and time estimation. The results demonstrate consistent and substantial improvements over strong baselines, confirming the effectiveness of GSTM-HMU in extracting semantic regularities from complex mobility data. Beyond raw performance gains, our findings also suggest that generative modeling provides a promising foundation for building more robust, interpretable, and generalizable systems for human mobility intelligence.


Data-Driven Discovery of Mobility Periodicity for Understanding Urban Systems

Chen, Xinyu, Wang, Qi, Zheng, Yunhan, Cao, Nina, Cai, HanQin, Zhao, Jinhua

arXiv.org Artificial Intelligence

Human mobility regularity is crucial for understanding urban dynamics and informing decision-making processes. This study first quantifies the periodicity in complex human mobility data as a sparse identification of dominant positive auto-correlations in time series autoregression and then discovers periodic patterns. We apply the framework to large-scale metro passenger flow data in Hangzhou, China, and multi-modal mobility data in New York City and Chicago, USA, revealing interpretable weekly periodicity across different spatial locations over the past several years. The analysis of ridesharing data from 2019 to 2024 demonstrates the disruptive impact of the pandemic on mobility regularity and the subsequent recovery trends. In 2024, the periodic mobility patterns of ridesharing, taxi, subway, and bikesharing in Manhattan uncover the regularity and variability of these travel modes. Our findings highlight the potential of interpretable machine learning to discover spatiotemporal mobility patterns and offer a valuable tool for understanding urban systems.
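The core idea, dominant positive auto-correlations revealing periodicity, can be sketched without the paper's sparse autoregression machinery. The following toy example (synthetic daily flows, not the Hangzhou or NYC data) recovers a weekly rhythm as the lag with the largest positive sample autocorrelation.

```python
def autocorr(series, lag):
    """Sample autocorrelation at a given lag (mean-centered)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t - lag] - mean)
              for t in range(lag, n))
    return cov / var

def dominant_period(series, max_lag):
    """Return the lag (1..max_lag) with the largest autocorrelation."""
    return max(range(1, max_lag + 1), key=lambda k: autocorr(series, k))

# Synthetic daily passenger counts: busy weekdays, quiet weekends.
weekly = [100, 110, 105, 108, 120, 60, 50]
flows = weekly * 8  # eight weeks of daily observations
```

Running `dominant_period(flows, 10)` picks out lag 7, the weekly cycle; the paper's sparse formulation additionally enforces that only a few dominant lags survive, which is what makes the discovered periodicity interpretable.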


Entropy-Driven Curriculum for Multi-Task Training in Human Mobility Prediction

Fang, Tianye, Luo, Xuanshu, Werner, Martin

arXiv.org Artificial Intelligence

The increasing availability of big mobility data from ubiquitous portable devices enables human mobility prediction through deep learning approaches. However, the diverse complexity of human mobility data impedes model training, leading to inefficient gradient updates and potential underfitting. This paper presents a unified training framework that integrates an entropy-driven curriculum and multi-task learning to address these challenges. The proposed entropy-driven curriculum learning strategy quantifies trajectory predictability based on Lempel-Ziv compression and organizes training from simple to complex for faster convergence and enhanced performance. The multi-task training simultaneously optimizes the primary location prediction alongside auxiliary estimation of movement distance and direction to learn realistic mobility patterns, and improves prediction accuracy through complementary supervision signals. Extensive experiments conducted in accordance with the HuMob Challenge demonstrate that our approach achieves state-of-the-art performance on GEO-BLEU (0.354) and DTW (26.15) metrics with up to 2.92-fold faster convergence compared to training without curriculum learning.

The inherent regularity of human mobility data, which exhibits predictability of individual mobility patterns across diverse populations and travel distances [1], provides the foundation for numerous location-based applications, including urban planning and management, transportation optimization, epidemic modeling, and recommendation systems [2]-[7]. With the proliferation of pervasive user devices with passive location acquisition capabilities, unprecedented volumes of human mobility data have been collected, enabling data-driven approaches, particularly sequential deep learning models, to effectively extract human mobility patterns [8]-[11].
In comparison to handcrafted pattern matching [12]-[14] and Markov models [15]-[17], deep learning methods generally achieve superior long-term prediction performance.
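The Lempel-Ziv-based predictability measure that drives the curriculum can be sketched as follows. This is a generic LZ78-style phrase count and a simple sort, an assumption-laden stand-in for the paper's exact entropy estimator and curriculum schedule:

```python
def lz_phrase_count(sequence):
    """Count phrases in an LZ78-style incremental parsing: scan left to
    right, cutting a new phrase as soon as the current substring has not
    been seen before. More regular sequences parse into fewer phrases."""
    phrases = set()
    count = 0
    current = ""
    for symbol in sequence:
        current += str(symbol)
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""
    return count + (1 if current else 0)

def curriculum_order(trajectories):
    """Order trajectories from most regular (predictable) to most
    complex, normalizing the phrase count by sequence length."""
    return sorted(trajectories, key=lambda t: lz_phrase_count(t) / len(t))
```

A curriculum then feeds the low-complexity (highly predictable) trajectories to the model first and the high-entropy ones later, which is what the abstract credits for the faster convergence.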


Urban delineation through the lens of commute networks: Leveraging graph embeddings to distinguish socioeconomic groups in cities

Khulbe, Devashish, Sobolevsky, Stanislav

arXiv.org Machine Learning

Delineating areas within metropolitan regions stands as an important focus among urban researchers, shedding light on the urban perimeters shaped by evolving population dynamics. Applications to urban science are numerous, from facilitating comparisons between delineated districts and administrative divisions to informing policymakers of the shifting economic and labor landscapes. In this study, we propose using commute networks sourced from the census for the purpose of urban delineation, by modeling them with a Graph Neural Network (GNN) architecture. We derive low-dimensional representations of granular urban areas (nodes) using GNNs. Subsequently, nodes' embeddings are clustered to identify spatially cohesive communities in urban areas. Our experiments across the U.S. demonstrate the effectiveness of network embeddings in capturing significant socioeconomic disparities between communities in various cities, particularly in factors such as median household income. The role of census mobility data in regional delineation is also noted, and we establish the utility of GNNs in urban community detection, as a powerful alternative to existing methods in this domain. The results offer insights into the wider effects of commute networks and their use in building meaningful representations of urban regions.


Enhancing Epidemic Forecasting: Evaluating the Role of Mobility Data and Graph Convolutional Networks

Guo, Suhan, Xu, Zhenghao, Shen, Furao, Zhao, Jian

arXiv.org Artificial Intelligence

Accurate prediction of contagious disease outbreaks is vital for informed decision-making. Our study addresses the gap between machine learning algorithms and their epidemiological applications, noting that methods optimal for benchmark datasets often underperform with real-world data due to difficulties in incorporating mobility information. We adopt a two-phase approach: first, assessing the significance of mobility data through a pilot study, then evaluating the impact of Graph Convolutional Networks (GCNs) on a transformer backbone. Our findings reveal that while mobility data and GCN modules do not significantly enhance forecasting performance, the inclusion of mortality and hospitalization data markedly improves model accuracy. Additionally, a comparative analysis between GCN-derived spatial maps and lockdown orders suggests a notable correlation, highlighting the potential of spatial maps as sensitive indicators for mobility. Our research offers a novel perspective on mobility representation in predictive modeling for contagious diseases, empowering decision-makers to better prepare for future outbreaks.


Enhancing Large Language Models for Mobility Analytics with Semantic Location Tokenization

Chen, Yile, Tao, Yicheng, Jiang, Yue, Liu, Shuai, Yu, Han, Cong, Gao

arXiv.org Artificial Intelligence

The widespread adoption of location-based services has led to the generation of vast amounts of mobility data, providing significant opportunities to model user movement dynamics within urban environments. Recent advancements have focused on adapting Large Language Models (LLMs) for mobility analytics. However, existing methods face two primary limitations: inadequate semantic representation of locations (i.e., discrete IDs) and insufficient modeling of mobility signals within LLMs (i.e., single templated instruction fine-tuning). To address these issues, we propose QT-Mob, a novel framework that significantly enhances LLMs for mobility analytics. QT-Mob introduces a location tokenization module that learns compact, semantically rich tokens to represent locations, preserving contextual information while ensuring compatibility with LLMs. Furthermore, QT-Mob incorporates a series of complementary fine-tuning objectives that align the learned tokens with the internal representations in LLMs, improving the model's comprehension of sequential movement patterns and location semantics. The proposed QT-Mob framework not only enhances LLMs' ability to interpret mobility data but also provides a more generalizable approach for various mobility analytics tasks. Experiments on three real-world datasets demonstrate superior performance in both next-location prediction and mobility recovery tasks, outperforming existing deep learning and LLM-based methods.


Identifying and Characterising Higher Order Interactions in Mobility Networks Using Hypergraphs

Sambaturu, Prathyush, Gutierrez, Bernardo, Kraemer, Moritz U. G.

arXiv.org Artificial Intelligence

Human mobility data is crucial for understanding patterns of movement across geographical regions, with applications spanning urban planning [1], transportation systems design [2], infectious disease modeling and control [3, 4], and social dynamics studies [5]. Traditionally, mobility data has been represented using flow networks [6, 7] or colocation matrices [8], where the primary representation is via pairwise interactions. In flow networks, this means directed edges represent the movement of individuals between two locations; colocation matrices measure the probability that a random individual from a region is colocated with a random individual from another region at the same location. These data types and their pairwise representation structure have been used to identify the spatial scales and regularity of human mobility, but have inherent limitations in their capacity to capture more complex patterns of human movement involving higher-order interactions between locations, that is, groups of locations that are frequently visited by many individuals within a period of time (e.g., a week) and revisited regularly over time. Higher-order interactions between locations can contain crucial information under certain scenarios.
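The step from pairwise edges to hyperedges can be made concrete with a small sketch. Here a hyperedge is the set of locations one user visits in one week, kept when it recurs often enough; the data layout and `min_count` threshold are illustrative assumptions, not the paper's construction:

```python
from collections import Counter

def weekly_hyperedges(visits, min_count=2):
    """Build candidate hyperedges from visit logs.

    `visits` maps (user, week) -> iterable of locations visited.
    Each distinct set of 2+ locations a user visits within a week is a
    candidate hyperedge; we keep those occurring at least `min_count`
    times across users and weeks, i.e. recurring higher-order
    interactions between locations.
    """
    edge_counts = Counter()
    for (user, week), locations in visits.items():
        edge = frozenset(locations)
        if len(edge) >= 2:
            edge_counts[edge] += 1
    return {edge: n for edge, n in edge_counts.items() if n >= min_count}

visits = {
    ("u1", "w1"): ["home", "gym", "cafe"],
    ("u2", "w1"): ["cafe", "gym", "home"],
    ("u1", "w2"): ["home", "gym", "cafe"],
    ("u3", "w1"): ["home", "office"],
}
```

A pairwise flow network would only record the six directed trips inside {home, gym, cafe}; the hyperedge keeps the fact that the three locations are visited together, which is exactly the information pairwise representations lose.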


A Foundational individual Mobility Prediction Model based on Open-Source Large Language Models

Qin, Zhenlin, Wang, Leizhen, Pereira, Francisco Camara, Ma, Zhenlinag

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are widely applied to domain-specific tasks due to their massive general knowledge and remarkable inference capacities. Current studies have shown the immense potential of applying LLMs to individual mobility prediction problems. However, most LLM-based mobility prediction models only train on specific datasets or use single well-designed prompts, leading to difficulty in adapting to different cities and users with diverse contexts. To fill these gaps, this paper proposes a unified fine-tuning framework to train a foundational open-source LLM-based mobility prediction model. We conducted extensive experiments on six real-world mobility datasets to validate the proposed model. The results showed that the proposed model achieved the best performance in prediction accuracy and transferability over state-of-the-art models based on deep learning and LLMs.


Urban Region Representation Learning: A Flexible Approach

Sun, Fengze, Chang, Yanchuan, Tanin, Egemen, Karunasekera, Shanika, Qi, Jianzhong

arXiv.org Artificial Intelligence

The increasing availability of urban data offers new opportunities for learning region representations, which can be used as input to machine learning models for downstream tasks such as check-in or crime prediction. While existing solutions have produced promising results, an issue is their fixed formation of regions and fixed input region features, which may not suit the needs of different downstream tasks. To address this limitation, we propose a model named FlexiReg for urban region representation learning that is flexible with both the formation of urban regions and the input region features. FlexiReg is based on a spatial grid partitioning over the spatial area of interest. It learns representations for the grid cells, leveraging publicly accessible data, including POI, land use, satellite imagery, and street view imagery. We propose adaptive aggregation to fuse the cell representations and prompt learning techniques to tailor the representations towards different tasks, addressing the needs of varying formations of urban regions and downstream tasks. Extensive experiments on five real-world datasets demonstrate that FlexiReg outperforms state-of-the-art models by up to 202% in terms of accuracy on four diverse downstream tasks using the produced urban region representations.
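The grid-partitioning idea underlying FlexiReg can be sketched in a few lines. The 0.01-degree cell size is an assumption for illustration (the paper's actual resolution is not reproduced here); the point is that an arbitrarily shaped region is just the set of cells its points cover, so region representations can be assembled flexibly from cell representations.

```python
import math

def cell_id(lat, lon, cell_deg=0.01):
    """Map a coordinate to a grid-cell index.

    cell_deg=0.01 (~1 km at mid-latitudes) is a hypothetical
    resolution chosen for this sketch.
    """
    return (math.floor(lat / cell_deg), math.floor(lon / cell_deg))

def region_cells(points, cell_deg=0.01):
    """A flexible region is the set of cells its (lat, lon) points
    cover; a region representation can then be aggregated from the
    learned representations of exactly these cells."""
    return {cell_id(lat, lon, cell_deg) for lat, lon in points}
```

Because the cells are fixed but the region-to-cell assignment is not, the same learned cell representations serve any region formation a downstream task requires, which is the flexibility the abstract emphasizes.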