Temporal Knowledge


Reinforcement Learning Driven Generalizable Feature Representation for Cross-User Activity Recognition

Ye, Xiaozhou, Wang, Kevin I-Kai

arXiv.org Artificial Intelligence

Human Activity Recognition (HAR) using wearable sensors is crucial for healthcare, fitness tracking, and smart environments, yet cross-user variability -- stemming from diverse motion patterns, sensor placements, and physiological traits -- hampers generalization in real-world settings. Conventional supervised learning methods often overfit to user-specific patterns, leading to poor performance on unseen users. Existing domain generalization approaches, while promising, frequently overlook temporal dependencies or depend on impractical domain-specific labels. We propose Temporal-Preserving Reinforcement Learning Domain Generalization (TPRL-DG), a novel framework that redefines feature extraction as a sequential decision-making process driven by reinforcement learning. TPRL-DG leverages a Transformer-based autoregressive generator to produce temporal tokens that capture user-invariant activity dynamics, optimized via a multi-objective reward function balancing class discrimination and cross-user invariance. Key innovations include: (1) an RL-driven approach for domain generalization, (2) autoregressive tokenization to preserve temporal coherence, and (3) a label-free reward design eliminating the need for target user annotations. Evaluations on the DSADS and PAMAP2 datasets show that TPRL-DG surpasses state-of-the-art methods in cross-user generalization, achieving superior accuracy without per-user calibration. By learning robust, user-invariant temporal patterns, TPRL-DG enables scalable HAR systems, facilitating advancements in personalized healthcare, adaptive fitness tracking, and context-aware environments.
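The multi-objective reward described above can be illustrated with a toy sketch. Everything here — the function names, the distance-based formulation, and the weighting parameter `lam` — is an assumption for illustration, not the paper's implementation:

```python
# Toy sketch of a reward balancing class discrimination against cross-user
# invariance: features of the same activity should cluster together, while
# features of the same user (across activities) should NOT be distinguishable.

def pairwise_dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reward(features, activity_labels, user_labels, lam=0.5):
    def mean_dists(group_key):
        same, diff = [], []
        for i in range(len(features)):
            for j in range(i + 1, len(features)):
                d = pairwise_dist(features[i], features[j])
                (same if group_key[i] == group_key[j] else diff).append(d)
        return (sum(same) / len(same) if same else 0.0,
                sum(diff) / len(diff) if diff else 0.0)

    same_act, diff_act = mean_dists(activity_labels)
    same_usr, diff_usr = mean_dists(user_labels)
    discrimination = diff_act - same_act   # higher: activity classes separate
    user_leakage = diff_usr - same_usr     # higher: features encode user identity
    return discrimination - lam * user_leakage
```

A feature set that clusters by activity but mixes users earns a high reward, which is the label-free cross-user invariance pressure the abstract describes.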


Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information

Park, Yein, Yoon, Chanwoong, Park, Jungwoo, Jeong, Minbyul, Kang, Jaewoo

arXiv.org Artificial Intelligence

While the ability of language models to elicit facts has been widely investigated, how they handle temporally changing facts remains underexplored. Through circuit analysis, we discover Temporal Heads: specific attention heads primarily responsible for processing temporal knowledge. We confirm that these heads are present across multiple models, though their specific locations may vary, and their responses differ depending on the type of knowledge and its corresponding years. Disabling these heads degrades the model's ability to recall time-specific knowledge while preserving its general capabilities, without compromising time-invariant knowledge recall or question-answering performance. Moreover, the heads are activated not only by numeric conditions ("In 2004") but also by textual aliases ("In the year ..."), indicating that they encode a temporal dimension beyond simple numerical representation. Furthermore, we expand the potential of our findings by demonstrating how temporal knowledge can be edited by adjusting the values of these heads.
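The head-disabling experiments can be pictured in a residual-stream view, where a layer's output is the sum of per-head contributions and ablating a head zeroes out only its contribution. This minimal sketch is illustrative only; the function and data are assumptions, not the authors' code:

```python
# Residual-stream picture of head ablation: the layer output is the sum of
# per-head contribution vectors; disabling a head drops its term from the sum.

def combine_heads(head_outputs, ablate=()):
    """Sum per-head contribution vectors, zeroing any ablated heads."""
    dim = len(head_outputs[0])
    out = [0.0] * dim
    for h, vec in enumerate(head_outputs):
        if h in ablate:
            continue  # disable this head, as in the Temporal Heads experiments
        out = [o + v for o, v in zip(out, vec)]
    return out

heads = [[1.0, 0.0], [0.0, 2.0], [3.0, 3.0]]
full = combine_heads(heads)                  # all heads contribute
ablated = combine_heads(heads, ablate={2})   # head 2 removed from the stream
```

If head 2 were a "temporal head," comparing `full` and `ablated` downstream would isolate its effect on time-specific recall.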


Counterfactual-Consistency Prompting for Relative Temporal Understanding in Large Language Models

Kim, Jongho, Hwang, Seung-won

arXiv.org Artificial Intelligence

Despite the advanced capabilities of large language models (LLMs), their temporal reasoning ability remains underdeveloped. Prior works have highlighted this limitation, particularly in maintaining temporal consistency when understanding events. For example, models often confuse mutually exclusive temporal relations such as "before" and "after" between events and make inconsistent predictions. In this work, we tackle temporal inconsistency in LLMs by proposing a novel counterfactual prompting approach. Our method generates counterfactual questions and enforces collective constraints, enhancing the model's consistency. We evaluate our method on multiple datasets, demonstrating significant improvements in event ordering for both explicit and implicit events, and in temporal commonsense understanding, by effectively addressing temporal inconsistencies.
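The core consistency constraint — a model asserting "A before B" must deny "A after B" — can be sketched as follows. The helper names and the toy yes/no interface are assumptions for illustration, not the paper's prompting pipeline:

```python
# Sketch of a counterfactual-consistency check: flip the temporal relation in
# a yes/no question and require the answers to the two questions to disagree,
# since "before" and "after" are mutually exclusive.

FLIP = {"before": "after", "after": "before"}

def counterfactual_question(question):
    """Swap the temporal relation word to build the counterfactual question."""
    for rel, opposite in FLIP.items():
        if f" {rel} " in question:
            return question.replace(f" {rel} ", f" {opposite} ")
    return question

def is_consistent(ask, question):
    """`ask` is any callable mapping a yes/no question to 'yes' or 'no'."""
    original = ask(question)
    flipped = ask(counterfactual_question(question))
    return original != flipped  # mutually exclusive relations must disagree

# A toy "model" that happens to be consistent:
facts = {"Did the meeting happen before the launch?": "yes",
         "Did the meeting happen after the launch?": "no"}
consistent = is_consistent(facts.get, "Did the meeting happen before the launch?")
```

An inconsistent model — one answering "yes" to both orderings — fails this check, which is the signal the prompting method uses to enforce its collective constraints.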


Temporal Knowledge Sharing enable Spiking Neural Network Learning from Past and Future

Dong, Yiting, Zhao, Dongcheng, Zeng, Yi

arXiv.org Artificial Intelligence

Spiking Neural Networks (SNNs) have attracted significant attention from researchers across various domains due to their brain-like information processing mechanism. However, SNNs typically grapple with challenges such as extended time steps, low utilization of temporal information, and the requirement for a consistent time step between training and testing. These challenges leave SNNs with high latency. Moreover, the constraint on time steps necessitates retraining the model for new deployments, reducing adaptability. To address these issues, this paper proposes a novel perspective, viewing the SNN as a temporal aggregation model. We introduce the Temporal Knowledge Sharing (TKS) method, which facilitates information interaction between different time points and can be viewed as a form of temporal self-distillation. To validate the efficacy of TKS in information processing, we tested it on static datasets such as CIFAR10, CIFAR100, and ImageNet-1k, and on neuromorphic datasets such as DVS-CIFAR10 and N-Caltech101. Experimental results demonstrate that our method achieves state-of-the-art performance compared to other algorithms. Furthermore, TKS addresses the temporal consistency challenge, endowing the model with superior temporal generalization: the network can train with longer time steps and maintain high performance when tested with shorter ones. This considerably accelerates the deployment of SNNs on edge devices. Finally, we conducted ablation experiments and tested TKS on fine-grained tasks, with results showcasing its enhanced capability to process information efficiently.
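The temporal self-distillation view of TKS can be sketched in miniature: outputs from all time steps are aggregated into a teacher signal, and each step's output is pulled toward it. The averaging teacher and squared-error loss here are simplifying assumptions, not the paper's exact formulation:

```python
# Toy sketch of temporal self-distillation: per-time-step logits are averaged
# into a "teacher" distribution, and each step is penalized for deviating from
# it, so information is shared across time points.

def temporal_teacher(step_logits):
    """Average the logits over all time steps into one teacher signal."""
    steps, dim = len(step_logits), len(step_logits[0])
    return [sum(step[c] for step in step_logits) / steps for c in range(dim)]

def tks_loss(step_logits):
    """Mean squared deviation of each step's logits from the teacher."""
    teacher = temporal_teacher(step_logits)
    total = 0.0
    for step in step_logits:
        total += sum((s - t) ** 2 for s, t in zip(step, teacher)) / len(teacher)
    return total / len(step_logits)
```

Once every time step agrees with the shared teacher, early steps already carry the aggregate prediction, which is one intuition for why a network trained with long time steps can be run with fewer at test time.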


ECOLA: Enhanced Temporal Knowledge Embeddings with Contextualized Language Representations

Han, Zhen, Liao, Ruotong, Gu, Jindong, Zhang, Yao, Ding, Zifeng, Gu, Yujia, Köppl, Heinz, Schütze, Hinrich, Tresp, Volker

arXiv.org Artificial Intelligence

Since conventional knowledge embedding models cannot take full advantage of the abundant textual information available, there have been extensive research efforts in enhancing knowledge embedding using texts. However, existing enhancement approaches cannot be applied to temporal knowledge graphs (tKGs), which contain time-dependent event knowledge with complex temporal dynamics. Specifically, existing approaches often assume knowledge embedding is time-independent, whereas entity embeddings in tKG models usually evolve over time, which poses the challenge of aligning temporally relevant texts with entities. To this end, we propose to study enhancing temporal knowledge embedding with textual data. As an approach to this task, we propose Enhanced Temporal Knowledge Embeddings with Contextualized Language Representations (ECOLA), which takes the temporal aspect into account and injects textual information into temporal knowledge embeddings. To evaluate ECOLA, we introduce three new datasets for training and evaluation. Extensive experiments show that ECOLA significantly enhances temporal KG embedding models, with up to 287% relative improvement in Hits@1 on the link prediction task. The code and models are publicly available at https://anonymous.4open.science/r/ECOLA.


Interval Logic Tensor Networks

Badreddine, Samy, Apriceno, Gianluca, Passerini, Andrea, Serafini, Luciano

arXiv.org Artificial Intelligence

Event detection (ED) from sequences of data is a critical challenge in various fields, including surveillance [Clavel et al., 2005], multimedia processing [Xiang and Wang, 2019, Lai, 2022], and social network analysis [Cordeiro and Gama, 2016]. Neural network-based architectures have been developed for ED, leveraging various data types such as text, images, social media data, and audio. Integrating commonsense and structural knowledge about events and their relationships can significantly enhance machine learning methods for ED. For example, in analyzing a soccer match video, the knowledge that a red card shown to a player is typically followed by the player leaving the field can aid in event detection. Additionally, knowledge about how simple events compose complex events is also useful for complex event detection. Background knowledge has been shown to improve the detection of complex events especially when training data is limited [Yin et al., 2020].


Embedding Symbolic Temporal Knowledge into Deep Sequential Models

Xie, Yaqi, Zhou, Fan, Soh, Harold

arXiv.org Artificial Intelligence

Sequences and time-series often arise in robot tasks, e.g., in activity recognition and imitation learning. In recent years, deep neural networks (DNNs) have emerged as an effective data-driven methodology for processing sequences, given sufficient training data and compute resources. When data is limited, however, simpler models such as logic/rule-based methods work surprisingly well, especially when relevant prior knowledge is applied in their construction. Yet, unlike DNNs, these "structured" models can be difficult to extend and do not work well with raw unstructured data. In this work, we seek to learn flexible DNNs while leveraging prior temporal knowledge when available. Our approach is to embed symbolic knowledge expressed as linear temporal logic (LTL) and use these embeddings to guide the training of deep models. Specifically, we construct semantic-based embeddings of automata generated from LTL formulae via a Graph Neural Network. Experiments show that these learnt embeddings can lead to improvements in downstream robot tasks such as sequential action recognition and imitation learning.
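On the symbolic side, the LTL formulae being embedded have simple semantics over discrete traces, which a minimal checker makes concrete. This sketch covers only three standard operators over traces represented as lists of proposition sets; it is illustrative background, not part of the paper's GNN pipeline:

```python
# Minimal trace semantics for three LTL operators: a trace is a list of sets,
# each set holding the atomic propositions true at that time step.

def eventually(p, trace):
    """F p: p holds at some time step."""
    return any(p in step for step in trace)

def always(p, trace):
    """G p: p holds at every time step."""
    return all(p in step for step in trace)

def until(p, q, trace):
    """p U q: q eventually holds, and p holds at every step before it."""
    for i, step in enumerate(trace):
        if q in step:
            return all(p in s for s in trace[:i])
    return False
```

Formulae like these are what get compiled to automata and embedded to guide sequence-model training.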


Exploring the Generalizability of Spatio-Temporal Crowd Flow Prediction: Meta-Modeling and an Analytic Framework

Wang, Leye, Chai, Di, Liu, Xuanzhe, Chen, Liyue, Chen, Kai

arXiv.org Artificial Intelligence

The Spatio-Temporal Crowd Flow Prediction (STCFP) problem is a classical problem with plenty of prior research efforts that benefit from traditional statistical learning and recent deep learning approaches. While STCFP can refer to many real-world problems, most existing studies focus on quite specific applications, such as the prediction of taxi demand, ridesharing orders, and so on. This hinders STCFP research, as approaches designed for different applications are hardly comparable, and it is thus unclear how an application-driven approach can be generalized to other scenarios. To fill this gap, this paper makes two efforts: (i) we propose an analytic framework, called STAnalytic, to qualitatively investigate STCFP approaches with regard to their design considerations on various spatial and temporal factors, aiming to make different application-driven approaches comparable; (ii) we construct a large-scale STCFP benchmark with four different scenarios (ridesharing, bikesharing, metro, and electric vehicle charging) comprising up to hundreds of millions of flow records, to quantitatively measure the generalizability of STCFP approaches. Furthermore, to demonstrate the effectiveness of STAnalytic in helping design generalizable STCFP approaches, we propose a spatio-temporal meta-model, called STMeta, which integrates generalizable temporal and spatial knowledge identified by STAnalytic. We implement three variants of STMeta with different deep learning techniques. With these datasets, we demonstrate that STMeta variants can outperform state-of-the-art STCFP approaches by 5%.


Towards The Inductive Acquisition of Temporal Knowledge

Chen, Kaihu

arXiv.org Artificial Intelligence

The ability to predict the future in a given domain can be acquired by empirically discovering, from experience, temporal patterns that tend to repeat reliably. Previous work in time series analysis allows one to make quantitative predictions about the likely values of certain linear variables. Since some types of knowledge are better expressed in symbolic form, making qualitative predictions based on symbolic representations requires a different approach. We describe TIM (Time-based Inductive Machine), a domain-independent methodology for discovering potentially uncertain temporal patterns from real-time observations using inductive inference.