STaRFormer: Semi-Supervised Task-Informed Representation Learning via Dynamic Attention-Based Regional Masking for Sequential Data
Maximilian Forstenhäusler, Daniel Külzer, Christos Anagnostopoulos, Shameem Puthiya Parambath, Natascha Weber
–arXiv.org Artificial Intelligence
Understanding user intent is essential for situational and context-aware decision-making. Motivated by a real-world scenario, this work addresses intent prediction for smart-device users in the vicinity of vehicles by modeling sequential spatiotemporal data. However, in real-world settings, environmental factors and sensor limitations can result in non-stationary and irregularly sampled data, posing significant challenges. To address these issues, we propose STaRFormer, a Transformer-based approach that can serve as a universal framework for sequential modeling. STaRFormer utilizes a new dynamic attention-based regional masking scheme combined with a novel semi-supervised contrastive learning paradigm to enhance task-specific latent representations. Comprehensive experiments on 56 datasets varying in type (including non-stationary and irregularly sampled), task, domain, sequence length, number of training samples, and application demonstrate the efficacy of STaRFormer, achieving notable improvements over state-of-the-art approaches.
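The abstract does not spell out how "dynamic attention-based regional masking" is computed, but the general idea of selecting contiguous regions of a sequence to mask according to attention salience can be sketched as follows. This is a hypothetical illustration, not the paper's actual method: the function name, the fixed window size, and the choice of summed attention as the salience score are all assumptions.

```python
import numpy as np

def attention_based_regional_mask(attn, window=8, rng=None):
    """Sample a contiguous region to mask, weighted by attention salience.

    attn   : (T,) non-negative attention scores per time step
             (e.g. averaged over heads and layers).
    window : length of the contiguous region to mask (assumed fixed here).
    Returns a boolean mask of shape (T,), True inside the masked region.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = attn.shape[0]
    n_starts = T - window + 1
    # Salience of each candidate region = total attention mass inside it.
    scores = np.array([attn[s:s + window].sum() for s in range(n_starts)])
    probs = scores / scores.sum()
    start = rng.choice(n_starts, p=probs)
    mask = np.zeros(T, dtype=bool)
    mask[start:start + window] = True
    return mask
```

Because the region is sampled rather than taken greedily, masking stays stochastic across training steps while still concentrating on time steps the model already attends to; an actual implementation would likely also vary the window size and operate on batched tensors.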
Dec-2-2025