DyWPE: Signal-Aware Dynamic Wavelet Positional Encoding for Time Series Transformers
Habib Irani and Vangelis Metsis
arXiv.org Artificial Intelligence
A fundamental component enabling transformers to process sequential data is positional encoding, which addresses the inherent permutation invariance of self-attention by injecting positional information into input representations. In time series analysis, the importance of positional encoding is amplified by the intrinsic temporal dependencies and complex multi-scale patterns characteristic of temporal data [2, 3]. However, existing positional encoding methods, from sinusoidal encodings [1] to sophisticated relative positioning schemes [4, 5], share a fundamental limitation: they are signal-agnostic. These methods derive positional information exclusively from abstract sequence indices (0, 1, ..., L-1) while remaining completely oblivious to the underlying signal characteristics. For instance, consider two time series segments occurring at identical absolute positions but exhibiting vastly different temporal dynamics: one representing a quiet, stable period with minimal variation, and another capturing volatile, high-frequency oscillations. A signal-agnostic encoding assigns both segments identical positional representations, discarding this distinction.
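To make the limitation concrete, here is a minimal sketch of the standard sinusoidal positional encoding [1], which is computed from the index alone; the function name and dimensions are illustrative, not from the paper. Feeding it two very different signals of the same length yields the same encoding for both:

```python
import numpy as np

def sinusoidal_pe(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal positional encoding (Vaswani et al.).

    Note: the output depends only on the indices 0..seq_len-1,
    never on the signal values themselves (signal-agnostic).
    """
    positions = np.arange(seq_len)[:, None]          # shape (L, 1)
    dims = np.arange(d_model)[None, :]               # shape (1, d)
    # Geometric progression of wavelengths over dimension pairs.
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                 # shape (L, d)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])            # even dims: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])            # odd dims: cosine
    return pe

# Two segments at identical positions but with very different dynamics:
quiet = np.zeros(128)                                # stable, flat period
volatile = np.sin(50 * np.linspace(0, 2 * np.pi, 128))  # fast oscillations

# Both receive exactly the same positional encoding, because the
# encoding is a function of position alone.
pe = sinusoidal_pe(seq_len=128, d_model=16)
```

Since `pe` never touches `quiet` or `volatile`, the model receives identical positional information for both segments, which is the signal-agnostic behavior the paper's dynamic wavelet approach is designed to overcome.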
Sep-19-2025