Deep Autocorrelation Modeling for Time-Series Forecasting: Progress and Prospects

Wang, Hao, Pan, Licheng, Wen, Qingsong, Yu, Jialin, Chen, Zhichao, Zheng, Chunyuan, Li, Xiaoxi, Chu, Zhixuan, Xu, Chao, Gong, Mingming, Li, Haoxuan, Lu, Yuan, Lin, Zhouchen, Torr, Philip, Liu, Yan

arXiv.org Machine Learning

Autocorrelation is a defining characteristic of time-series data, where each observation is statistically dependent on its predecessors. In the context of deep time-series forecasting, autocorrelation arises in both the input history and the label sequences, presenting two central research challenges: (1) designing neural architectures that model autocorrelation in history sequences, and (2) devising learning objectives that model autocorrelation in label sequences. Recent studies have made strides in tackling these challenges, but a systematic survey examining both aspects remains lacking. To bridge this gap, this paper provides a comprehensive review of deep time-series forecasting from the perspective of autocorrelation modeling. In contrast to existing surveys, this work makes two distinctive contributions. First, it proposes a novel taxonomy that encompasses recent literature on both model architectures and learning objectives -- whereas prior surveys neglect or inadequately discuss the latter aspect. Second, it offers a thorough analysis of the motivations, insights, and progression of the surveyed literature from a unified, autocorrelation-centric perspective, providing a holistic overview of the evolution of deep time-series forecasting. The full list of papers and resources is available at https://github.com/Master-PLC/Awesome-TSF-Papers.
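To make the central notion concrete: the abstract's "each observation is statistically dependent on its predecessors" is what the sample autocorrelation function measures. A minimal sketch (function name, lag choices, and the AR(1) example are ours, not the survey's):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation of a 1-D series at lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)  # lag-0 sum of squares, so acf[0] == 1 by construction
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

# An AR(1) process x_t = 0.8 * x_{t-1} + noise is strongly autocorrelated:
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

acf = sample_acf(x, 5)  # acf[1] is close to the AR coefficient 0.8
```

For an i.i.d. (non-autocorrelated) series, the same computation would give values near zero at every nonzero lag.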


DataStealing

Neural Information Processing Systems

Federated Learning (FL) is commonly used to collaboratively train models with privacy preservation. Specifically, AdaSCP evaluates the importance of parameters with the gradients in dominant timesteps of the diffusion model.



A Appendix

Neural Information Processing Systems

A.1 TPPE Method

We present the pseudocode for TPPE in this paper, using the Insertion mode as an example. According to Alg. 1, we reduce the query time complexity. In our study, we assume the worst-case scenario of applying punctuation-level attacks. A softmax layer is adopted to predict the label of the input text. TPPE is combined with Paraphrase (TPPEP) to achieve a single-shot attack. We describe the TPPEP method as being decomposed into two parts: training and searching.




An adaptive nearest neighbor rule for classification

Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran

Neural Information Processing Systems

Find the smallest k at which the threshold Δ(n, k, δ) is exceeded, where

Δ(n, k, δ) = c₁ √( (log n + log(1/δ)) / k ).

Then, with probability at least 1 − δ, the resulting classifier gₙ satisfies the following: for every point x ∈ supp(µ), if

n ≥ (C / adv(x)) · max( log(1/adv(x)), log(1/δ) ),

then gₙ(x) = g(x).
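The flavor of such an adaptive rule can be sketched in a few lines: grow the neighborhood of a query until the average of its (±1) labels is confidently biased away from zero, then predict the sign. This is a simplification for illustration only; the threshold constant, the stopping test, and the abstention behavior here are our assumptions, not the paper's actual algorithm or guarantees.

```python
import numpy as np

def adaptive_knn_predict(X, y, query, delta=0.05, c1=1.0):
    """Adaptive k-NN sketch: scan k = 1..n and stop at the smallest k whose
    neighborhood label average exceeds a threshold shrinking like 1/sqrt(k).
    y must contain labels in {-1, +1}. Returns +1 or -1, or 0 to abstain."""
    n = len(X)
    order = np.argsort(np.linalg.norm(X - query, axis=1))  # neighbors by distance
    labels = y[order]
    for k in range(1, n + 1):
        bias = labels[:k].mean()
        threshold = c1 * np.sqrt((np.log(n) + np.log(1.0 / delta)) / k)
        if abs(bias) > threshold:  # neighborhood is confidently biased: stop
            return int(np.sign(bias))
    return 0  # no neighborhood size was conclusive: abstain

# Two well-separated clusters with opposite labels:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
pred = adaptive_knn_predict(X, y, np.array([3.0, 3.0]))
```

Note the design choice the theorem reflects: because the threshold shrinks with k, easy (strongly biased) regions of space stop at small k, while harder regions automatically examine more neighbors.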