Self-Supervised Contrastive Pre-Training for Multivariate Point Processes
Xiao Shou, Dharmashankar Subramanian, Debarun Bhattacharjya, Tian Gao, Kristin P. Bennett
Self-supervision is one of the hallmarks of representation learning in the increasingly popular suite of foundation models, including large language models such as BERT and GPT-3, but to the best of our knowledge it has not been pursued in the context of multivariate event streams. We introduce a new paradigm for self-supervised learning for multivariate point processes using a transformer encoder. Specifically, we design a novel pre-training strategy for the encoder in which we not only mask random event epochs but also insert randomly sampled "void" epochs where no event occurs; this differs from typical discrete-time pretext tasks such as word masking in BERT and extends the effectiveness of masking to better capture continuous-time dynamics. To improve downstream tasks, we introduce a contrasting module that compares real events to simulated void instances. The pre-trained model can subsequently be fine-tuned on a potentially much smaller event dataset, conceptually similar to the transfer of popular pre-trained language models. We demonstrate the effectiveness of our proposed paradigm on the next-event prediction task using synthetic datasets and three real applications, observing a relative performance boost of up to 20% compared to state-of-the-art models.
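The core pretext task described above (masking real event epochs and inserting sampled void epochs to contrast against them) can be illustrated with a minimal sketch. The function and constant names below (mask_and_insert_voids, MASK_TYPE, VOID_TYPE) and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the masking + void-epoch augmentation on an event stream.
# An event stream is a time-sorted list of (timestamp, event_type) pairs.
import random

MASK_TYPE = -1   # placeholder type for masked event epochs (assumption)
VOID_TYPE = -2   # placeholder type for inserted "void" epochs (assumption)

def mask_and_insert_voids(events, mask_prob=0.15, num_voids=2, horizon=None, seed=0):
    """events: list of (time, type) pairs sorted by time.
    Returns (augmented_sequence, labels), where labels mark real (1) vs. void (0)
    epochs for a contrastive objective comparing real events to void instances."""
    rng = random.Random(seed)
    horizon = horizon if horizon is not None else events[-1][0]

    augmented, labels = [], []
    for t, k in events:
        # Randomly mask some real event epochs (analogous to word masking in BERT,
        # but the continuous timestamp t is kept so timing information survives).
        if rng.random() < mask_prob:
            augmented.append((t, MASK_TYPE))
        else:
            augmented.append((t, k))
        labels.append(1)  # real event epoch

    # Insert randomly sampled void epochs at times where no event occurred.
    for _ in range(num_voids):
        t_void = rng.uniform(0.0, horizon)
        augmented.append((t_void, VOID_TYPE))
        labels.append(0)  # void epoch, contrasted against real events

    # Re-sort by time so a transformer encoder sees a valid continuous-time stream.
    order = sorted(range(len(augmented)), key=lambda i: augmented[i][0])
    return [augmented[i] for i in order], [labels[i] for i in order]

if __name__ == "__main__":
    stream = [(0.4, 2), (1.1, 0), (2.7, 1), (3.5, 2)]
    seq, lab = mask_and_insert_voids(stream, mask_prob=0.5, num_voids=2, seed=7)
    print(seq)
    print(lab)
```

In this sketch, the augmented sequence would feed the transformer encoder during pre-training, and the real-vs-void labels would supply the targets for the contrasting module; the actual architecture and loss are detailed in the paper.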
arXiv.org Artificial Intelligence
Feb-1-2024
- Country:
- North America > United States > New York (0.14)
- Genre:
- Research Report (1.00)
- Industry:
- Information Technology (0.46)
- Technology: