Kireev, Ivan
ESQA: Event Sequences Question Answering
Abdullaeva, Irina, Filatov, Andrei, Orlov, Mikhail, Karpukhin, Ivan, Vasilev, Viacheslav, Dimitrov, Denis, Kuznetsov, Andrey, Kireev, Ivan, Savchenko, Andrey
Event sequences (ESs) arise in many practical domains, including finance, retail, social networks, and healthcare. In the context of machine learning, event sequences can be seen as a special type of tabular data with annotated timestamps. Despite the importance of ES modeling and analysis, little effort has been made to adapt large language models (LLMs) to the ES domain. In this paper, we highlight the common difficulties of ES processing and propose a novel solution capable of solving multiple downstream tasks with little or no finetuning. In particular, we solve the problem of working with long sequences and improve the processing of temporal and numeric features. The resulting method, called ESQA, effectively utilizes the power of LLMs and, according to extensive experiments, achieves state-of-the-art results in the ES domain.
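The abstract describes feeding timestamped tabular events to an LLM for question answering. A minimal sketch of one natural baseline, serializing events into a text prompt, is shown below; the field names and prompt template are illustrative assumptions, not ESQA's actual encoding.

```python
# Sketch: rendering an event sequence as text for LLM-based question answering.
# The (timestamp, type, amount) schema and prompt layout are assumptions
# for illustration only.

def serialize_events(events):
    """Render (timestamp, type, amount) events as compact text lines."""
    return "\n".join(
        f"t={ts} type={etype} amount={amount:.2f}" for ts, etype, amount in events
    )

def build_prompt(events, question):
    """Combine the serialized history with a natural-language question."""
    return (
        "Event sequence:\n"
        + serialize_events(events)
        + f"\nQuestion: {question}\nAnswer:"
    )

events = [(0, "purchase", 12.5), (3, "refund", 12.5)]
prompt = build_prompt(events, "How many refunds occurred?")
```

Naive serialization like this quickly exhausts an LLM's context window on long histories, which is precisely the long-sequence problem the paper targets with a dedicated encoding.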
Universal representations for financial transactional data: embracing local, global, and external contexts
Bazarova, Alexandra, Kovaleva, Maria, Kuleshov, Ilya, Romanenkova, Evgenia, Stepikin, Alexander, Yugay, Alexandr, Mollaev, Dzhambulat, Kireev, Ivan, Savchenko, Andrey, Zaytsev, Alexey
Effective processing of financial transactions is essential for banking data analysis. However, most methods in this domain focus on specialized solutions to stand-alone problems instead of constructing universal representations suitable for many problems. We present a representation learning framework that addresses diverse business challenges. We also propose novel generative models that account for data specifics, and a way to integrate external information into a client's representation by leveraging insights from other customers' actions. Finally, we offer a benchmark that describes representation quality globally, concerning the entire transaction history; locally, reflecting the client's current state; and dynamically, capturing representation evolution over time. Our generative approach demonstrates superior performance on local tasks, with ROC-AUC gains over existing contrastive baselines of up to 14\% on next-MCC prediction and up to 46\% on downstream tasks. Incorporating external information improves the scores by an additional 20\%.
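The global/local distinction in the benchmark above can be illustrated with a small sketch: given per-transaction embeddings ordered by time, a "global" representation summarizes the entire history, while a "local" one reflects only recent activity. The mean-pooling choices below are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def client_representations(txn_embeddings, local_window=10):
    """Build global and local client representations.

    txn_embeddings: (T, d) array of per-transaction embeddings,
    ordered by time.
    """
    # Global view: pool over the whole transaction history.
    global_repr = txn_embeddings.mean(axis=0)
    # Local view: pool only over the most recent transactions,
    # approximating the client's current state.
    local_repr = txn_embeddings[-local_window:].mean(axis=0)
    return global_repr, local_repr
```

Downstream tasks such as next-MCC prediction would consume the local view, while history-level tasks (e.g., client classification) would use the global one.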
Continuous-time convolutions model of event sequences
Zhuzhel, Vladislav, Grabar, Vsevolod, Boeva, Galina, Zabolotnyi, Artem, Stepikin, Alexander, Zholobov, Vladimir, Ivanova, Maria, Orlov, Mikhail, Kireev, Ivan, Burnaev, Evgeny, Rivera-Castro, Rodrigo, Zaytsev, Alexey
Massive samples of event sequence data occur in various domains, including e-commerce, healthcare, and finance. There are two main challenges in inference over such data: computational and methodological. The amount of available data and the length of event sequences per client are typically large, which calls for long-term modelling. Moreover, this data is often sparse and non-uniform, making classic approaches to time series processing inapplicable. Existing solutions rely on recurrent and transformer architectures; to model continuous time, they introduce parametric intensity functions defined at every moment on top of these backbones. Due to their parametric form, such intensities represent only a limited class of event sequences. We propose the COTIC method, based on a continuous convolutional neural network suited to non-uniform occurrence of events in time. In COTIC, dilations and a multi-layer architecture efficiently handle dependencies between events. Furthermore, the model provides general intensity dynamics in continuous time, including the self-excitement encountered in practice. The COTIC model outperforms existing approaches on the majority of the considered datasets, producing embeddings for an event sequence that can be used to solve downstream tasks, e.g., predicting the next event type and the return time. The code of the proposed method can be found in the GitHub repository (https://github.com/VladislavZh/COTIC).
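The core idea of a continuous-time convolution can be sketched in a few lines: the kernel is a function of the real-valued time gap between events, so inputs need not lie on a regular grid. The exponential-decay kernel below is an illustrative assumption standing in for COTIC's learned kernel network.

```python
import numpy as np

def continuous_conv(times, features, decay=1.0):
    """Continuous-time causal convolution over irregularly timed events.

    For each event i, aggregate the features of past events j weighted by
    a kernel evaluated at the real-valued gap t_i - t_j. The exponential
    kernel here is a placeholder assumption, not COTIC's learned kernel.
    """
    times = np.asarray(times, dtype=float)
    features = np.asarray(features, dtype=float)
    out = np.zeros_like(features)
    for i in range(len(times)):
        dt = times[i] - times[: i + 1]   # non-negative gaps to past events
        w = np.exp(-decay * dt)          # kernel applied to each gap
        out[i] = (w[:, None] * features[: i + 1]).sum(axis=0)
    return out
```

Because the kernel takes arbitrary positive gaps, nothing forces events onto a grid; stacking such layers with dilations, as the abstract describes, extends the effective receptive field over long histories.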