PDT: Pretrained Dual Transformers for Time-aware Bipartite Graphs
Xin Dai, Yujie Fan, Zhongfang Zhuang, Shubham Jain, Chin-Chia Michael Yeh, Junpeng Wang, Liang Wang, Yan Zheng, Prince Osei Aboagye, Wei Zhang
–arXiv.org Artificial Intelligence
Pre-training on large models is prevalent and emerging with the ever-growing user-generated content in many machine learning application categories. It has been recognized that learning contextual knowledge from the datasets depicting user-content interaction plays a vital role in downstream tasks. Despite several studies attempting to learn contextual knowledge via pre-training methods, finding an optimal training objective and strategy for this type of task remains a challenging problem. In this work, we contend that there are two distinct aspects of contextual knowledge, namely the user-side and the content-side, for datasets where user-content …

Fundamentally, a common goal of data mining applications using user-content interactions is to understand users' behaviors [17] and content's properties. Researchers attempt multiple ways to model such behaviors: the time-related nature of the interactions is a fit for sequential models, such as recurrent neural networks (RNN), and the interactions and relations can be modeled as graph neural networks (GNN). Conventionally, the training objective is to minimize the loss of a specific task such that one model is tailored to a particular application (e.g., recommendation). This approach is simple and effective for every data mining application.
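As a rough illustration of the setting described above (not code from the paper), time-aware user-content interactions can be stored as a bipartite edge list of (user, content, timestamp) triples and then grouped into per-user, time-ordered histories that a sequential model such as an RNN or Transformer could consume. All names and fields below are hypothetical.

```python
# Hypothetical sketch: time-stamped bipartite interaction data,
# grouped per user in temporal order. Illustrative only.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Interaction:
    user_id: int      # node on the user side of the bipartite graph
    content_id: int   # node on the content side
    timestamp: float  # when the interaction occurred

def to_user_sequences(interactions: List[Interaction]) -> Dict[int, List[int]]:
    """Group interactions per user, ordered by time, so a sequential
    model can consume each user's interaction history."""
    sequences: Dict[int, List[int]] = {}
    for it in sorted(interactions, key=lambda x: x.timestamp):
        sequences.setdefault(it.user_id, []).append(it.content_id)
    return sequences

# Example: two users interacting with three pieces of content over time.
events = [Interaction(0, 10, 1.0), Interaction(1, 11, 2.0), Interaction(0, 12, 3.0)]
print(to_user_sequences(events))  # {0: [10, 12], 1: [11]}
```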
Sep-25-2023
- Country:
- North America > United States (0.15)
- Genre:
- Research Report (1.00)
- Industry:
- Leisure & Entertainment (0.95)
- Media > Film (1.00)