Kothari, Mohit
From Features to Transformers: Redefining Ranking for Scalable Impact
Borisyuk, Fedor, Hertel, Lars, Parameswaran, Ganesh, Srivastava, Gaurav, Ramanujam, Sudarshan Srinivasa, Ocejo, Borja, Du, Peng, Akterskii, Andrei, Daftary, Neil, Tang, Shao, Sun, Daqi, Xiao, Qiang Charles, Nathani, Deepesh, Kothari, Mohit, Dai, Yun, Gupta, Aman
We present LiGR, a large-scale ranking framework developed at LinkedIn that brings state-of-the-art transformer-based modeling architectures into production. We introduce a modified transformer architecture that incorporates learned normalization and simultaneous set-wise attention to user history and ranked items. This architecture enables several breakthrough achievements, including: (1) the deprecation of most manually designed feature engineering, outperforming the prior state-of-the-art system using only a few features (compared to hundreds in the baseline), (2) validation of the scaling law for ranking systems, showing improved performance with larger models, more training data, and longer context sequences, and (3) simultaneous joint scoring of items in a set-wise manner, leading to automated improvements in diversity. To enable efficient serving of large ranking models, we describe techniques to scale inference effectively using single-pass processing of user history and set-wise attention. We also summarize key insights from various ablation studies and A/B tests, highlighting the most impactful technical approaches.
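The core idea of set-wise attention can be illustrated with a minimal sketch: candidate items attend both to the member's activity history and to each other, so the whole slate is scored in a single forward pass. This is not the production LiGR code; it uses standard pre-norm transformer layers rather than the paper's learned normalization, and all module and argument names are illustrative assumptions.

```python
# Minimal sketch of set-wise scoring with a transformer (illustrative only).
import torch
import torch.nn as nn


class SetWiseRanker(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True, norm_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Per-item scoring head applied to the contextualized candidate embeddings.
        self.score_head = nn.Linear(d_model, 1)

    def forward(self, history: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        """history: (B, H, d) member activity embeddings; candidates: (B, C, d) items to rank."""
        # One sequence per member: history tokens followed by the candidate set,
        # so candidates attend to the history and to one another (set-wise).
        tokens = torch.cat([history, candidates], dim=1)
        encoded = self.encoder(tokens)
        # Keep only the candidate positions and score them jointly.
        candidate_repr = encoded[:, history.size(1):, :]
        return self.score_head(candidate_repr).squeeze(-1)  # (B, C) scores


if __name__ == "__main__":
    model = SetWiseRanker()
    history = torch.randn(2, 16, 64)    # 16 past interactions per member
    candidates = torch.randn(2, 8, 64)  # 8 items ranked jointly
    print(model(history, candidates).shape)  # torch.Size([2, 8])
```

Because every candidate sees the rest of the slate during scoring, diversity-aware behavior can emerge without a separate re-ranking stage, which is the property the abstract attributes to set-wise scoring.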
LiRank: Industrial Large Scale Ranking Models at LinkedIn
Borisyuk, Fedor, Zhou, Mingzhou, Song, Qingquan, Zhu, Siyu, Tiwana, Birjodh, Parameswaran, Ganesh, Dangi, Siddharth, Hertel, Lars, Xiao, Qiang, Hou, Xiaochen, Ouyang, Yunbo, Gupta, Aman, Singh, Sheallika, Liu, Dan, Cheng, Hailing, Le, Lei, Hung, Jonathan, Keerthi, Sathiya, Wang, Ruoyan, Zhang, Fengyu, Kothari, Mohit, Zhu, Chen, Sun, Daqi, Dai, Yun, Luan, Xun, Zhu, Sirou, Wang, Zhiwei, Daftary, Neil, Shen, Qianqi, Jiang, Chengming, Wei, Haichao, Varshney, Maneesh, Ghoting, Amol, Ghosh, Souvik
We present LiRank, a large-scale ranking framework at LinkedIn that brings state-of-the-art modeling architectures and optimization methods to production. We unveil several modeling improvements, including Residual DCN, which adds attention and residual connections to the well-known DCNv2 architecture. We share insights into combining and tuning SOTA architectures to create a unified model, including Dense Gating, Transformers, and Residual DCN. We also propose novel techniques for calibration and describe how we productionalized deep learning based explore/exploit methods. To enable effective, production-grade serving of large ranking models, we detail how to train and compress models using quantization and vocabulary compression. We provide details about the deployment setup for large-scale use cases of Feed ranking, Jobs Recommendations, and Ads click-through rate (CTR) prediction. We summarize our learnings from various A/B tests by elucidating the most effective technical approaches. These ideas have contributed to relative metrics improvements across the board at LinkedIn: +0.5% member sessions in the Feed, +1.76% qualified job applications for Jobs search and recommendations, and +4.3% for Ads CTR. We hope this work can provide practical insights and solutions for practitioners interested in leveraging large-scale deep ranking systems.
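For context, the DCNv2 cross layer that Residual DCN builds on computes x_{l+1} = x_0 * (W x_l + b) + x_l, i.e. an explicit feature crossing with the original input plus a residual term. The sketch below shows only this baseline building block; the attention mechanism the paper adds on top is omitted, and the class names and sizes are illustrative assumptions rather than LinkedIn's implementation.

```python
# Minimal sketch of a DCNv2-style cross network (illustrative only).
import torch
import torch.nn as nn


class CrossLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x0: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Explicit crossing with the original input x0, plus a residual term.
        return x0 * self.linear(x) + x


class CrossNetwork(nn.Module):
    def __init__(self, dim: int, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(CrossLayer(dim) for _ in range(n_layers))

    def forward(self, x0: torch.Tensor) -> torch.Tensor:
        x = x0
        for layer in self.layers:
            x = layer(x0, x)  # each layer crosses against the raw input
        return x


if __name__ == "__main__":
    features = torch.randn(4, 128)  # concatenated dense + embedded sparse features
    print(CrossNetwork(dim=128)(features).shape)  # torch.Size([4, 128])
```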