Generative Pre-trained Ranking Model with Over-parameterization at Web-Scale (Extended Abstract)
Yuchen Li, Haoyi Xiong, Linghe Kong, Jiang Bian, Shuaiqiang Wang, Guihai Chen, Dawei Yin
arXiv.org Artificial Intelligence
Learning to rank (LTR) is widely employed in web searches to prioritize pertinent webpages from retrieved content based on input queries. However, traditional LTR models encounter two principal obstacles that lead to suboptimal performance: (1) the lack of well-annotated query-webpage pairs with ranking scores covering a diverse range of search query popularities, which hampers their ability to address queries across the popularity spectrum, and (2) inadequately trained models that fail to induce generalized representations for LTR, resulting in overfitting. To address these challenges, we propose

The optimization of the user experience, achieved by catering to information needs, largely depends on the effective sorting of retrieved content. In this realm, Learning to Rank (LTR) becomes instrumental, requiring a considerable amount of query-webpage pairings with relevancy scores for effective supervised LTR [Li et al., 2023b; Qin and Liu, 2013; Li et al., 2023c; Lyu et al., 2020; Peng et al., 2024; Wang et al., 2024b]. Nevertheless, the commonplace scarcity of well-described query-webpage pairings often compels semi-supervised LTR, harnessing both labeled and unlabeled samples for the process [Szummer and Yilmaz, 2011; Zhang et al., 2016; Zhu et al., 2023; Peng et al., 2023].
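To make the two ideas above concrete, the following is a minimal, self-contained sketch of pairwise supervised LTR combined with a simple self-training step over unlabeled documents. It is an illustration only, not the authors' method: the linear scorer, hinge-style pairwise loss, toy feature vectors, and the single pseudo-labeled pair are all assumptions chosen for brevity.

```python
# Sketch: pairwise LTR with a self-training (semi-supervised) step.
# All data and model choices here are toy assumptions for illustration.

def score(w, x):
    """Linear relevance score for a document feature vector x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_pairwise(w, pairs, lr=0.1, epochs=50):
    """SGD on a pairwise hinge loss: the relevant document of each pair
    should outscore the irrelevant one by a margin of 1."""
    for _ in range(epochs):
        for pos, neg in pairs:
            if score(w, pos) - score(w, neg) < 1.0:  # violated pair
                w = [wi + lr * (p - n) for wi, p, n in zip(w, pos, neg)]
    return w

# Labeled pairs per query: (relevant_doc_features, irrelevant_doc_features).
labeled = [([1.0, 0.2], [0.1, 0.9]), ([0.9, 0.1], [0.2, 0.8])]
w = train_pairwise([0.0, 0.0], labeled)

# Semi-supervised step: rank unlabeled documents with the current model and
# feed the most confidently ordered (top, bottom) pair back as a pseudo-label.
unlabeled = [[0.8, 0.3], [0.3, 0.7], [0.95, 0.15]]
ranked = sorted(unlabeled, key=lambda x: score(w, x), reverse=True)
pseudo_pair = (ranked[0], ranked[-1])
w = train_pairwise(w, labeled + [pseudo_pair])
```

In practice, production LTR systems use far richer scorers and listwise objectives, but the loop above captures the labeled/unlabeled interplay the passage describes: scarce annotated pairs drive the supervised loss, and model-assigned orderings over unlabeled documents supply additional training signal.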
Sep-24-2024