A Related Work

Neural Information Processing Systems

The latest CL-based CF methods roughly fall into two research lines. The prevailing first category, augmentation-based approaches, employs user-item bipartite graph augmentations to generate contrasting views; the second category, referred to as "loss-based" approaches, focuses on the design of the contrastive loss itself. Despite the remarkable success of CL-based CF methods, there remains a lack of theoretical understanding, particularly regarding the superior generalization ability of contrastive loss. B.4 Aligning the top-K evaluation metric: Discounted Cumulative Gain (DCG) is a commonly used ranking metric in top-K recommendation. In DCG, an item's contribution to the utility decreases logarithmically with its position in the ranked list. The training set comprises 311,704 user-selected ratings ranging from 1 to 5; the test set includes ratings for ten songs randomly exposed to each user.
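The logarithmic position discount described above can be sketched in a few lines; this is a generic illustration of DCG, and the function name and signature are ours, not the paper's:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted Cumulative Gain: each item's relevance is discounted
    logarithmically by its rank in the top-k list (ranks start at 1)."""
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(relevances[:k], start=1))

# A relevant item at rank 1 contributes its full relevance; the same
# item at rank 4 would contribute only rel / log2(5).
score = dcg_at_k([3, 2, 0, 1], k=4)
```

Dividing by the ideal DCG (the list sorted by relevance) gives the normalized variant, NDCG, which is what top-K recommenders typically report.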



Empowering Collaborative Filtering with Principled Adversarial Contrastive Loss

Neural Information Processing Systems

Contrastive Learning (CL) has achieved impressive performance in self-supervised learning tasks, showing superior generalization ability. Inspired by this success, adopting CL into collaborative filtering (CF) is prevailing in semi-supervised top-K recommendation. The basic idea is to routinely conduct heuristic-based data augmentation and apply contrastive losses (e.g., InfoNCE) on the augmented views. Yet, several CF-tailored challenges make this adoption suboptimal, such as the issue of out-of-distribution data, the risk of false negatives, and the nature of top-K evaluation. They necessitate the CL-based CF scheme to focus more on mining hard negatives and distinguishing false negatives from the vast unlabeled user-item interactions, for informative contrast signals. Worse still, there is limited understanding of contrastive loss in CF methods.
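For readers unfamiliar with the InfoNCE loss the abstract refers to, a minimal sketch of one anchor/positive pair against sampled negatives is below; the function name, shapes, and temperature value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE for one pair: pull the positive view toward the anchor
    and push sampled negatives away. Embeddings are assumed
    L2-normalized; tau is the softmax temperature."""
    pos_sim = anchor @ positive / tau      # scalar similarity
    neg_sims = negatives @ anchor / tau    # (num_negatives,)
    logits = np.concatenate(([pos_sim], neg_sims))
    # negative log of the softmax probability assigned to the positive
    return -pos_sim + np.log(np.sum(np.exp(logits)))

rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
anchor = unit(rng.normal(size=8))
negatives = np.stack([unit(rng.normal(size=8)) for _ in range(5)])
loss = info_nce(anchor, anchor, negatives)  # positive == anchor here
```

The small temperature sharpens the softmax, which is exactly why hard and false negatives (both highly similar to the anchor) dominate the gradient, the issue the paper targets.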



PSL: Rethinking and Improving Softmax Loss from Pairwise Perspective for Recommendation

Yang, Weiqin, Chen, Jiawei, Xin, Xin, Zhou, Sheng, Hu, Binbin, Feng, Yan, Chen, Chun, Wang, Can

arXiv.org Artificial Intelligence

Softmax Loss (SL) is widely applied in recommender systems (RS) and has demonstrated effectiveness. This work analyzes SL from a pairwise perspective, revealing two significant limitations: 1) the relationship between SL and conventional ranking metrics like DCG is not sufficiently tight; 2) SL is highly sensitive to false negative instances. Our analysis indicates that these limitations are primarily due to the use of the exponential function. To address these issues, this work extends SL to a new family of loss functions, termed Pairwise Softmax Loss (PSL), which replaces the exponential function in SL with other appropriate activation functions. While the revision is minimal, we highlight three merits of PSL: 1) it serves as a tighter surrogate for DCG with suitable activation functions; 2) it better balances data contributions; and 3) it acts as a specific BPR loss enhanced by Distributionally Robust Optimization (DRO).
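The "minimal revision" the abstract describes can be sketched from the pairwise view: for one positive item, softmax loss rewrites exactly as a log of one plus a sum of exponentiated score differences, and PSL swaps that exponential for another activation. The helper below is our illustrative reading of the abstract, not the authors' code:

```python
import numpy as np

def pairwise_style_loss(pos_score, neg_scores, act=np.exp, tau=1.0):
    """Pairwise rewriting of softmax loss for one positive:
        -log(exp(s+/tau) / sum_j exp(s_j/tau))
          = log(1 + sum_neg exp((s_neg - s+)/tau)).
    Passing a different activation for `act` (e.g. a shifted ReLU)
    yields a PSL-style variant; the exact activations are in the paper."""
    diffs = (np.asarray(neg_scores, dtype=float) - pos_score) / tau
    return np.log1p(np.sum(act(diffs)))

sl = pairwise_style_loss(2.0, [1.0, 0.5])                # standard SL
psl_like = pairwise_style_loss(2.0, [1.0, 0.5],
                               act=lambda d: np.maximum(1.0 + d, 0.0))
```

The pairwise form makes both claimed limitations visible: the exponential blows up the weight of any negative scored above the positive, which includes false negatives, while a bounded activation caps that contribution.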

