ImplicitSLIM and How it Improves Embedding-based Collaborative Filtering

Ilya Shenbin, Sergey Nikolenko

arXiv.org Artificial Intelligence 

Sparse linear methods (SLIM) and their variations show outstanding performance, but they are memory-intensive and hard to scale. ImplicitSLIM improves embedding-based models by extracting embeddings from SLIM-like models in a computationally cheap and memory-efficient way, without explicitly learning heavy SLIM-like models. We show that ImplicitSLIM improves performance and speeds up convergence for both state-of-the-art and classical collaborative filtering methods.

Learnable embeddings are a core part of many collaborative filtering (CF) models, and in this work we propose an approach that can improve a wide variety of CF models with learnable embeddings. Item-item methods, including kNN-based approaches (Sarwar et al., 2001) and sparse linear methods (SLIM) (Ning & Karypis, 2011), make predictions based on item-item similarity. Previous research shows that the item-item weight matrix learned by SLIM-like models can become part of other collaborative filtering models; e.g., RecWalk uses it as a transition probability matrix (Nikolakopoulos & Karypis, 2019). Here we reuse the item-item weight matrix to enrich embedding-based models with information on item-item interactions; a toy sketch of the item-item scheme follows below.

Another motivation for our approach stems from nonlinear dimensionality reduction methods (e.g., VAEs) applied to collaborative filtering (Shenbin et al., 2020). We consider a group of manifold learning methods that aim to preserve the structure of the data in the embedding space, that is, they force embeddings of similar objects to be similar.
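To make the item-item scheme concrete, here is a minimal sketch of scoring with an item-item weight matrix, assuming a binary user-item matrix X and a crude co-occurrence stand-in for the weight matrix B (the real SLIM learns B with an L1/L2-regularized least-squares objective; the data and normalization below are hypothetical illustrations, not the paper's procedure):

```python
import numpy as np

# Toy binary user-item interaction matrix X (users x items); hypothetical data.
X = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
], dtype=float)

# SLIM-style item-item weight matrix B: here a crude stand-in built from
# item co-occurrence with the diagonal zeroed out (SLIM forbids trivial
# self-similarity); column-normalized so weights are comparable.
B = X.T @ X
np.fill_diagonal(B, 0.0)
B /= np.maximum(B.sum(axis=0, keepdims=True), 1e-12)

# Item-item prediction: a user's score for an item is a weighted sum of the
# items they already interacted with, i.e. scores = X @ B.
scores = X @ B
print(scores.round(2))
```

The key point this illustrates is that all predictive information sits in the item-item matrix B, which is exactly what SLIM-like models learn and what embedding-based models can reuse.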
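The manifold learning intuition, forcing embeddings of similar objects to be similar, can be written as a Laplacian-style penalty. The sketch below is a generic illustration under assumed names (E for item embeddings, S for an item-item similarity matrix such as a SLIM-like weight matrix), not the actual ImplicitSLIM update:

```python
import numpy as np

def laplacian_penalty(E, S):
    """Sum_{i,j} S[i, j] * ||E[i] - E[j]||^2: the penalty grows when items
    with large similarity S[i, j] have distant embeddings, so minimizing it
    pulls embeddings of similar items together."""
    # Squared pairwise distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (E ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * E @ E.T
    return float((S * d2).sum())

rng = np.random.default_rng(0)
E = rng.normal(size=(4, 3))          # item embeddings (items x dim); toy values
S = np.array([[0, 1, 0, 0],          # item-item similarity, e.g. a SLIM-like
              [1, 0, 0, 0],          # weight matrix; hypothetical toy values
              [0, 0, 0, 1],
              [0, 0, 1, 0]], float)
print(laplacian_penalty(E, S))
```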
