Yang, Hongxia
Variational Auto-encoder for Recommender Systems with Exploration-Exploitation
Zhang, Yizi, Yang, Hongxia, Liu, Meimei
Variational auto-encoder (VAE) is an efficient non-linear latent factor model that has been widely applied in recommender systems (RS). However, a drawback of VAEs for RS is their inability to explore. A good RS is expected to recommend both items that users are known to enjoy and novel items for them to try. In this work, we introduce an exploitation-exploration motivated VAE (XploVAE) for collaborative filtering. To facilitate personalized recommendations, we construct user-specific subgraphs, which capture first-order proximity (observed user-item interactions) for exploitation and higher-order proximity for exploration. We further develop a hierarchical latent space model to learn the population distribution of the user subgraphs and the personalized item embeddings. Empirical experiments demonstrate the effectiveness of the proposed method on various real-world datasets.
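As a rough illustration of the exploitation-exploration idea described above, the hedged sketch below mixes a user's first-order proximity vector (observed interactions) with a higher-order proximity vector (e.g., multi-hop co-occurrence) before feeding it to a standard Mult-VAE-style encoder/decoder. This is not the authors' XploVAE implementation; the mixing weight alpha, the layer sizes, and the loss weighting are illustrative assumptions.

# Hypothetical sketch, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProximityMixVAE(nn.Module):
    def __init__(self, n_items, latent_dim=64, hidden_dim=256, alpha=0.8):
        super().__init__()
        self.alpha = alpha                      # weight on first-order proximity (assumed)
        self.encoder = nn.Sequential(nn.Linear(n_items, hidden_dim), nn.Tanh())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, n_items)

    def forward(self, first_order, higher_order):
        # Mix exploitation (observed interactions) and exploration (multi-hop) signals.
        x = self.alpha * first_order + (1.0 - self.alpha) * higher_order
        h = self.encoder(F.normalize(x, dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar      # scores over all items

def elbo_loss(logits, targets, mu, logvar, beta=0.2):
    # Multinomial likelihood over interacted items plus a beta-weighted KL term.
    recon = -(F.log_softmax(logits, dim=-1) * targets).sum(-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return recon + beta * kl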
Learning Disentangled Representations for Recommendation
Ma, Jianxin, Zhou, Chang, Cui, Peng, Yang, Hongxia, Zhu, Wenwu
User behavior data in recommender systems are driven by the complex interactions of many latent factors behind the users' decision making processes. The factors are highly entangled, and may range from high-level ones that govern user intentions, to low-level ones that characterize a user's preference when executing an intention. Learning representations that uncover and disentangle these latent factors can bring enhanced robustness, interpretability, and controllability. However, learning such disentangled representations from user behavior is challenging, and remains largely neglected by the existing literature. In this paper, we present the MACRo-mIcro Disentangled Variational Auto-Encoder (MacridVAE) for learning disentangled representations from user behavior.
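A minimal sketch of the macro-micro idea, under assumed details rather than the MacridVAE architecture itself: items are softly assigned to K "macro" concepts via learnable prototypes, a separate "micro" Gaussian latent is inferred per concept from the user's interactions, and item scores are recombined with the same assignment. All names, dimensions, and the pooling scheme are assumptions for exposition.

# Illustrative sketch only, not the MacridVAE implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MacroMicroEncoder(nn.Module):
    def __init__(self, n_items, n_concepts=4, dim=32, tau=0.1):
        super().__init__()
        self.item_emb = nn.Parameter(torch.randn(n_items, dim) * 0.01)
        self.prototypes = nn.Parameter(torch.randn(n_concepts, dim) * 0.01)
        self.tau = tau
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, interactions):                    # interactions: (B, n_items) 0/1
        items = F.normalize(self.item_emb, dim=-1)
        protos = F.normalize(self.prototypes, dim=-1)
        # Macro: soft assignment of each item to a concept (user intention).
        assign = F.softmax(items @ protos.t() / self.tau, dim=-1)   # (n_items, K)
        # Concept-specific pooling of the user's interacted items.
        weights = interactions.unsqueeze(-1) * assign.unsqueeze(0)  # (B, n_items, K)
        pooled = torch.einsum('bik,id->bkd', weights, items)
        pooled = pooled / weights.sum(1).clamp(min=1e-8).unsqueeze(-1)
        # Micro: one Gaussian latent per concept.
        mu, logvar = self.mu(pooled), self.logvar(pooled)           # (B, K, dim)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Score items under each concept and combine by the same assignment.
        scores = torch.einsum('bkd,id->bik', z, items)              # (B, n_items, K)
        logits = (scores * assign.unsqueeze(0)).sum(-1)             # (B, n_items)
        return logits, mu, logvar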
Bayes EMbedding (BEM): Refining Representation by Integrating Knowledge Graphs and Behavior-specific Networks
Ye, Yuting, Wang, Xuwu, Yao, Jiangchao, Jia, Kunyang, Zhou, Jingren, Xiao, Yanghua, Yang, Hongxia
Low-dimensional embeddings of knowledge graphs and behavior graphs have proved remarkably powerful in a variety of tasks, from predicting unobserved edges between entities to content recommendation. The two types of graphs can contain distinct and complementary information for the same entities/nodes. However, previous works focus either on knowledge graph embedding or behavior graph embedding, while few works consider both in a unified way. Here we present BEM, a Bayesian framework that incorporates the information from knowledge graphs and behavior graphs. More specifically, BEM takes the pre-trained embeddings from the knowledge graph as the prior and integrates them with the pre-trained embeddings from the behavior graphs via a Bayesian generative model. BEM is able to mutually refine the embeddings from both sides while preserving their own topological structures. To show the superiority of our method, we conduct a range of experiments: node classification, link prediction, and triplet classification on two small datasets related to Freebase, and item recommendation on a large-scale e-commerce dataset.
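One way to picture the refinement step is a conjugate-Gaussian fusion of the two pre-trained embeddings of a shared entity, with the knowledge-graph embedding as the prior mean and the behavior-graph embedding as a noisy observation. This is only a toy stand-in for BEM's generative model; the noise scales and the assumption that both embeddings share the same dimension are simplifications.

# Illustrative sketch only, not the BEM generative model from the paper.
import numpy as np

def fuse_embeddings(kg_emb, bh_emb, sigma_kg=1.0, sigma_bh=1.0):
    """Posterior mean/variance for each shared entity.

    kg_emb, bh_emb: (n_entities, dim) arrays of pre-trained embeddings,
    assumed to be aligned to the same entity index and dimension.
    """
    prec_kg = 1.0 / sigma_kg ** 2          # prior precision (knowledge graph)
    prec_bh = 1.0 / sigma_bh ** 2          # likelihood precision (behavior graph)
    post_var = 1.0 / (prec_kg + prec_bh)
    post_mean = post_var * (prec_kg * kg_emb + prec_bh * bh_emb)
    return post_mean, post_var

# Toy usage: two 5-entity, 8-dimensional embedding tables.
rng = np.random.default_rng(0)
kg = rng.normal(size=(5, 8))
bh = kg + 0.3 * rng.normal(size=(5, 8))    # behavior view as a noisy variant
refined, var = fuse_embeddings(kg, bh, sigma_kg=1.0, sigma_bh=0.5)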
Dimensional Reweighting Graph Convolutional Networks
Zou, Xu, Jia, Qiuye, Zhang, Jianwei, Zhou, Chang, Yang, Hongxia, Tang, Jie
Graph Convolutional Networks (GCNs) are increasingly popular for learning node representations on graphs. Though various sampling and aggregation techniques have been developed to accelerate training and improve performance, few works address the dimensional information imbalance of node representations. To bridge this gap, we propose Dimensional Reweighting Graph Convolutional Networks (DrGCN). We theoretically prove, via mean field theory, that DrGCN improves the stability of GCNs. Our dimensional reweighting method is flexible and can be easily combined with most sampling and aggregation techniques for GCNs. Experimental results demonstrate its superior performance on several challenging transductive and inductive node classification benchmarks. DrGCN also outperforms existing models on an industrial-sized Alibaba recommendation dataset.
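A hedged sketch of dimensional reweighting (assumed details, not the released DrGCN code): a plain GCN layer followed by a squeeze-and-excitation-style gate that rescales each feature dimension using statistics pooled over all nodes, which is one way to reweight the dimensions of node representations.

# Hypothetical sketch under the assumptions stated above.
import torch
import torch.nn as nn

class DimReweightedGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, reduction=4):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.gate = nn.Sequential(                 # per-dimension weights in (0, 1)
            nn.Linear(out_dim, out_dim // reduction), nn.ReLU(),
            nn.Linear(out_dim // reduction, out_dim), nn.Sigmoid())

    def forward(self, x, adj_norm):
        # x: (n_nodes, in_dim); adj_norm: (n_nodes, n_nodes) normalized adjacency.
        h = torch.relu(adj_norm @ self.linear(x))  # standard GCN propagation
        w = self.gate(h.mean(dim=0))               # pool over nodes -> (out_dim,)
        return h * w                               # rescale each dimension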
Cognitive Knowledge Graph Reasoning for One-shot Relational Learning
Du, Zhengxiao, Zhou, Chang, Ding, Ming, Yang, Hongxia, Tang, Jie
Inferring new facts from existing knowledge graphs (KG) with explainable reasoning processes is a significant problem and has received much attention recently. However, few studies have focused on relation types unseen in the original KG, given only one or a few instances for training. To bridge this gap, we propose CogKR for one-shot KG reasoning. The one-shot relational learning problem is tackled through two modules: the summary module summarizes the underlying relationship of the given instances, based on which the reasoning module infers the correct answers. Motivated by the dual process theory in cognitive science, in the reasoning module, a cognitive graph is built by iteratively coordinating retrieval (System 1, collecting relevant evidence intuitively) and reasoning (System 2, conducting relational reasoning over collected information). The structural information offered by the cognitive graph enables our model to aggregate pieces of evidence from multiple reasoning paths and explain the reasoning process graphically. Experiments show that CogKR substantially outperforms previous state-of-the-art models on one-shot KG reasoning benchmarks, with relative improvements of 24.3%-29.7% on MRR. The source code is available at https://github.com/THUDM/CogKR.
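The retrieval/reasoning loop can be made concrete with a small stand-in (not the THUDM/CogKR implementation): a frontier of entities is expanded with the highest-scoring neighboring facts (System 1), the selected facts accumulate into a cognitive graph, and candidate answers are re-scored over that graph (System 2). The score_edge and score_entity callables stand in for learned models and are assumptions.

# Illustrative stand-in, not the released CogKR code.
from collections import defaultdict

def one_shot_reason(kg_edges, start_entity, score_edge, score_entity,
                    hops=3, beam=5):
    """kg_edges: list of (head, relation, tail) triples.
    score_edge / score_entity: assumed callables standing in for learned models.
    Returns candidate answers ranked over the accumulated cognitive graph."""
    neighbors = defaultdict(list)
    for h, r, t in kg_edges:
        neighbors[h].append((r, t))

    cog_graph = set()                 # evidence collected so far
    frontier = {start_entity}
    for _ in range(hops):
        # System 1: intuitively retrieve the most promising facts to expand.
        candidates = [(h, r, t) for h in frontier for r, t in neighbors[h]]
        candidates.sort(key=score_edge, reverse=True)
        expansion = candidates[:beam]
        cog_graph.update(expansion)
        frontier = {t for _, _, t in expansion}
    # System 2: relational reasoning over the whole cognitive graph.
    answers = {t for _, _, t in cog_graph}
    return sorted(answers, key=lambda e: score_entity(e, cog_graph), reverse=True)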
Sequential Scenario-Specific Meta Learner for Online Recommendation
Du, Zhengxiao, Wang, Xiaowei, Yang, Hongxia, Zhou, Jingren, Tang, Jie
Cold-start problems are a long-standing challenge for practical recommendation. Most existing recommendation algorithms rely on extensive observed data and are brittle in recommendation scenarios with few interactions. This paper addresses such problems using few-shot learning and meta-learning. Our approach is based on the insight that good generalization from a few examples relies on both a generic model initialization and an effective strategy for adapting this model to newly arising tasks. To accomplish this, we combine scenario-specific learning with model-agnostic sequential meta-learning and unify them into an integrated end-to-end framework, the Scenario-specific Sequential Meta learner (s^2 meta). Our meta-learner produces a generic initial model by aggregating contextual information from a variety of prediction tasks and effectively adapts to specific tasks by leveraging learning-to-learn knowledge. Extensive experiments on various real-world datasets demonstrate that our proposed model achieves significant gains over state-of-the-art methods for cold-start problems in online recommendation. The model is deployed in the Guess You Like session on the front page of Mobile Taobao.
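A minimal MAML-style sketch of "generic initialization plus fast scenario-specific adaptation". The paper's meta-learner learns the adaptation strategy itself (learning-to-learn); plain gradient steps are used here as a stand-in, and all shapes and step sizes are assumptions.

# Hypothetical sketch, not the s^2 meta implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def adapt_to_scenario(meta_model, support_x, support_y, steps=3, lr=0.05):
    """Clone the meta-initialized model and fine-tune it on a few interactions
    from a new recommendation scenario (the cold-start 'support set')."""
    model = nn.Linear(meta_model.in_features, meta_model.out_features)
    model.load_state_dict(meta_model.state_dict())      # start from the generic init
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(model(support_x), support_y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# Toy usage: a 16-feature click model adapted from 8 observed interactions.
meta_model = nn.Linear(16, 1)
x = torch.randn(8, 16)
y = torch.randint(0, 2, (8, 1)).float()
scenario_model = adapt_to_scenario(meta_model, x, y)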