Time-based Sequence Model for Personalization and Recommendation Systems
Ishkhanov, Tigran, Naumov, Maxim, Chen, Xianjie, Zhu, Yan, Zhong, Yuan, Azzolini, Alisson Gusatti, Sun, Chonglin, Jiang, Frank, Malevich, Andrey, Xiong, Liang
Recommendation systems play an important role in many e-commerce applications as well as search and ranking services [6, 15, 21, 26, 30, 31, 41, 48]. There are two main strategies for performing recommendations: content filtering and collaborative filtering. In content filtering, the user creates a profile based on their interests, while human experts create a profile for each product; an algorithm then matches the two profiles and recommends the closest matches to the user. This approach is taken, for example, by the Pandora Music Genome Project [29]. In collaborative filtering, recommendations are based only on the user's past behavior, from which future behavior is inferred. The advantage of this approach is that it requires no external information and is not domain specific. The challenge is that, in the beginning, very few user-item interactions are available. Netflix, for instance, addresses this cold start problem by asking users for a few favorite movies when they first create their profile [27].
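To make the content-filtering strategy concrete, the sketch below matches a user profile against expert-built item profiles with cosine similarity and recommends the closest item. This is a minimal illustration under assumed details: the profile vectors, the item names, and the choice of cosine similarity are not from the paper.

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two profile vectors.
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Hypothetical profiles over three hand-picked attributes (e.g. genres).
    user_profile = np.array([0.9, 0.1, 0.4])
    item_profiles = {
        "item_a": np.array([0.8, 0.2, 0.3]),
        "item_b": np.array([0.1, 0.9, 0.5]),
    }

    # Recommend the item whose profile best matches the user's profile.
    best = max(item_profiles, key=lambda k: cosine(user_profile, item_profiles[k]))
    print(best)  # -> item_a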
Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations
Chen, Xianjie, Yuille, Alan L.
We present a method for estimating articulated human pose from a single static image, based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact that local image measurements can be used both to detect parts (or joints) and to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Our model thus combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. It significantly outperforms state-of-the-art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training on it.
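As a rough illustration of how such a model scores a candidate pose, the sketch below sums per-part (unary) log-probabilities with pairwise terms that take the best-scoring relation type from a mixture; in the paper both kinds of probabilities come from DCNNs evaluated on image patches. The data structures, the quadratic deformation cost, and all names here are assumptions for illustration, not the authors' implementation.

    def pose_score(locs, unary, rel_types, edges):
        # locs[i]          : (x, y) location chosen for part i
        # unary[i]         : dict (x, y) -> log-prob that part i is present there
        #                    (in the paper this would come from a DCNN)
        # rel_types[(i,j)] : list of (mean_dx, mean_dy, log_weight) mixture
        #                    components for the relation between parts i and j
        score = sum(unary[i].get(locs[i], -1e9) for i in range(len(locs)))
        for (i, j) in edges:
            dx = locs[j][0] - locs[i][0]
            dy = locs[j][1] - locs[i][1]
            # Mixture over pairwise relation types: keep the best type,
            # penalizing deviation from that type's expected displacement
            # with an assumed quadratic deformation cost.
            score += max(lw - ((dx - mx) ** 2 + (dy - my) ** 2)
                         for (mx, my, lw) in rel_types[(i, j)])
        return score

    # Toy usage: two parts, one edge, a single relation type.
    unary = [{(0, 0): -0.1}, {(5, 0): -0.2}]
    rel = {(0, 1): [(5.0, 0.0, 0.0)]}
    print(pose_score([(0, 0), (5, 0)], unary, rel, [(0, 1)]))  # -> -0.3

In a full system the highest-scoring pose over all location assignments would be found with dynamic programming on the tree-structured model; the sketch only evaluates one fixed assignment.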