State University of New York at Buffalo
Large Scale Constrained Linear Regression Revisited: Faster Algorithms via Preconditioning
Wang, Di (State University of New York at Buffalo) | Xu, Jinhui (State University of New York at Buffalo)
In this paper, we revisit the large-scale constrained linear regression problem and propose faster methods based on recent developments in sketching and optimization. Our algorithms combine (accelerated) mini-batch SGD with a new method called two-step preconditioning to achieve an approximate solution with a time complexity lower than that of the state-of-the-art techniques for the low precision case. Our idea can also be extended to the high precision case, which gives an alternative implementation of the Iterative Hessian Sketch (IHS) method with significantly improved time complexity. Experiments on benchmark and synthetic datasets suggest that our methods indeed outperform existing ones considerably in both the low and high precision cases.
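The general recipe behind sketching-based preconditioning can be conveyed with a minimal numpy sketch. This is not the authors' exact two-step preconditioner; the Gaussian sketch, the ball constraint, and the step-size heuristic below are illustrative assumptions.

```python
import numpy as np

def sketch_precondition_sgd(A, b, radius=10.0, sketch_size=None,
                            batch=64, steps=2000, seed=0):
    """Approximately solve min_{||x||_2 <= radius} ||Ax - b||_2^2 via
    (1) a Gaussian sketch SA, (2) a right preconditioner R from QR(SA),
    (3) projected mini-batch SGD in the preconditioned variable y = Rx.
    Heuristic sketch only; not the paper's exact two-step preconditioner."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = sketch_size or 4 * d
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # an SRHT would be faster
    _, R = np.linalg.qr(S @ A)                     # R^T R approximates A^T A
    lr = n / (4.0 * d)   # heuristic: preconditioned avg-loss Hessian ~ (2/n) I
    y = np.zeros(d)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch)
        Ai = np.linalg.solve(R.T, A[idx].T).T      # mini-batch rows of A R^{-1}
        y -= lr * 2.0 * Ai.T @ (Ai @ y - b[idx]) / batch
        x = np.linalg.solve(R, y)                  # back to the original variable
        nx_ = np.linalg.norm(x)
        if nx_ > radius:                           # project onto the ball
            y = R @ (x * radius / nx_)
    return np.linalg.solve(R, y)
```

Because R^T R approximates A^T A, the preconditioned design A R^{-1} is nearly orthonormal, which is what lets plain mini-batch SGD converge quickly here.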
Let Your Photos Talk: Generating Narrative Paragraph for Photo Stream via Bidirectional Attention Recurrent Neural Networks
Liu, Yu (State University of New York at Buffalo) | Fu, Jianlong (Microsoft Research Asia) | Mei, Tao (Microsoft Research Asia) | Chen, Chang Wen (State University of New York at Buffalo)
Automatic generation of natural language descriptions for individual images (a.k.a. image captioning) has attracted extensive research attention. In this paper, we take one step further to investigate the generation of a paragraph that describes a photo stream for the purpose of storytelling. This task is even more challenging than individual image description due to the difficulty in modeling the large visual variance in an ordered photo collection and in preserving the long-term language coherence among multiple sentences. To deal with these challenges, we formulate the task as a sequence-to-sequence learning problem and propose a novel joint learning model that leverages the semantic coherence in a photo stream. Specifically, to reduce visual variance, we learn a semantic space by jointly embedding each photo with its corresponding contextual sentence, so that semantically related photos and their correlations are discovered. Then, to preserve language coherence in the paragraph, we learn a novel Bidirectional Attention-based Recurrent Neural Network (BARNN) model, which can attend to the discovered semantic relations to produce a sentence sequence and maintain its consistency with the photo stream. We integrate the two learning components into a single optimization formulation and train the network in an end-to-end manner. Experiments on three widely used datasets (NYC/Disney/SIND) show that the proposed approach outperforms state-of-the-art methods by large margins on both retrieval and paragraph generation tasks. A user study with 40 human subjects further shows that the stories generated by our approach are subjectively preferred over those from the baselines.
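To make the attention mechanism concrete, here is a minimal PyTorch sketch of a GRU decoder that attends over photo-stream features while generating words. It is a generic attention decoder, not the paper's bidirectional attention architecture, and all layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class AttnStoryDecoder(nn.Module):
    """Generic attention-over-photos GRU decoder (illustrative; the
    paper's BARNN adds a bidirectional attention structure that this
    sketch does not reproduce)."""
    def __init__(self, vocab_size, d_img=512, d_hid=512, d_emb=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_emb)
        self.gru = nn.GRUCell(d_emb + d_img, d_hid)
        self.att = nn.Linear(d_hid + d_img, 1)
        self.out = nn.Linear(d_hid, vocab_size)

    def forward(self, photos, words, h):
        # photos: (N, d_img) features of the photo stream
        # words:  (T,) token ids of the sentence; h: (1, d_hid) initial state
        logits = []
        for w in words:
            # attention weights over the photo stream, conditioned on h
            hN = h.expand(photos.size(0), -1)
            a = torch.softmax(self.att(torch.cat([hN, photos], dim=1)), dim=0)
            ctx = (a * photos).sum(dim=0)             # attended visual context
            inp = torch.cat([self.embed(w), ctx]).unsqueeze(0)
            h = self.gru(inp, h)                      # one decoding step
            logits.append(self.out(h).squeeze(0))
        return torch.stack(logits), h
```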
Latent Discriminant Analysis with Representative Feature Discovery
Chen, Gang (State University of New York at Buffalo)
Linear Discriminant Analysis (LDA) is a well-known method for dimension reduction and classification, with a focus on discriminative feature selection. However, how to discover features that are representative as well as discriminative within the LDA model has not been explored. In this paper, we propose a latent Fisher discriminant model with representative feature discovery in a semi-supervised manner. Specifically, our model leverages the advantages of both discriminative and generative models by generalizing LDA with a data-driven prior over the latent variables. Thus, our method combines multi-class classification, latent variables, and dimension reduction in a unified Bayesian framework. We test our method on the MUSK and Corel datasets and obtain competitive results compared to baselines. We also demonstrate its capacity on the challenging TRECVID MED11 dataset for semantic keyframe extraction and conduct a human-factors, ranking-based experimental evaluation, which clearly demonstrates that our method consistently extracts more semantically meaningful keyframes than strong baselines.
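As a rough intuition for treating unknown labels as latent variables around an LDA model, here is a schematic hard-EM-style self-training loop built on scikit-learn's LDA. It is only a heuristic stand-in, not the paper's Bayesian latent Fisher discriminant with a data-driven prior, and the confidence threshold is an illustrative assumption.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def latent_lda_selftrain(X_lab, y_lab, X_unlab, rounds=5, thresh=0.9):
    """Hard-EM-style self-training with LDA: alternately (E) impute the
    latent labels of unlabeled points with the current model and
    (M) refit LDA on the confidently imputed ones. Heuristic sketch only."""
    lda = LinearDiscriminantAnalysis().fit(X_lab, y_lab)
    for _ in range(rounds):
        post = lda.predict_proba(X_unlab)             # E-step: label posteriors
        conf = post.max(axis=1) >= thresh             # keep confident points
        X = np.vstack([X_lab, X_unlab[conf]])
        y = np.concatenate([y_lab, lda.classes_[post[conf].argmax(axis=1)]])
        lda = LinearDiscriminantAnalysis().fit(X, y)  # M-step: refit
    return lda
```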
Novel Geometric Approach for Global Alignment of PPI Networks
Liu, Yangwei (State University of New York at Buffalo) | Ding, Hu (Michigan State University) | Chen, Danyang (State University of New York at Buffalo) | Xu, Jinhui (State University of New York at Buffalo)
In this paper we present a novel geometric method for the problem of global pairwise alignment of protein-protein interaction (PPI) networks. A PPI network can be viewed as a node-edge graph, and its alignment often requires solving some generalized version of the subgraph isomorphism problem, which is notoriously challenging and NP-hard; consequently, existing research has focused on designing algorithms with good practical performance. We propose a two-step algorithm for global pairwise PPI network alignment, consisting of a Geometric Step and a Min-Cost Max-Flow (MCMF) Step. Our algorithm first applies a graph embedding technique that preserves the topological structure of the original PPI networks and maps the problem from the graph domain to the geometric domain, and then computes a rigid transformation for one of the embedded PPI networks so as to minimize its Earth Mover's Distance (EMD) to the other. It then solves a Min-Cost Max-Flow problem using the (scaled) inverse of the sequence similarity scores as edge weights. By using the flow values from the two steps (i.e., EMD and Min-Cost Max-Flow) as matching scores, we are able to combine the two matching results to obtain the desired alignment. Unlike other popular alignment algorithms, which are either greedy or incremental, our algorithm optimizes the problem globally to yield an alignment of better quality.
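A highly simplified version of this two-step pipeline can be sketched in Python. The spectral embedding, the Procrustes fit (which here assumes a crude index correspondence between the two node sets), and the Hungarian assignment are all illustrative stand-ins for the paper's embedding, EMD-minimizing rigid transformation, and Min-Cost Max-Flow formulation.

```python
import numpy as np
import networkx as nx
from scipy.linalg import orthogonal_procrustes
from scipy.optimize import linear_sum_assignment

def align_ppi(G1, G2, dim=8, alpha=0.5, seq_sim=None):
    """Toy two-step alignment: (1) Laplacian-eigenmap embedding plus a
    rigid (orthogonal) alignment; (2) one-to-one matching on combined
    geometric and sequence-dissimilarity costs. Illustrative only."""
    def embed(G):
        L = nx.normalized_laplacian_matrix(G).toarray()
        _, V = np.linalg.eigh(L)
        return V[:, 1:dim + 1]              # drop the trivial eigenvector
    X, Y = embed(G1), embed(G2)
    n = min(len(X), len(Y))
    # crude rigid alignment; a real implementation would minimize EMD
    R, _ = orthogonal_procrustes(X[:n], Y[:n])
    X = X @ R
    geo = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    # combine geometry with (scaled) inverse sequence similarity, if given
    cost = geo if seq_sim is None else alpha * geo + (1 - alpha) / (seq_sim + 1e-9)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```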
Maximum Margin Dirichlet Process Mixtures for Clustering
Chen, Gang (State University of New York at Buffalo) | Zhang, Haiying (Chinese Academy of Sciences) | Xiong, Caiming (Metamind Inc.)
The Dirichlet process mixture (DPM) can automatically infer the model complexity from data. Hence it has attracted significant attention recently and is widely used for model selection and clustering. As a generative model, it generally requires a prior base distribution and learns component parameters by maximizing the posterior probability. In contrast, discriminative classifiers model the conditional probability directly and have yielded better results than generative classifiers. In this paper, we propose a maximum margin Dirichlet process mixture for clustering, which differs from the traditional DPM in how parameters are modeled. Our model takes a discriminative clustering approach, maximizing a conditional likelihood to estimate parameters. In particular, we adopt an EM-like algorithm that leverages Gibbs sampling for inference, which in turn can be naturally embedded in the online maximum margin learning procedure to update the model parameters. We test our model and show that it compares favorably with the traditional DPM and other nonparametric clustering approaches.
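For readers unfamiliar with the sampler being embedded, here is a minimal Gibbs sweep for a standard DP mixture of spherical Gaussians, i.e., the classical generative DPM rather than the max-margin variant proposed here. The plug-in predictive used for existing clusters is a common simplification of the exact collapsed predictive, and all hyperparameters are illustrative.

```python
import numpy as np

def dpm_gibbs(X, alpha=1.0, sigma=1.0, tau=3.0, iters=50, seed=0):
    """One-at-a-time Gibbs sweeps for a DP mixture of spherical Gaussians:
    likelihood N(mu_k, sigma^2 I), prior mu_k ~ N(0, tau^2 I). Uses a
    plug-in cluster mean for existing clusters (a simplification)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    z = np.zeros(n, dtype=int)                    # start with one cluster
    for _ in range(iters):
        for i in range(n):
            z[i] = -1                             # remove point i
            ks, counts = np.unique(z[z >= 0], return_counts=True)
            logp = []
            for k, nk in zip(ks, counts):
                mu = X[z == k].mean(axis=0)       # plug-in cluster mean
                logp.append(np.log(nk) - 0.5 * d * np.log(2 * np.pi * sigma**2)
                            - ((X[i] - mu) ** 2).sum() / (2 * sigma**2))
            v = sigma**2 + tau**2                 # new-cluster predictive variance
            logp.append(np.log(alpha) - 0.5 * d * np.log(2 * np.pi * v)
                        - (X[i] ** 2).sum() / (2 * v))
            p = np.exp(np.array(logp) - max(logp))
            choice = rng.choice(len(p), p=p / p.sum())
            z[i] = ks[choice] if choice < len(ks) else (ks.max() + 1 if len(ks) else 0)
    return z
```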
Random Gradient Descent Tree: A Combinatorial Approach for SVM with Outliers
Ding, Hu (State University of New York at Buffalo) | Xu, Jinhui (State University of New York at Buffalo)
Support Vector Machine (SVM) is a fundamental technique in machine learning. A long-standing challenge facing SVM is how to deal with outliers (caused by mislabeling), as they could make the classes in SVM non-separable. Existing techniques, such as soft margin SVM, ν-SVM, and Core-SVM, can alleviate the problem to a certain extent, but cannot completely resolve the issue. Recently, techniques for explicit outlier removal have also become available, but they suffer from high time complexity and cannot guarantee the quality of the solution. In this paper, we present a new combinatorial approach, called Random Gradient Descent Tree (or RGD-tree), to explicitly deal with outliers; this results in a new algorithm called RGD-SVM. Our technique yields a provably good solution and can be efficiently implemented for practical purposes. The time and space complexities of our approach depend only linearly on the input size and the dimensionality of the space, which is significantly better than those of existing methods. Experiments on benchmark datasets suggest that our technique considerably outperforms several popular techniques in most cases.
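To contrast with explicit outlier removal, the following is a common iterative trimming heuristic built on scikit-learn's linear SVM. It is emphatically not the RGD-tree algorithm (which carries quality guarantees this heuristic lacks); the trimming fraction and round count are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def trimmed_svm(X, y, outlier_frac=0.05, rounds=3, C=1.0):
    """Iterative trimming heuristic for SVM with label outliers (NOT the
    paper's RGD-tree): fit, drop the points with the worst margin
    violations, refit. Assumes binary labels y in {-1, +1}."""
    keep = np.ones(len(X), dtype=bool)
    n_drop = int(outlier_frac * len(X))
    clf = LinearSVC(C=C).fit(X, y)
    for _ in range(rounds):
        margins = y * clf.decision_function(X)      # signed functional margin
        margins[~keep] = np.inf                     # ignore already-dropped points
        keep[np.argsort(margins)[:n_drop]] = False  # drop the worst violators
        clf = LinearSVC(C=C).fit(X[keep], y[keep])
    return clf, keep
```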
Clustering-Based Collaborative Filtering for Link Prediction
Wang, Xiangyu (State University of New York at Buffalo) | He, Dayu (State University of New York at Buffalo) | Chen, Danyang (State University of New York at Buffalo) | Xu, Jinhui (State University of New York at Buffalo)
In this paper, we propose a novel collaborative filtering approach for predicting the unobserved links in a network (or graph) with both topological and node features. Our approach improves the well-known compressed-sensing-based matrix completion method by introducing a new multiple-independent-Bernoulli-distribution model as the data sampling mask. It makes better link predictions since the model is more general and better matches the data distributions in many real-world networks, such as social networks like Facebook. As a result, satisfactory stability of the prediction can be guaranteed. To obtain an accurate multiple-independent-Bernoulli-distribution model of the topological feature space, our approach adjusts the sampling of the adjacency matrix of the network (or graph) using the clustering information in the node feature space. This yields better performance than methods that simply combine the two types of features. Experimental results on several benchmark datasets suggest that our approach outperforms the best existing link prediction methods.
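The matrix completion backbone can be illustrated with a minimal SoftImpute-style numpy sketch. In the paper's approach, the observation mask would be shaped by a multiple-independent-Bernoulli model informed by node-feature clusters; here it is simply a given 0/1 array, and the rank and shrinkage values are illustrative.

```python
import numpy as np

def soft_impute(A_obs, mask, rank=20, shrink=1.0, iters=100):
    """SoftImpute-style completion of a partially observed adjacency
    matrix by iterative soft-thresholded SVD. Illustrative sketch."""
    Z = np.where(mask, A_obs, 0.0)
    Z_hat = Z
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - shrink, 0.0)[:rank]        # soft-threshold singular values
        Z_hat = (U[:, :rank] * s) @ Vt[:rank]         # low-rank estimate
        Z = np.where(mask, A_obs, Z_hat)              # keep observed entries fixed
    return Z_hat
```

The completed matrix Z_hat then scores each unobserved pair, with high entries predicted as links.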
Jointly Modeling Deep Video and Compositional Text to Bridge Vision and Language in a Unified Framework
Xu, Ran (State University of New York at Buffalo) | Xiong, Caiming (University of California, Los Angeles) | Chen, Wei (State University of New York at Buffalo) | Corso, Jason J. (University of Michigan)
Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring a language model on top of a fixed visual model. In this paper, we propose a unified framework that jointly models videos and their corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model, and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds a sentence into a continuous vector space, preserving visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. In the joint embedding model, we minimize the distance between the outputs of the deep video model and the compositional language model in the joint space, and update the two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, 2) video retrieval, and 3) language retrieval. In the experiments, the results show that our approach outperforms SVM, CRF, and CCA baselines in predicting Subject-Verb-Object triplets and in natural sentence generation, and is better than CCA in the video retrieval and language retrieval tasks.
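A standard way to train such a joint space is a bidirectional hinge ranking loss over matched video-sentence pairs. The PyTorch sketch below shows this generic formulation; the abstract describes minimizing distances between the two encoders' outputs, so the similarity-based loss and the margin value here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def joint_embedding_loss(vid_emb, txt_emb, margin=0.2):
    """Bidirectional hinge ranking loss over a batch of matched
    video-sentence pairs; vid_emb, txt_emb are (B, D) L2-normalized
    embeddings from the two encoders. Generic formulation, illustrative."""
    scores = vid_emb @ txt_emb.t()                 # (B, B) cosine similarities
    pos = scores.diag().unsqueeze(1)               # matched-pair scores, (B, 1)
    mask = 1.0 - torch.eye(scores.size(0), device=scores.device)
    # rank each video against all sentences, and each sentence against all videos
    loss_v = (F.relu(margin + scores - pos) * mask).mean()
    loss_t = (F.relu(margin + scores.t() - pos) * mask).mean()
    return loss_v + loss_t
```

Backpropagating this loss through both encoders is what updates the video and language models jointly.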
Latent Domains Modeling for Visual Domain Adaptation
Xiong, Caiming (State University of New York at Buffalo) | McCloskey, Scott (Honeywell ACS) | Hsieh, Shao-Hang (State University of New York at Buffalo) | Corso, Jason J. (State University of New York at Buffalo)
To improve robustness to significant mismatches between the source domain and target domain (arising from changes such as illumination, pose, and image quality), domain adaptation has become increasingly popular in computer vision. But most methods assume that the source data comes from a single domain, or that multi-domain datasets provide the domain label for each training instance. In practice, most datasets are mixtures of multiple latent domains, and it is difficult to manually provide the domain label for each data point. In this paper, we propose a model that automatically discovers latent domains in visual datasets. We first assume that the visual images are sampled from multiple manifolds, each of which represents a different domain and is represented by a different subspace. Using the neighborhood structure estimated from images belonging to the same category, we approximate the local linear invariant subspace for each image based on its local structure, eliminating the category-specific elements of the feature. Based on the effectiveness of this representation, we then propose a squared-loss mutual-information-based clustering model with a category distribution prior in each domain to infer the domain assignment of images. In experiments on two common image datasets, the results show that our method outperforms existing state-of-the-art methods and demonstrate the superiority of multiple latent domain discovery.
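The first stage of this pipeline, building a category-invariant local representation before clustering into domains, might be approximated schematically as below. Subtracting the same-category neighborhood mean is a crude stand-in for the paper's local subspace construction, and KMeans stands in for the squared-loss mutual information clustering with a category-distribution prior.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

def discover_latent_domains(X, labels, n_domains=2, k=10):
    """Schematic latent-domain discovery: strip category-specific
    structure using same-category local neighborhoods, then cluster the
    residual representations into domains. Heuristic sketch only."""
    reps = np.zeros_like(X, dtype=float)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        nn = NearestNeighbors(n_neighbors=min(k, len(idx))).fit(X[idx])
        _, nbrs = nn.kneighbors(X[idx])
        for row, nb in zip(idx, nbrs):
            # subtract the local same-category mean to suppress
            # category-specific components of the feature
            reps[row] = X[row] - X[idx[nb]].mean(axis=0)
    return KMeans(n_clusters=n_domains, n_init=10).fit_predict(reps)
```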
Finding Median Point-Set Using Earth Mover's Distance
Ding, Hu (State University of New York at Buffalo) | Xu, Jinhui (State University of New York at Buffalo)
In this paper, we study a prototype learning problem, called Median Point-Set, whose objective is to construct a prototype for a set of given point-sets so as to minimize the total Earth Mover's Distance (EMD) between the prototype and the point-sets, where the EMD between two point-sets is measured under affine transformation. For this problem, we present the first purely geometric approach. Compared to existing graph-based approaches (e.g., median graph, shock graph), our approach has several unique advantages: (1) no encoding and decoding procedures are needed to map between objects and graphs, thereby avoiding errors caused by information loss during the mappings; (2) staying only in the geometric domain makes our approach computationally more efficient and robust to noise. We evaluate the performance of our technique for prototype reconstruction on a random dataset and a benchmark dataset of handwritten Chinese characters. Experiments suggest that our technique considerably outperforms the existing graph-based methods.
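The flavor of the optimization can be conveyed with a small alternating heuristic. This simplification matches equal-size point-sets one-to-one via the Hungarian method (standing in for general EMD flows) and omits the affine alignment under which the paper's EMD is measured.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def median_point_set(point_sets, iters=20, seed=0):
    """Alternating heuristic for Median Point-Set under one-to-one EMD:
    match the prototype to each set (Hungarian method), then move each
    prototype point to the mean of its matched points. Assumes all
    point-sets have the same size; the affine alignment is omitted."""
    rng = np.random.default_rng(seed)
    proto = point_sets[rng.integers(len(point_sets))].copy()  # init prototype
    for _ in range(iters):
        acc = np.zeros_like(proto)
        for P in point_sets:
            cost = ((proto[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
            rows, cols = linear_sum_assignment(cost)   # min-cost matching
            acc[rows] += P[cols]
        proto = acc / len(point_sets)                  # prototype update
    return proto
```

Each iteration is a k-means-like step: fix the matchings and the mean of matched points is the optimal prototype update, so the total matching cost is non-increasing.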