Supervised Learning
On Structured Prediction Theory with Calibrated Convex Surrogate Losses
Osokin, Anton, Bach, Francis, Lacoste-Julien, Simon
We provide novel theoretical insights on structured prediction in the context of efficient convex surrogate loss minimization with consistency guarantees. For any task loss, we construct a convex surrogate that can be optimized via stochastic gradient descent, and we prove tight bounds on the so-called "calibration function" relating the excess surrogate risk to the actual risk. In contrast to prior related work, we carefully monitor the effect of the exponential number of classes on the learning guarantees as well as on the optimization complexity. As an interesting consequence, we formalize the intuition that some task losses make learning harder than others, and that the classical 0-1 loss is ill-suited for structured prediction.
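Below is a minimal sketch, in the spirit of the quadratic surrogate studied here but not the authors' exact construction, of how a calibrated convex surrogate for an arbitrary task-loss matrix can be minimized with stochastic gradient descent; the data, model, and hyperparameters are all illustrative.

```python
# Hedged sketch: SGD on a quadratic surrogate for a given task-loss matrix L.
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 3, 5, 2000                       # classes, features, examples

# Task-loss matrix: L[y_hat, y] = loss of predicting y_hat when truth is y.
# Here the 0-1 loss; any nonnegative matrix works.
L = 1.0 - np.eye(k)

# Synthetic, roughly linearly separable data.
W_true = rng.normal(size=(k, d))
X = rng.normal(size=(n, d))
y = (X @ W_true.T).argmax(axis=1)

# Quadratic surrogate phi(f, y) = 0.5 * ||f + L[:, y]||^2. Its population
# minimizer is f*(x) = -E[L[:, y] | x], so predicting argmax_c f_c(x)
# minimizes the expected task loss (i.e., the surrogate is calibrated).
W = np.zeros((k, d))
lr = 0.05
for epoch in range(20):
    for i in rng.permutation(n):
        f = W @ X[i]
        grad_f = f + L[:, y[i]]            # d phi / d f
        W -= lr * np.outer(grad_f, X[i])

y_hat = (X @ W.T).argmax(axis=1)
print("empirical task risk:", L[y_hat, y].mean())
```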
Aggressive Sampling for Multi-class to Binary Reduction with Applications to Text Classification
Joshi, Bikash, Amini, Massih R., Partalas, Ioannis, Iutzeler, Franck, Maximov, Yury
We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in the majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on the DMOZ and Wikipedia collections, with 10,000 to 100,000 classes, where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.
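A rough sketch of the reduction-plus-sampling idea follows, under simplifying assumptions: classes are represented by their centroids (a hypothetical joint feature map; the paper's is richer), and the exact sampling distributions are not reproduced.

```python
# Hedged sketch: multi-class -> binary reduction over (example, class) pairs,
# with (1) per-class subsampling to flatten the long tail and (2) negative-
# class sampling to limit the number of binary pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_classes, d = 50, 20
sizes = np.maximum((2000 / np.arange(1, n_classes + 1)).astype(int), 5)  # long tail
X = np.vstack([rng.normal(loc=c, scale=3.0, size=(s, d))
               for c, s in enumerate(sizes)])
y = np.repeat(np.arange(n_classes), sizes)

centroids = np.vstack([X[y == c].mean(axis=0) for c in range(n_classes)])

def pair_features(x, c):
    # Hypothetical joint representation of an (example, class) pair.
    return np.abs(x - centroids[c])

cap, n_neg = 30, 5                      # double sampling: examples, then negatives
feats, labels = [], []
for c in range(n_classes):
    idx = rng.permutation(np.flatnonzero(y == c))[:cap]
    for i in idx:
        negs = rng.choice(np.delete(np.arange(n_classes), c), n_neg, replace=False)
        for cn in negs:
            # One binary example per (true class, sampled wrong class) pair.
            feats.append(pair_features(X[i], c) - pair_features(X[i], cn))
            labels.append(1)
            feats.append(pair_features(X[i], cn) - pair_features(X[i], c))
            labels.append(0)

clf = LogisticRegression(max_iter=1000).fit(np.array(feats), np.array(labels))

def predict(x):
    # Rank classes by the learned pairwise score.
    s = np.array([clf.coef_[0] @ pair_features(x, c) for c in range(n_classes)])
    return s.argmax()

sample = rng.choice(len(X), 200, replace=False)
print("accuracy on a sample:", np.mean([predict(X[i]) == y[i] for i in sample]))
```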
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
Qi, Charles Ruizhongtai, Yi, Li, Su, Hao, Guibas, Leonidas J.
Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space the points live in, limiting its ability to recognize fine-grained patterns and its generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. Observing further that point sets are usually sampled with varying densities, which greatly decreases the performance of networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network, called PointNet++, is able to learn deep point set features efficiently and robustly. In particular, results significantly better than the state of the art have been obtained on challenging benchmarks of 3D point clouds.
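A compact sketch of one such hierarchical level is shown below: farthest point sampling, ball-query grouping, and a shared mini-PointNet (per-point MLP plus max pooling) per local region. Weights are random here; in the real network they are learned, and all layer sizes are illustrative.

```python
# Hedged sketch of a PointNet++-style "set abstraction" level.
import numpy as np

rng = np.random.default_rng(0)

def farthest_point_sampling(xyz, m):
    """Pick m centroids spread out over the point set."""
    n = xyz.shape[0]
    chosen = [rng.integers(n)]
    dist = np.full(n, np.inf)
    for _ in range(m - 1):
        dist = np.minimum(dist, np.linalg.norm(xyz - xyz[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))
    return np.array(chosen)

def ball_query(xyz, centroids, radius, k):
    """Up to k neighbor indices within `radius` of each centroid."""
    groups = []
    for c in centroids:
        idx = np.flatnonzero(np.linalg.norm(xyz - xyz[c], axis=1) < radius)[:k]
        groups.append(idx if idx.size else np.array([c]))
    return groups

def set_abstraction(xyz, feats, m, radius, k, out_dim):
    """One hierarchical level: m centroids with max-pooled local features."""
    centroids = farthest_point_sampling(xyz, m)
    groups = ball_query(xyz, centroids, radius, k)
    W = rng.normal(size=(feats.shape[1] + 3, out_dim))   # shared 1-layer "MLP"
    new_feats = []
    for c, idx in zip(centroids, groups):
        local = np.hstack([xyz[idx] - xyz[c], feats[idx]])  # translate to centroid
        h = np.maximum(local @ W, 0.0)                      # per-point ReLU
        new_feats.append(h.max(axis=0))                     # symmetric max pool
    return xyz[centroids], np.array(new_feats)

# Two stacked levels; the growing radius gives increasing contextual scale.
pts = rng.normal(size=(1024, 3))
xyz1, f1 = set_abstraction(pts, pts.copy(), m=256, radius=0.4, k=32, out_dim=64)
xyz2, f2 = set_abstraction(xyz1, f1, m=64, radius=0.8, k=32, out_dim=128)
print(xyz2.shape, f2.shape)   # (64, 3) (64, 128)
```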
PRUNE: Preserving Proximity and Global Ranking for Network Embedding
Lai, Yi-An, Hsu, Chin-Chi, Chen, Wen Hao, Yeh, Mi-Yen, Lin, Shou-De
We investigate an unsupervised generative approach for network embedding. A multi-task Siamese neural network structure is formulated to connect embedding vectors with our objective of preserving the global node ranking and the local proximity of nodes. We provide a deeper analysis connecting the proposed proximity objective to link prediction and community detection in the network. We show that our model satisfies the following design properties: scalability, asymmetry, unity, and simplicity. Experimental results not only verify the above design properties but also demonstrate superior performance in learning-to-rank, classification, regression, and link prediction tasks.
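As a toy illustration (not the paper's exact formulation) of the two terms such an objective couples, the snippet below evaluates a local proximity term on observed edges and a PageRank-style global-ranking consistency term on node scores; the embeddings and scores are random, and training them is omitted.

```python
# Hedged sketch of a joint proximity + global-ranking objective.
import numpy as np

rng = np.random.default_rng(2)
n, dim = 6, 4
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
out_deg = np.bincount([u for u, _ in edges], minlength=n)

# Asymmetric embeddings: separate source/target roles for directed edges.
S = rng.normal(size=(n, dim))
T = rng.normal(size=(n, dim))
r = np.abs(rng.normal(size=n))         # scalar global-ranking scores

def proximity_loss():
    # Push source/target embeddings of linked nodes to score highly.
    return sum((1.0 - S[u] @ T[v]) ** 2 for u, v in edges) / len(edges)

def ranking_loss():
    # PageRank-like consistency: a node's score should match the
    # out-degree-normalized scores flowing in from its in-neighbors.
    incoming = np.zeros(n)
    for u, v in edges:
        incoming[v] += r[u] / out_deg[u]
    return float(((r - incoming) ** 2).mean())

lam = 0.5                              # illustrative trade-off weight
print("joint objective:", proximity_loss() + lam * ranking_loss())
```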
Adversarial Structured Prediction for Multivariate Measures
Wang, Hong, Rezaei, Ashkan, Ziebart, Brian D.
Many predicted structured objects (e.g., sequences, matchings, trees) are evaluated using the F-score, alignment error rate (AER), or other multivariate performance measures. Since inductively optimizing these measures using training data is typically computationally difficult, empirical risk minimization of surrogate losses is employed instead, using, e.g., the hinge loss for (structured) support vector machines. These approximations often introduce a mismatch between the learner's objective and the desired application performance, leading to inconsistency. We take a different approach: approximating the training data adversarially while optimizing the exact F-score or AER. Structured predictions under this formulation result from solving zero-sum games between a predictor seeking the best performance and an adversary seeking the worst, with the adversary required to (approximately) match certain structured properties of the training data. We explore this approach for word alignment (AER evaluation) and named entity recognition (F-score evaluation) with linear-chain constraints.
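A small sketch of the game-solving step follows: computing, by linear programming, the predictor's mixed strategy over candidate structures that maximizes the worst-case expected F-score. The paper additionally constrains the adversary to match training statistics, which is omitted here for brevity, and the candidate structures are toy bit vectors rather than sequences or matchings.

```python
# Hedged sketch: value of a zero-sum F-score game via scipy's linprog.
import itertools
import numpy as np
from scipy.optimize import linprog

def f_score(y, z):
    tp = np.sum(y & z)
    return 0.0 if tp == 0 else 2.0 * tp / (y.sum() + z.sum())

# All binary "structures" of length 4 with at least one positive.
structs = [np.array(s) for s in itertools.product([0, 1], repeat=4) if any(s)]
m = len(structs)

# Payoff to the predictor (row player): F-score of its structure against the
# adversary's; the adversary (column player) tries to minimize it.
A = np.array([[f_score(p, a) for a in structs] for p in structs])

# Maximize v s.t. (A^T p)_j >= v for every adversary column j, p in simplex.
# Variables: [p_1..p_m, v]; linprog minimizes, so minimize -v.
c = np.zeros(m + 1); c[-1] = -1.0
A_ub = np.hstack([-A.T, np.ones((m, 1))])   # encodes v - (A^T p)_j <= 0
b_ub = np.zeros(m)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * m + [(None, None)])
print("game value (worst-case expected F-score):", res.x[-1])
```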
Fast kNN mode seeking clustering applied to active learning
Duin, Robert P. W., Verzakov, Sergey
A significantly faster algorithm is presented for the original kNN mode seeking procedure. It has the advantages over the well-known mean shift algorithm that it is feasible in high-dimensional vector spaces and results in unique, well-defined modes. Moreover, without any additional computational effort it may yield a multi-scale hierarchy of clusterings. The time complexity is just O(n^1.5); resulting computing times range from seconds for 10^4 objects to minutes for 10^5 objects and to less than an hour for 10^6 objects. The space complexity is just O(n). The procedure is well suited for finding large sets of small clusters and is thereby a candidate for analyzing thousands of clusters in millions of objects. The kNN mode seeking procedure can be used for active learning by assigning the clusters to the class of the modal objects of the clusters. Its feasibility is shown by some examples with up to 1.5 million handwritten digits. The obtained classification results based on the clusterings are compared with those obtained by the nearest neighbor rule and the support vector classifier trained on the same labeled objects. It can be concluded that using the clustering structure for classification can be significantly better than using the trained classifiers. A drawback of using the clustering for classification, however, is that no classifier is obtained that may be used for out-of-sample objects.
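A compact sketch of the classical kNN mode-seeking procedure that the paper accelerates is given below (the speed-up itself is not reproduced): density is estimated from the distance to the k-th neighbor, each point links to its densest neighbor, and pointer chains are followed to the modes.

```python
# Hedged sketch of plain kNN mode seeking; larger k gives coarser
# clusterings, which is how the multi-scale hierarchy arises.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_mode_seeking(X, k):
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    dist, idx = nn.kneighbors(X)             # idx[i] includes i itself
    density = 1.0 / (dist[:, -1] + 1e-12)    # kNN density estimate
    # Each point links to the highest-density point among its k neighbors.
    parent = idx[np.arange(len(X)), density[idx].argmax(axis=1)]
    # Follow pointers (with doubling) until fixed points, the modes.
    while True:
        grand = parent[parent]
        if np.array_equal(grand, parent):
            break
        parent = grand
    modes, labels = np.unique(parent, return_inverse=True)
    return labels, modes

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(200, 2)) for c in (0.0, 3.0, 6.0)])
for k in (10, 50, 200):
    labels, modes = knn_mode_seeking(X, k)
    print(f"k={k}: {len(modes)} clusters")
```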
Learning to Rank based on Analogical Reasoning
Fahandar, Mohsen Ahmadi, Hüllermeier, Eyke
Object ranking or "learning to rank" is an important problem in the realm of preference learning. On the basis of training data in the form of a set of rankings of objects represented as feature vectors, the goal is to learn a ranking function that predicts a linear order of any new set of objects. In this paper, we propose a new approach to object ranking based on principles of analogical reasoning. More specifically, our inference pattern is formalized in terms of so-called analogical proportions and can be summarized as follows: given objects $A,B,C,D$, if object $A$ is known to be preferred to $B$, and $C$ relates to $D$ as $A$ relates to $B$, then $C$ is (supposedly) preferred to $D$. Our method applies this pattern as its main building block and combines it with ideas and techniques from instance-based learning and rank aggregation. Based on initial experimental results for data sets from various domains (sports, education, tourism, etc.), we conclude that our approach is highly competitive. It appears to be specifically interesting in situations in which the objects come from different subdomains and hence require a kind of knowledge transfer.
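A minimal sketch of the analogy-based inference step is shown below, using the simple arithmetic proportion "A − B ≈ C − D" on feature vectors scaled to [0, 1]; the paper's exact proportion, nearest-neighbor weighting, and rank aggregation are richer than this toy version.

```python
# Hedged sketch: predicting a preference between C and D from known pairs.
import numpy as np

def analogy(a, b, c, d):
    # Per-attribute degree to which A relates to B as C relates to D,
    # averaged over attributes (1 = perfect analogical proportion).
    return float(np.mean(1.0 - np.abs((a - b) - (c - d)) / 2.0))

def predict_preference(train_pairs, c, d, top=5):
    """train_pairs: list of (a, b) with a known to be preferred to b.
    Returns a score > 0 if C is predicted to be preferred to D."""
    votes = []
    for a, b in train_pairs:
        # A:B :: C:D supports C > D; A:B :: D:C supports D > C.
        votes.append((analogy(a, b, c, d), +1))
        votes.append((analogy(a, b, d, c), -1))
    votes.sort(reverse=True)                  # keep the strongest analogies
    return sum(sign * s for s, sign in votes[:top])

# Toy domain: objects in [0,1]^3 where a higher feature sum is preferred.
rng = np.random.default_rng(3)
objs = rng.random((40, 3))
train_pairs = []
for _ in range(100):
    i, j = rng.choice(40, 2, replace=False)
    a, b = objs[i], objs[j]
    train_pairs.append((a, b) if a.sum() > b.sum() else (b, a))

c, d = rng.random(3), rng.random(3)
score = predict_preference(train_pairs, c, d)
print("predict C > D" if score > 0 else "predict D > C",
      "| ground truth:", "C > D" if c.sum() > d.sum() else "D > C")
```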
Inductive Representation Learning in Large Attributed Graphs
Ahmed, Nesreen K., Rossi, Ryan A., Zhou, Rong, Lee, John Boaz, Kong, Xiangnan, Willke, Theodore L., Eldardiry, Hoda
Graphs (networks) are ubiquitous and allow us to model entities (nodes) and the dependencies (edges) between them. Learning a useful feature representation from graph data lies at the heart of the success of many machine learning tasks such as classification, anomaly detection, and link prediction, among many others. Many existing techniques use random walks as a basis for learning features or estimating the parameters of a graph model for a downstream prediction task. Examples include recent node embedding methods such as DeepWalk and node2vec, as well as graph-based deep learning algorithms. However, the simple random walk used by these methods is fundamentally tied to the identity of the node. This has three main disadvantages. First, these approaches are inherently transductive and do not generalize to unseen nodes and other graphs. Second, they are not space-efficient, as a feature vector is learned for each node, which is impractical for large graphs. Third, most of these approaches lack support for attributed graphs. To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of an attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathbf{x} \rightarrow w$ that maps a node attribute vector $\mathbf{x}$ to a type $w$. This framework serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many other previous methods that leverage traditional random walks.
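Below is a minimal sketch of the attributed-random-walk idea: a hypothetical choice of $\Phi$ (k-means over node attribute vectors) maps each node to a type, and walks emit type sequences instead of node identities, so the learned contexts transfer to unseen nodes and graphs. Feeding the walks to a skip-gram model, as DeepWalk and node2vec do with node ids, is omitted.

```python
# Hedged sketch of attributed random walks over types rather than node ids.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n, d, n_types = 30, 8, 4
attrs = rng.normal(size=(n, d))                  # node attribute vectors x

# Random undirected graph (self-loop fallback avoids isolated nodes).
adj = [set() for _ in range(n)]
for _ in range(80):
    u, v = rng.choice(n, 2, replace=False)
    adj[u].add(v); adj[v].add(u)
adj = [sorted(s) if s else [i] for i, s in enumerate(adj)]

# Phi: x -> w. Any function of the attributes works; k-means is one
# simple, illustrative choice.
phi = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit(attrs)
node_type = phi.labels_

def attributed_walk(start, length):
    walk, u = [], start
    for _ in range(length):
        walk.append(int(node_type[u]))           # emit the type, not the id
        u = int(rng.choice(adj[u]))
    return walk

for _ in range(5):
    print(attributed_walk(rng.integers(n), 10))
# A new node only needs attributes: Phi assigns it a type via
# phi.predict(new_attrs), which is what makes the representation inductive.
```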