Unsupervised or Indirectly Supervised Learning


On Consistency of Graph-based Semi-supervised Learning

arXiv.org Machine Learning

Graph-based semi-supervised learning is one of the most popular methods in machine learning. Some of its theoretical properties, such as bounds on the generalization error and the convergence of the graph Laplacian regularizer, have been studied in the computer science and statistics literature. However, a fundamental statistical property, the consistency of the estimator produced by this method, has not been proved. In this article, we study the consistency problem under a non-parametric framework. We prove the consistency of graph-based learning in the case where the estimated scores are constrained to equal the observed responses on the labeled data. The sample sizes of both labeled and unlabeled data are allowed to grow in this result. When the estimated scores are not required to equal the observed responses, a tuning parameter is used to balance the loss function and the graph Laplacian regularizer. We give a counterexample demonstrating that the estimator in this case can be inconsistent. The theoretical findings are supported by numerical studies.
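The two regimes contrasted in this abstract, scores clamped to the observed responses versus a tuning parameter trading the loss off against the Laplacian regularizer, can be written down in a few lines. The sketch below is a minimal NumPy illustration of the standard harmonic-solution and soft-penalty estimators, not the paper's own code; the function names and the toy chain graph are invented for the example.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def hard_constrained_scores(W, y, labeled):
    """Harmonic solution: scores are clamped to the observed responses on the
    labeled nodes and solve L_uu f_u = -L_ul y_l on the unlabeled ones."""
    L = graph_laplacian(W)
    unlabeled = ~labeled
    f = np.array(y, dtype=float)
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    L_ul = L[np.ix_(unlabeled, labeled)]
    f[unlabeled] = np.linalg.solve(L_uu, -L_ul @ y[labeled])
    return f

def regularized_scores(W, y, labeled, lam):
    """Soft version: minimize sum over labeled i of (f_i - y_i)^2 + lam * f^T L f,
    whose minimizer is f = (S + lam * L)^(-1) S y with S = diag(labeled)."""
    L = graph_laplacian(W)
    S = np.diag(labeled.astype(float))
    return np.linalg.solve(S + lam * L, S @ y)

# toy example: a 4-node chain with the two endpoints labeled
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 0.0, -1.0])            # only the endpoints are observed
labeled = np.array([True, False, False, True])

print(hard_constrained_scores(W, y, labeled))  # interpolates between the two labels
print(regularized_scores(W, y, labeled, lam=0.1))
```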



Unsupervised Learning in Python

#artificialintelligence

Say you have a collection of customers with a variety of characteristics such as age, location, and financial history, and you wish to discover patterns and sort them into clusters. Or perhaps you have a set of texts, such as Wikipedia pages, and you wish to segment them into categories based on their content. This is the world of unsupervised learning, so called because you are not guiding, or supervising, the pattern discovery with some prediction task, but instead uncovering hidden structure from unlabeled data. Unsupervised learning encompasses a variety of techniques in machine learning, from clustering to dimension reduction to matrix factorization. In this course, you'll learn the fundamentals of unsupervised learning and implement the essential algorithms using scikit-learn and scipy.
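For a flavor of the scikit-learn workflow such a course walks through, here is a minimal clustering sketch; the toy customer table and the choice of three clusters are invented for illustration and are not taken from the course itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# toy customer table: age, yearly spend, number of purchases (made-up numbers)
X = np.array([[23, 1200, 14],
              [25, 1500, 18],
              [47,  300,  3],
              [52,  450,  4],
              [31, 5200, 40],
              [29, 4800, 36]], dtype=float)

# scale the features so no single column dominates the distance computation
X_scaled = StandardScaler().fit_transform(X)

# group the customers into 3 clusters without any labels, i.e. unsupervised
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_scaled)
print(labels)  # cluster index assigned to each customer
```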


Unsupervised learning of 3D structure from images

#artificialintelligence

Earlier this week we looked at how deep nets can learn intuitive physics given an input of objects and the relations between them. If only there were some way to look at a 2D scene (e.g., an image from a camera) and build a 3D model of the objects in it and their relationships… Today's paper choice is a big step in that direction, learning the 3D structure of objects from 2D observations. The 2D projection of a scene is a complex function of the attributes and positions of the camera, lights and objects that make up the scene. If endowed with 3D understanding, agents can abstract away from this complexity to form stable disentangled representations, e.g., recognizing that a chair is a chair whether seen from above or from the side, under different lighting conditions, or under partial occlusion. Moreover, such representations would allow agents to determine downstream properties of these elements more easily and with less training, e.g., enabling intuitive physical reasoning… The approach described in this paper uses an unsupervised deep learning end-to-end model and "demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner."


Semi-supervised Learning for Discrete Choice Models

arXiv.org Machine Learning

We introduce a semi-supervised discrete choice model to calibrate discrete choice models when relatively few requests have both choice sets and stated preferences but the majority have only the choice sets. Two classic semi-supervised learning algorithms, the expectation-maximization algorithm and the cluster-and-label algorithm, have been adapted to our choice modeling problem setting. We also develop two new algorithms based on the cluster-and-label algorithm. The new algorithms use the Bayesian Information Criterion to evaluate a clustering configuration and automatically adjust the number of clusters. Two computational studies, a hotel booking case and a large-scale airline itinerary shopping case, are presented to evaluate the prediction accuracy and computational effort of the proposed algorithms. Algorithmic recommendations are rendered under various scenarios.
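The BIC-driven choice of the number of clusters mentioned above can be illustrated with a small sketch. It uses scikit-learn's GaussianMixture as a stand-in clustering model, which is an assumption: the paper's algorithms cluster choice data, and the helper pick_clusters_by_bic and the synthetic features below are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# stand-in features for requests that only have choice sets (no stated preference)
X_unlabeled = np.vstack([rng.normal(0, 1, size=(100, 4)),
                         rng.normal(4, 1, size=(100, 4))])

def pick_clusters_by_bic(X, max_k=8):
    """Fit mixtures with 1..max_k components and keep the one with the
    lowest Bayesian Information Criterion."""
    best_model, best_bic = None, np.inf
    for k in range(1, max_k + 1):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model

model = pick_clusters_by_bic(X_unlabeled)
print(model.n_components, model.predict(X_unlabeled)[:10])
```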


Real-Time Indoor Localization in Smart Homes Using Semi-Supervised Learning

AAAI Conferences

Long-term automated monitoring of residential or small industrial properties is an important task within the broader scope of human activity recognition. We present a device-free wifi-based localization system for smart indoor spaces, developed in a collaboration between McGill University and Aerial Technologies. The system relies on existing wifi network signals and semi-supervised learning, in order to automatically detect entrance into a residential unit, and track the location of a moving subject within the sensing area. The implemented real-time monitoring platform works by detecting changes in the characteristics of the wifi signals collected via existing off-the-shelf wifi-enabled devices in the environment. This platform has been deployed in several apartments in the Montreal area, and the results obtained show the potential of this technology to turn any regular home with an existing wifi network into a smart home equipped with intruder alarm and room-level location detector. The machine learning component has been devised so as to minimize the need for user annotation and overcome temporal instabilities in the input signals. We use a semi-supervised learning framework which works in two phases. First, we build a base learner for mapping wifi signals to different physical locations in the environment from a small amount of labeled data; during its lifetime, the learner automatically re-trains when the uncertainty level rises significantly, without the need for further supervision. This paper describes the technical and practical issues arising in the design and implementation of such a system for real residential units, and illustrates its performance during on-going deployment.
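The two-phase scheme described above, a base learner trained on a small labeled set followed by automatic re-training when uncertainty rises, can be sketched roughly as follows. This is a hedged simplification and not the deployed system: the class name, the entropy-based uncertainty trigger, and the use of logistic regression are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class UncertaintyRetrainingLocalizer:
    """Sketch of the two phases: fit a base learner on a small labeled set,
    then re-fit on confidently self-labeled samples whenever the average
    prediction uncertainty (entropy) rises above a threshold."""

    def __init__(self, entropy_threshold=0.5):
        self.model = LogisticRegression(max_iter=1000)
        self.entropy_threshold = entropy_threshold
        self.X_pool, self.y_pool = None, None

    def fit_base(self, X_labeled, y_labeled):
        # phase 1: base learner mapping wifi features to room labels
        self.X_pool, self.y_pool = X_labeled.copy(), y_labeled.copy()
        self.model.fit(self.X_pool, self.y_pool)

    def predict_and_adapt(self, X_stream):
        # phase 2: predict, and self-retrain when uncertainty gets high
        proba = self.model.predict_proba(X_stream)
        labels = self.model.classes_[proba.argmax(axis=1)]
        entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
        if entropy.mean() > self.entropy_threshold:
            confident = entropy < self.entropy_threshold
            if confident.any():
                # grow the pool with confident pseudo-labels, then re-train
                self.X_pool = np.vstack([self.X_pool, X_stream[confident]])
                self.y_pool = np.concatenate([self.y_pool, labels[confident]])
                self.model.fit(self.X_pool, self.y_pool)
        return labels
```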


Universum Prescription: Regularization Using Unlabeled Data

AAAI Conferences

This paper shows that simply prescribing "none of the above" labels to unlabeled data has a beneficial regularization effect on supervised learning. We call it universum prescription because the prescribed label cannot be any of the supervised labels. In spite of its simplicity, universum prescription obtained competitive results in training deep convolutional networks for the CIFAR-10, CIFAR-100, STL-10 and ImageNet datasets. A qualitative justification of these approaches using Rademacher complexity is presented. The effect of a regularization parameter, the probability of sampling from unlabeled data, is also studied empirically.
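A minimal sketch of the prescription itself, under the assumption that training proceeds in mini-batches: each batch slot is drawn from the unlabeled pool with probability p and given the extra "none of the above" class. The helper universum_batch and the toy arrays are hypothetical; the paper applies this idea inside deep convolutional network training rather than on random vectors.

```python
import numpy as np

def universum_batch(X_lab, y_lab, X_unlab, n_classes, batch_size, p, rng):
    """Build one training batch: with probability p a slot comes from the
    unlabeled pool and receives the extra label n_classes ('none of the
    above'); otherwise it comes from the labeled pool with its true label."""
    xs, ys = [], []
    for _ in range(batch_size):
        if rng.random() < p:
            i = rng.integers(len(X_unlab))
            xs.append(X_unlab[i])
            ys.append(n_classes)          # the prescribed universum label
        else:
            i = rng.integers(len(X_lab))
            xs.append(X_lab[i])
            ys.append(y_lab[i])
    return np.stack(xs), np.array(ys)

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(500, 32)); y_lab = rng.integers(0, 10, size=500)
X_unlab = rng.normal(size=(2000, 32))
Xb, yb = universum_batch(X_lab, y_lab, X_unlab, n_classes=10, batch_size=64, p=0.2, rng=rng)
print(Xb.shape, np.bincount(yb, minlength=11))  # class 10 holds the universum samples
```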


Fast Generalized Distillation for Semi-Supervised Domain Adaptation

AAAI Conferences

Semi-supervised domain adaptation (SDA) is a typical setting for domain adaptation in real applications. How to effectively utilize the unlabeled data is an important issue in SDA. Previous work requires access to the source data to measure the data distribution mismatch, which is ineffective when the size of the source data is relatively large. In this paper, we propose a new paradigm, called Generalized Distillation Semi-supervised Domain Adaptation (GDSDA). We show that, without accessing the source data, GDSDA can effectively utilize the unlabeled data to transfer knowledge from the source models. Then we propose GDSDA-SVM, which uses SVM as the base classifier and can efficiently solve the SDA problem. Experimental results show that GDSDA-SVM can effectively utilize the unlabeled data to transfer knowledge between different domains under the SDA setting.
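The distillation step described above, using frozen source models to label the unlabeled target data without ever touching the source data, can be sketched as follows. This is a simplification: it uses confidence-weighted hard pseudo-labels and a logistic-regression student rather than the paper's GDSDA-SVM formulation, and the temperature-softening helper is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def soften(proba, T=2.0):
    """Temperature-soften a probability matrix by re-normalizing p^(1/T)."""
    p = proba ** (1.0 / T)
    return p / p.sum(axis=1, keepdims=True)

def distill_to_target(source_model, X_tgt_unlab, X_tgt_lab, y_tgt_lab, T=2.0):
    """Train a target-domain student without access to the source data: the
    frozen source model scores the unlabeled target samples, and the student
    is fit on its own few labels plus confidence-weighted pseudo-labels
    (a simplification of matching the full soft targets)."""
    soft = soften(source_model.predict_proba(X_tgt_unlab), T=T)
    pseudo = source_model.classes_[soft.argmax(axis=1)]
    imitation_weight = soft.max(axis=1)            # how much to trust each pseudo-label
    X = np.vstack([X_tgt_lab, X_tgt_unlab])
    y = np.concatenate([y_tgt_lab, pseudo])
    w = np.concatenate([np.ones(len(y_tgt_lab)), imitation_weight])
    student = LogisticRegression(max_iter=1000)
    student.fit(X, y, sample_weight=w)
    return student
```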


Unsupervised Learning of Multi-Level Descriptors for Person Re-Identification

AAAI Conferences

In this paper, we propose a novel coding method named weighted linear coding (WLC) to learn multi-level (e.g., pixel-level, patch-level and image-level) descriptors from raw pixel data in an unsupervised manner. It guarantees the property of saliency with a similarity constraint. The resulting multi-level descriptors strike a good balance between robustness and distinctiveness. Based on WLC, all data from the same region can be jointly encoded, so the holistic image features preserve spatial consistency. Furthermore, we apply PCA to these features to obtain compact person representations. During the matching stage, we exploit the complementary information residing in the multi-level descriptors via a score-level fusion strategy. Experiments on the challenging person re-identification datasets VIPeR and CUHK01 demonstrate the effectiveness of our method.
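The last two stages mentioned in the abstract, PCA compaction and score-level fusion across descriptor levels, can be sketched as below. The sketch deliberately skips the WLC coding step itself; the function name, the cosine-similarity scoring, and the equal fusion weights are assumptions made only for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

def compact_and_score(gallery_levels, query_levels, n_components=64, fusion_weights=None):
    """Project each level's descriptors with PCA, score query-vs-gallery
    matches per level by cosine similarity, and fuse at the score level."""
    n_levels = len(gallery_levels)
    weights = fusion_weights or [1.0 / n_levels] * n_levels
    fused = 0.0
    for w, G, Q in zip(weights, gallery_levels, query_levels):
        pca = PCA(n_components=min(n_components, *G.shape)).fit(G)
        fused = fused + w * cosine_similarity(pca.transform(Q), pca.transform(G))
    return fused  # (n_query, n_gallery) matrix of fused matching scores

# toy pixel/patch/image-level descriptors with random values
rng = np.random.default_rng(0)
gallery = [rng.normal(size=(200, d)) for d in (500, 300, 100)]
query = [rng.normal(size=(50, d)) for d in (500, 300, 100)]
print(compact_and_score(gallery, query).shape)  # (50, 200)
```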


Semi-Supervised Classifications via Elastic and Robust Embedding

AAAI Conferences

Transductive semi-supervised learning can only predict labels for the unlabeled data that appear in the training set; it cannot predict labels for test data never seen during training. To handle this out-of-sample problem, many inductive methods impose the constraint that the predicted label matrix be exactly equal to a linear model. In practice, this constraint may be too rigid to capture the manifold structure of the data. In this paper, we relax this rigid constraint and propose an elastic constraint on the predicted label matrix so that the manifold structure can be better explored. Moreover, since unlabeled data are often abundant in practice and usually contain some outliers, we use a non-squared loss instead of the traditional squared loss to learn a robust model. The derived problem, although convex, contains many nonsmooth terms, which makes it very challenging to solve. We propose an efficient optimization algorithm for a more general problem, based on which we find the optimal solution to the derived problem.
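To make the relaxation concrete, one plausible shape of such an objective is sketched below. This is an illustrative reconstruction from the abstract, not the paper's exact formulation: the symbols (predicted label matrix F, graph Laplacian L, linear model X^T W + 1 b^T, trade-off weights mu and gamma) and the specific row-wise norms are assumptions.

$$
\min_{F,\,W,\,b}\;\; \operatorname{tr}\!\left(F^{\top} L F\right)
\;+\; \mu \,\bigl\lVert F_{\mathcal{L}} - Y_{\mathcal{L}} \bigr\rVert_{2,1}
\;+\; \gamma \,\bigl\lVert F - X^{\top} W - \mathbf{1} b^{\top} \bigr\rVert_{2,1}
$$

Here $F_{\mathcal{L}}$ and $Y_{\mathcal{L}}$ denote the rows of $F$ and the observed labels for the labeled samples. The non-squared $\ell_{2,1}$ norms stand in for the robust loss the abstract mentions, and the last term elastically couples $F$ to the linear model instead of enforcing the rigid equality $F = X^{\top} W + \mathbf{1} b^{\top}$ used by earlier inductive methods.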