Learning a Concept Hierarchy from Multi-labeled Documents

Neural Information Processing Systems

While topic models can discover patterns of word usage in large corpora, it is difficult to meld this unsupervised structure with noisy, human-provided labels, especially when the label space is large. In this paper, we present a model, Label to Hierarchy (L2H), that can induce a hierarchy of user-generated labels and the topics associated with those labels from a set of multi-labeled documents. The model is robust enough to account for missing labels from untrained, disparate annotators and provide an interpretable summary of an otherwise unwieldy label set. We show empirically the effectiveness of L2H in predicting held-out words and labels for unseen documents. Papers published at the Neural Information Processing Systems Conference.


Hand labeling is the past. The future is #NoLabel AI

#artificialintelligence

Data labeling is so hot right now… but could this rapidly emerging market face disruption from a small team at Stanford and the Snorkel open source project, which enables highly efficient programmatic labeling that is 10 to 1,000x as efficient as hand labeling? We are witnessing a data labeling market explosion: labeling platforms have hit prime time. S&P Global released an October 11 report entitled *Avoiding Garbage in Machine Learning* in which it termed unlabeled data "garbage data" to highlight the importance of labeling in AI. The Economist recently noted that while spending on AI is growing from $38bn this year to $98bn in 2023, only 1 in 5 companies interested in AI has deployed Machine Learning models because of a shortage of labeled data. This is why "the market for data-labeling services may triple to $5bn by 2023."
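The programmatic labeling idea behind Snorkel can be illustrated with a toy sketch (this is not the real Snorkel API; all function and variable names here are illustrative). Instead of hand-labeling every example, you write small heuristic "labeling functions" that vote or abstain, then combine their votes:

```python
# Toy illustration of programmatic labeling in the spirit of Snorkel
# (not the real Snorkel API): write heuristic labeling functions that
# vote SPAM/HAM or abstain, then combine their votes per example.
from collections import Counter

SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_free(text):
    """Heuristic: promotional spam often says 'free'."""
    return SPAM if "free" in text.lower() else ABSTAIN

def lf_contains_meeting(text):
    """Heuristic: mail mentioning a meeting is probably legitimate."""
    return HAM if "meeting" in text.lower() else ABSTAIN

def lf_many_exclamations(text):
    """Heuristic: lots of '!' suggests spam."""
    return SPAM if text.count("!") >= 3 else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_free, lf_contains_meeting, lf_many_exclamations]

def weak_label(text):
    """Majority vote over non-abstaining labeling functions.

    Snorkel itself fits a generative label model that weighs each
    function by its estimated accuracy; majority vote is the simplest
    stand-in for that step.
    """
    votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

docs = [
    "FREE prizes!!! Click now!!!",
    "Agenda for tomorrow's project meeting",
    "Quarterly numbers attached",
]
labels = [weak_label(d) for d in docs]  # weak labels, no hand annotation
```

The efficiency claim comes from this leverage: one labeling function can label thousands of examples at once, which is where the "10 to 1,000x" figures originate.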


Learning from Complementary Labels

Neural Information Processing Systems

Collecting labeled data is costly and thus a critical bottleneck in real-world classification tasks. To mitigate this problem, we propose a novel setting, namely learning from complementary labels for multi-class classification. A complementary label specifies a class that a pattern does not belong to. Collecting complementary labels would be less laborious than collecting ordinary labels, since users do not have to carefully choose the correct class from a long list of candidate classes. However, complementary labels are less informative than ordinary labels and thus a suitable approach is needed to better learn from them.
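To make the setting concrete, here is a minimal sketch of a naive surrogate loss for a complementary label: penalize the probability mass the classifier puts on the class the pattern is known *not* to belong to. Note this is only the intuition; the paper derives a proper unbiased risk estimator, which this sketch does not reproduce.

```python
import math

def softmax(logits):
    """Convert raw scores to a class probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def naive_complementary_loss(logits, comp_label):
    """Naive surrogate: penalize probability on the complementary class.

    If the annotator says "not class comp_label", we want the model to
    assign that class low probability, so we minimize -log(1 - p[comp_label]).
    (Illustrative only; not the paper's unbiased estimator.)
    """
    p = softmax(logits)
    return -math.log(1.0 - p[comp_label])

# A model confident in class 0, told the example is NOT class 0,
# incurs a large loss; a model avoiding class 0 incurs a small one.
high_loss = naive_complementary_loss([10.0, 0.0, 0.0], comp_label=0)
low_loss = naive_complementary_loss([0.0, 0.0, 10.0], comp_label=0)
```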


Learning Hybrid Models for Image Annotation with Partially Labeled Data

Neural Information Processing Systems

Labeled data is difficult to obtain at scale for image annotation systems, which learn to assign class labels to image regions. We explore a hybrid model framework for utilizing partially labeled data that integrates a generative topic model for image appearance with discriminative label prediction. We propose three alternative formulations for imposing a spatial smoothness prior on the image labels. Tests of the new models and some baseline approaches on two real image datasets demonstrate the effectiveness of incorporating the latent structure.


New Machine Learning Research on Learning With Limited Labeled Data

#artificialintelligence

Cloudera Fast Forward Labs unveils a machine learning capability that opens up product possibilities. The new research report covers:

- An introduction to machine learning with limited labeled data, for both business and technical audiences
- Practical advice on machine learning, specifically around active learning, an approach that takes advantage of collaboration between humans and machines to smartly pick a small subset of data to be labeled
- Implications from both technical and ethical perspectives, along with the availability of tooling and a maturing supporting ecosystem

In addition to the report, an interactive web prototype that demonstrates various active learning strategies on multiple image datasets is available.
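The "smartly pick a small subset" step described above is often implemented as uncertainty sampling: the model requests human labels for the examples it is least sure about. A minimal sketch, with an illustrative stand-in for a real model:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = less sure)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def pick_queries(unlabeled, predict_proba, budget):
    """Return the `budget` unlabeled examples the model is most uncertain about.

    This is one round of an active-learning loop: label these, retrain,
    and repeat until the budget or performance target is reached.
    """
    ranked = sorted(unlabeled, key=lambda x: entropy(predict_proba(x)), reverse=True)
    return ranked[:budget]

# Illustrative stand-in for a trained classifier: confident on short
# texts, uncertain on long ones. A real loop would use model.predict_proba.
def fake_predict_proba(text):
    return [0.95, 0.05] if len(text) < 10 else [0.5, 0.5]

pool = ["cat", "a very long ambiguous document", "dog", "another hard borderline case"]
queries = pick_queries(pool, fake_predict_proba, budget=2)
```

The human labels only the returned `queries`, which is how active learning reaches a given accuracy with far fewer hand labels than labeling the whole pool.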