Bayesian Bias Mitigation for Crowdsourcing

Neural Information Processing Systems

Biased labelers are a systemic problem in crowdsourcing, and a comprehensive toolbox for handling their responses is still being developed. A typical crowdsourcing application can be divided into three steps: data collection, data curation, and learning. At present these steps are often treated separately. We present Bayesian Bias Mitigation for Crowdsourcing (BBMC), a Bayesian model to unify all three. Most data curation methods account for the effects of labeler bias by modeling all labels as coming from a single latent truth. Our model captures the sources of bias by describing labelers as influenced by shared random effects. This approach can account for more complex bias patterns that arise in ambiguous or hard labeling tasks and allows us to merge data curation and learning into a single computation. Active learning integrates data collection with learning, but is commonly considered infeasible with Gibbs sampling inference. We propose a general approximation strategy for Markov chains to efficiently quantify the effect of a perturbation on the stationary distribution and specialize this approach to active learning. Experiments show BBMC to outperform many common heuristics.
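
The core modeling idea, labelers whose errors are correlated through shared random effects rather than independent noise around a single latent truth, can be illustrated with a small generative simulation. The sketch below is only an illustration, not the authors' BBMC implementation: the logistic link, the single scalar effect per labeler group, and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_labelers, n_groups = 200, 12, 3

# Latent ground truth for each item (binary task for simplicity).
truth = rng.integers(0, 2, size=n_items)

# Shared random effects: labelers in the same group are pulled toward
# the same systematic bias (assumption: one scalar effect per group).
group_of = rng.integers(0, n_groups, size=n_labelers)
group_effect = rng.normal(0.0, 1.0, size=n_groups)

# Per-labeler competence: higher values push responses toward the truth.
competence = rng.normal(1.5, 0.5, size=n_labelers)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Probability that labeler j reports "1" for item i: competence pulls
# toward the true label, the group effect adds a shared, correlated bias.
logits = (competence[None, :] * (2 * truth[:, None] - 1)
          + group_effect[group_of][None, :])
labels = (rng.random((n_items, n_labelers)) < sigmoid(logits)).astype(int)

# Majority vote stands in for single-latent-truth curation.
majority = (labels.mean(axis=1) > 0.5).astype(int)
print("majority-vote accuracy:", (majority == truth).mean())
```

When the group effects dominate individual competence, many labelers err in the same direction on the same items, which is exactly the regime where majority voting and other single-latent-truth curation methods degrade.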


Active Learning for Crowdsourcing Using Knowledge Transfer

AAAI Conferences

This paper studies the active learning problem in crowdsourcing settings, where multiple imperfect annotators with varying levels of expertise are available for labeling the data in a given task. Annotations collected from these labelers may be noisy and unreliable, and the quality of labeled data needs to be maintained for data mining tasks. Previous solutions have attempted to estimate individual users' reliability based on existing knowledge in each task, but for this to be effective each task requires a large quantity of labeled data to provide accurate estimates. In practice, annotation budgets for a given task are limited, so each instance can be presented to only a few users, each of whom can only label a few examples. To overcome data scarcity we propose a new probabilistic model that transfers knowledge from abundant unlabeled data in auxiliary domains to help estimate labelers' expertise. Based on this model we present a novel active learning algorithm that simultaneously (a) selects the most informative example and (b) queries its label from the labeler with the best expertise. Experiments on both text and image datasets demonstrate that our proposed method outperforms other state-of-the-art active learning methods.
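
A generic stand-in for the selection step is sketched below. This is not the paper's exact criterion: informativeness is approximated by predictive entropy, expertise by an estimated per-labeler accuracy (which the paper obtains via knowledge transferred from auxiliary unlabeled data, but which is assumed given here), and the joint choice decomposes into two argmaxes only because this toy score is separable.

```python
import numpy as np

def select_query(p_pos, labeler_acc):
    """Pick the (instance, labeler) pair to query next.

    p_pos       : (n_instances,) model's P(y=1 | x) on the unlabeled pool
    labeler_acc : (n_labelers,) estimated per-labeler accuracy
                  (assumed given; the paper estimates it via transfer
                  from auxiliary unlabeled data)
    """
    p = np.clip(p_pos, 1e-12, 1 - 1e-12)
    # Predictive entropy as a proxy for informativeness.
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    i = int(np.argmax(entropy))       # most informative instance
    j = int(np.argmax(labeler_acc))   # most reliable labeler
    return i, j

# Toy usage: four pool instances, three labelers.
instance, labeler = select_query(np.array([0.9, 0.55, 0.2, 0.7]),
                                 np.array([0.6, 0.85, 0.7]))
print(instance, labeler)  # -> 1 1 (the most uncertain instance, best labeler)
```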


Pattern Curators of the Cognitive Era

#artificialintelligence

Machine learning has a critical dependency on human learning. I'm not referring to data scientists, a class of learned humans who play an undeniably pivotal role in this new era. What I'm referring to are the legions of individuals who prepare training data to guide algorithms in their search for patterns of interest. Once the target patterns have been tagged and flagged by humans in the know, machine learning and other artificial intelligence (AI) algorithms can work their magic. Does it make sense to demean this job category that's essential to the cognitive era?


5 Approaches to Data Labeling for Machine Learning Projects

#artificialintelligence

The quality of a machine learning project comes down to how you handle three important factors: data collection, data preprocessing, and data labeling. Data labeling is integral because the labeled data is what teaches your model its task. However, data labeling is often time-consuming and complex. For example, image recognition systems often require bounding boxes drawn around specific objects, while product recommendation and sentiment analysis systems can require complex cultural knowledge for accurate data labeling. And don't forget that a dataset could contain tens of thousands of samples in need of labeling, if not more.
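
As a concrete illustration of the bounding-box case, a labeled image is often stored as coordinates plus a class name per object. The record below uses a hypothetical schema, loosely modeled on the common [x, y, width, height] convention; all field names and values are illustrative, not tied to any particular labeling tool.

```python
# Illustrative bounding-box annotation for one image (hypothetical schema).
annotation = {
    "image": "street_004.jpg",
    "objects": [
        {"label": "car",        "bbox": [34, 120, 220, 95]},   # x, y, w, h
        {"label": "pedestrian", "bbox": [310, 80, 45, 160]},
    ],
}
```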


A Full Probabilistic Model for Yes/No Type Crowdsourcing in Multi-Class Classification

arXiv.org Machine Learning

Crowdsourcing has become widely used in supervised scenarios where training sets are scarce and difficult to obtain. Most crowdsourcing models in the literature assume labelers can provide answers to full questions. In classification contexts, full questions require a labeler to discern among all possible classes. Unfortunately, discernment is not always easy in realistic scenarios. Labelers may not be experts in differentiating all classes. In this work, we provide a full probabilistic model for a shorter type of query, requiring only "yes" or "no" responses. Our model estimates a joint posterior distribution over labelers' confusion matrices and the class of every object. We develop an approximate inference approach using Monte Carlo sampling and Black Box Variational Inference, for which we derive the necessary gradients. We build two realistic crowdsourcing scenarios to test our model: the first involves queries about irregular astronomical time series, and the second relies on the image classification of animals. We achieve results comparable with those of full-query crowdsourcing. Furthermore, we show that modeling labelers' failures plays an important role in estimating true classes. Finally, we provide the community with two real datasets obtained from our crowdsourcing experiments. All our code is publicly available.
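
To make the link between yes/no responses and confusion matrices concrete, the sketch below computes the probability that a labeler answers "yes" to the question "does this object belong to class k?". The row-stochastic parameterization pi[t, s] = P(labeler perceives class s | true class t) is an assumption made for illustration and is not necessarily the paper's exact formulation.

```python
import numpy as np

def p_yes(pi, class_probs, k):
    """P(labeler answers "yes" to "is this object of class k?").

    pi          : (K, K) row-stochastic confusion matrix, where
                  pi[t, s] = P(perceived class s | true class t)
                  (assumed parameterization, for illustration)
    class_probs : (K,) current posterior over the object's true class
    k           : index of the class being asked about
    """
    # Marginalize over the unknown true class: the labeler says "yes"
    # whenever the class they perceive happens to be k.
    return float(class_probs @ pi[:, k])

K = 4
# Toy confusion matrix: 90% chance of perceiving the true class,
# remaining mass spread evenly over the other classes.
pi = np.full((K, K), 0.1 / (K - 1))
np.fill_diagonal(pi, 0.9)
class_probs = np.array([0.7, 0.1, 0.1, 0.1])
print(p_yes(pi, class_probs, 0))  # ~0.64, driven by the 0.7 mass on class 0
```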