Collaborating Authors

 Dumitrache, Anca


Reports of the Workshops Held at the Sixth AAAI Conference on Human Computation and Crowdsourcing

AI Magazine

The Workshop Program of the Association for the Advancement of Artificial Intelligence’s Sixth AAAI Conference on Human Computation and Crowdsourcing was held on the campus of the University of Zurich in Zurich, Switzerland, on 5 July 2018. There were three full-day workshops in the program: CrowdBias: Disentangling the Relation between Crowdsourcing and Bias Management; Subjectivity, Ambiguity, and Disagreement in Crowdsourcing; and Work in the Age of Intelligent Machines; a three-quarter-day workshop, Advancing Human Computation with Complexity Science; and a quarter-day Project Networking workshop. This report contains summaries of three of the events.


Crowdsourcing Semantic Label Propagation in Relation Classification

arXiv.org Artificial Intelligence

Distant supervision is a popular method for performing relation extraction from text that is known to produce noisy labels. Most progress in relation extraction and classification has been made with crowdsourced corrections to distant-supervised labels, and there is evidence that still more corrections would help. In this paper, we explore the problem of propagating human annotation signals gathered for open-domain relation classification through the CrowdTruth crowdsourcing methodology, which captures ambiguity in annotations by measuring inter-annotator disagreement. Our approach propagates annotations to sentences that are similar in a low-dimensional embedding space, expanding the number of labels by two orders of magnitude. Our experiments show significant improvement in a sentence-level, multi-class relation classifier.
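
The following is a minimal sketch of the kind of embedding-based propagation the abstract describes, assuming sentences have already been embedded with some sentence encoder and that the crowd labels are soft, CrowdTruth-style score vectors over relations; the function names, the nearest-neighbour rule, and the similarity threshold are illustrative assumptions, not the paper's exact method.

    import numpy as np

    def propagate_labels(labeled_emb, labeled_scores, unlabeled_emb, threshold=0.8):
        """Propagate soft relation scores from crowd-annotated sentences to
        unlabeled sentences that lie close to them in embedding space.

        labeled_emb:    (n, d) embeddings of annotated sentences
        labeled_scores: (n, k) soft scores over k relations (CrowdTruth-style)
        unlabeled_emb:  (m, d) embeddings of unannotated sentences
        threshold:      minimum cosine similarity required to copy a label (assumed value)
        """
        # Normalize rows so that dot products are cosine similarities.
        a = labeled_emb / np.linalg.norm(labeled_emb, axis=1, keepdims=True)
        b = unlabeled_emb / np.linalg.norm(unlabeled_emb, axis=1, keepdims=True)
        sim = b @ a.T  # (m, n) cosine similarities

        propagated = []
        for i in range(sim.shape[0]):
            j = int(np.argmax(sim[i]))  # nearest annotated sentence
            if sim[i, j] >= threshold:
                # Weight the neighbour's soft scores by how similar the sentences are.
                propagated.append((i, j, sim[i, j] * labeled_scores[j]))
        return propagated

Under such a scheme, only unannotated sentences sufficiently close to an annotated one receive a (similarity-weighted) copy of its soft label; loosening the threshold trades label precision for the order-of-magnitude expansion in training data that the abstract reports.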


Capturing Ambiguity in Crowdsourcing Frame Disambiguation

AAAI Conferences

FrameNet is a computational linguistics resource composed of semantic frames, high-level concepts that represent the meanings of words. In this paper, we present an approach to gather frame disambiguation annotations in sentences using a crowdsourcing approach with multiple workers per sentence to capture inter-annotator disagreement. We perform an experiment over a set of 433 sentences annotated with frames from the FrameNet corpus, and show that the aggregated crowd annotations achieve an F1 score greater than 0.67 compared to expert linguists. We highlight cases where the crowd annotation was correct even where the expert disagreed, arguing for the need to have multiple annotators per sentence. Most importantly, we examine cases in which crowd workers could not agree, and demonstrate that these cases exhibit ambiguity, either in the sentence, the frame, or the task itself, and argue that collapsing such cases to a single, discrete truth value (i.e., correct or incorrect) is inappropriate, creating arbitrary targets for machine learning.
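
A hedged sketch of how multiple workers' frame choices for a sentence can be aggregated into graded scores rather than collapsed to a single correct/incorrect label, in the spirit of CrowdTruth-style disagreement metrics; the scoring scheme, the ambiguity cutoffs, and the example data below are illustrative assumptions, not the paper's exact formulas.

    from collections import Counter

    def frame_sentence_scores(worker_annotations):
        """Turn per-worker frame choices for one sentence into graded scores.

        worker_annotations: list of sets, one per worker, each containing the
        frame labels that worker selected for the sentence.
        Returns a dict mapping frame -> fraction of workers who chose it.
        """
        n_workers = len(worker_annotations)
        counts = Counter(f for ann in worker_annotations for f in ann)
        return {frame: c / n_workers for frame, c in counts.items()}

    def is_ambiguous(scores, low=0.3, high=0.7):
        """Flag sentences where no frame reaches clear agreement (assumed cutoffs)."""
        return bool(scores) and all(low <= s <= high for s in scores.values())

    # Example: five workers annotate one sentence; 'Motion' and 'Travel' split the vote.
    annotations = [{"Motion"}, {"Motion", "Travel"}, {"Travel"}, {"Motion"}, {"Travel"}]
    scores = frame_sentence_scores(annotations)
    print(scores)                # {'Motion': 0.6, 'Travel': 0.6}
    print(is_ambiguous(scores))  # True: disagreement is treated as a signal of ambiguity

Keeping the per-frame scores rather than forcing a single winner preserves exactly the cases the abstract argues should not be collapsed into discrete truth values.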