Goto

Collaborating Authors

 Carter, Steele


It’s Not Just What You Say, But How You Say It: Multimodal Sentiment Analysis Via Crowdsourcing

AAAI Conferences

This paper examines the effect of various modalities of expression on the reliability of crowdsourced sentiment polarity judgments. A novel corpus of YouTube video reviews was created, and sentiment judgments were obtained via Amazon Mechanical Turk. We built a system for isolating the text, video, and audio modalities of YouTube videos so that annotators could see only the particular modality or modalities being evaluated. Reliability of judgments was assessed using Fleiss' kappa inter-annotator agreement values. We found that the audio-only modality produced the most reliable judgments for video fragments, and that, across modalities, video fragments are less ambiguous than full videos.
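For reference, agreement of this kind is conventionally computed as Fleiss' kappa over an items-by-categories count matrix. The sketch below is a minimal illustration of that standard formula, not the authors' code; the category labels and the toy counts (five hypothetical annotators per video fragment) are assumptions made purely for the example.

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for an (items x categories) count matrix.

    ratings[i, j] = number of annotators assigning item i to category j;
    every item must be rated by the same number of annotators.
    """
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()

    # Per-item agreement: proportion of annotator pairs that agree on the item.
    p_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement from the marginal category proportions.
    p_j = ratings.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()

    return (p_bar - p_e) / (1 - p_e)

# Toy example: 4 video fragments, 3 sentiment categories (negative / neutral /
# positive), 5 hypothetical annotators per fragment.
counts = np.array([
    [5, 0, 0],
    [1, 4, 0],
    [0, 1, 4],
    [2, 2, 1],
])
print(round(fleiss_kappa(counts), 3))
```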


Job Complexity and User Attention in Crowdsourcing Microtasks

AAAI Conferences

This paper examines the importance of presenting simple, intuitive tasks when conducting microtasking on crowdsourcing platforms. Most crowdsourcing platforms allow task creators to present instructions of any length to the crowd workers who participate in their tasks. Our experiments show, however, that most workers who participate in crowdsourcing microtasks do not read the instructions, even when they are very brief. To facilitate success in microtask design, we highlight the importance of making simple, easy-to-grasp tasks that do not rely on instructions for explanation.