Crowdsourcing Multimodal Dialog Interactions: Lessons Learned from the HALEF Case

Ramanarayanan, Vikram (Educational Testing Service) | Suendermann-Oeft, David (Educational Testing Service) | Molloy, Hillary (Educational Testing Service) | Tsuprun, Eugene (Educational Testing Service) | Lange, Patrick (Educational Testing Service) | Evanini, Keelan (Educational Testing Service)

AAAI Conferences 

The advent of multiple crowdsourcing vendors and software infrastructure has greatly helped this effort. Several providers also offer integrated filtering tools that allow users to customize different aspects of their data collection, including target population, geographical location, demographics and sometimes even education level and expertise. Managed crowdsourcing providers extend these options by offering further customization and end-to-end management of the entire data collection operation. A study on crowdsourcing for speech applications concluded that "although the crowd sometimes approached the level of the experts, it never surpassed it" (Parent and Eskenazi 2011). This is exacerbated during multimodal dialog data collections, where it becomes harder to quality-control for usable audio-video data, due to a variety of factors including poor visual quality caused by variable lighting, position, or occlusions, participant or administrator error, or technical issues with the system or network (McDuff, Kaliouby, and …
