Supplementary Material for CLEVRER-Humans: Describing Physical and Causal Events the Human Way

Mao, Jiayuan, Yang, Xuelin

Neural Information Processing Systems

We bear all responsibility in case of violation of rights. The rest of this supplementary document is organized as follows. In Section C, we describe the user interface for dataset collection. On average, we obtain 29.4 descriptions per video, highlighting the advantage of our approach. CLEVRER-Humans contains dense annotations of causal relations between physical events. The outer circle represents the general event families. We have lemmatized all verbs to remove tense.


Crowdsourced human-based computational approach for tagging peripheral blood smear sample images from Sickle Cell Disease patients using non-expert users

Rubio, José María Buades, Moyà-Alcover, Gabriel, Jaume-i-Capó, Antoni, Petrović, Nataša

arXiv.org Artificial Intelligence

Supervised machine learning methods rely on tagged training data [1]. The more tagged training data that is available, the more accurately the model can learn to recognize patterns and generalize to unseen data. Crowdsourcing and Human-Based Computation (HBC) have become an increasingly popular approach for acquiring training labels in machine learning classification tasks, as they can be a cost-effective way to share the labeling effort among a large number of annotators. This approach can be particularly useful in cases where expert labeling is expensive or not feasible, or where a large amount of labeled data is needed to train a machine learning model [2]. There exist various tactics for human users to contribute their problem-solving skills [3]. Altruistic contribution: this strategy appeals to the altruistic nature of individuals willing to contribute their time and skills to solve problems for the common good [4-6]. Gamification: this strategy creates engaging and fun video games that incorporate problem-solving tasks [7-9].
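As a concrete illustration of how labels from many non-expert annotators can be pooled, a common baseline (not described in the paper itself; a generic sketch with hypothetical image IDs and labels) is simple majority voting with an agreement score:

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate per-item labels from multiple non-expert annotators.

    annotations: dict mapping item id -> list of labels, one per annotator.
    Returns dict mapping item id -> (winning label, agreement), where
    agreement is the fraction of annotators who chose the winning label.
    """
    consensus = {}
    for item, labels in annotations.items():
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        consensus[item] = (label, votes / len(labels))
    return consensus

# Hypothetical example: three crowd workers tag two blood-smear crops.
raw = {
    "img_001": ["sickle", "sickle", "normal"],
    "img_002": ["normal", "normal", "normal"],
}
print(majority_vote(raw))
```

Items with low agreement can then be routed to additional annotators or to an expert, which is one way such pipelines keep labeling cost down without sacrificing quality.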


The Mechanical Turkness: Tactical Media Art and the Critique of Corporate AI

Grba, Dejan

arXiv.org Artificial Intelligence

The extensive industrialization of artificial intelligence (AI) since the mid-2010s has increasingly motivated artists to address its economic and sociopolitical consequences. In this chapter, I discuss interrelated art practices that thematize creative agency, crowdsourced labor, and delegated artmaking to reveal the social rootage of AI technologies and underline the productive human roles in their development. I focus on works whose poetic features indicate broader issues of contemporary AI-influenced science, technology, economy, and society. By exploring the conceptual, methodological, and ethical aspects of their effectiveness in disrupting the political regime of corporate AI, I identify several problems that affect their tactical impact and outline potential avenues for tackling the challenges and advancing the field.


"HOT" ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media

Li, Lingyao, Fan, Lizhou, Atreja, Shubham, Hemphill, Libby

arXiv.org Artificial Intelligence

Harmful content is pervasive on social media, poisoning online communities and negatively impacting participation. A common approach to address this issue is to develop detection models that rely on human annotations. However, the tasks required to build such models expose annotators to harmful and offensive content and may require significant time and cost to complete. Generative AI models have the potential to understand and detect harmful content. To investigate this potential, we used ChatGPT and compared its performance with MTurker annotations for three frequently discussed concepts related to harmful content: Hateful, Offensive, and Toxic (HOT). We designed five prompts to interact with ChatGPT and conducted four experiments eliciting HOT classifications. Our results show that ChatGPT can achieve an accuracy of approximately 80% when compared to MTurker annotations. Specifically, the model displays a more consistent classification for non-HOT comments than HOT comments compared to human annotations. Our findings also suggest that ChatGPT classifications align with provided HOT definitions, but ChatGPT classifies "hateful" and "offensive" as subsets of "toxic." Moreover, the choice of prompts used to interact with ChatGPT impacts its performance. Based on these insights, our study provides several meaningful implications for employing ChatGPT to detect HOT content, particularly regarding the reliability and consistency of its performance, its understanding and reasoning of the HOT concept, and the impact of prompts on its performance. Overall, our study provides guidance about the potential of using generative AI models to moderate large volumes of user-generated content on social media.
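The prompt-based classification setup the abstract describes can be sketched as two small pieces: composing a classification prompt for a chat model, and parsing the model's reply into per-label flags. The wording and answer format below are illustrative assumptions, not the paper's actual five prompts:

```python
# Hypothetical sketch of prompt construction and response parsing for
# HOT (Hateful / Offensive / Toxic) classification of a comment.
HOT_LABELS = ("hateful", "offensive", "toxic")

def build_prompt(comment: str) -> str:
    """Compose a zero-shot classification prompt for a chat model."""
    return (
        "Classify the following social media comment. For each of the "
        "categories hateful, offensive, and toxic, answer yes or no.\n"
        f"Comment: {comment}\n"
        "Answer in the form: hateful=<yes/no>, offensive=<yes/no>, toxic=<yes/no>"
    )

def parse_response(text: str) -> dict:
    """Extract a yes/no flag for each HOT label from the model's reply."""
    lowered = text.lower()
    # A label is flagged only if the reply contains e.g. 'toxic=yes'.
    return {label: f"{label}=yes" in lowered for label in HOT_LABELS}

prompt = build_prompt("an example comment")
flags = parse_response("hateful=no, offensive=yes, toxic=yes")
print(flags)  # {'hateful': False, 'offensive': True, 'toxic': True}
```

The prompt string would be sent to the model via a chat API; comparing the parsed flags against MTurker annotations per label is then a straightforward accuracy computation, which is the kind of evaluation the abstract reports.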