 An, Jisun


Is ChatGPT better than Human Annotators? Potential and Limitations of ChatGPT in Explaining Implicit Hate Speech

arXiv.org Artificial Intelligence

Recent studies have warned that much online hate speech is implicit. Because of its subtle nature, the explainability of detecting such hateful speech has been a challenging problem. In this work, we examine whether ChatGPT can be used to provide natural language explanations (NLEs) for implicit hateful speech detection. We design our prompt to elicit concise ChatGPT-generated NLEs and conduct user studies to evaluate their quality by comparing them with human-written NLEs. We discuss the potential and limitations of ChatGPT in the context of implicit hateful speech research.
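For illustration, a minimal sketch of how such an explanation prompt might be issued to a chat model; the model name, prompt wording, and the explain_implicit_hate helper are assumptions for this sketch, not the authors' actual setup:

# Hypothetical sketch: requesting a concise natural language explanation
# (NLE) for a potentially implicitly hateful post. Prompt wording and
# model choice are assumptions, not the paper's exact design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_implicit_hate(post: str) -> str:
    prompt = (
        "The following post may contain implicit hate speech. In one or two "
        "sentences, state the implied meaning and why it could be hateful.\n\n"
        f'Post: "{post}"'
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content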


Chain of Explanation: New Prompting Method to Generate Higher Quality Natural Language Explanation for Implicit Hate Speech

arXiv.org Artificial Intelligence

Recent studies have exploited advanced generative language models to generate Natural Language Explanations (NLE) for why a certain text could be hateful, but the potential of sequence-to-sequence (Seq2Seq) models and prompting methods has not been fully explored [4]. Moreover, traditional evaluation metrics, such as BLEU [20] and ROUGE [18], applied to NLE generation for hate speech may not comprehensively capture the quality of the generated explanations because they heavily rely on word-level overlaps [3]. To fill those gaps, we propose the Chain of Explanation (CoE) prompting method, which uses heuristic words and the target group to generate high-quality NLE distinguishing implicit hate speech from non-hateful tweets. By providing accurate target information, we improved the BLEU score for NLE generation from 44.0 to 62.3. We then evaluate the quality of the generated NLE using various automatic metrics and human annotations.
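The exact CoE template is not given in this abstract; the following is a minimal sketch of a CoE-style prompt under stated assumptions (the field layout, wording, and example values are illustrative, not the paper's actual prompt):

def build_coe_prompt(tweet: str, heuristic_words: list[str], target_group: str) -> str:
    # Hypothetical Chain-of-Explanation style template: it supplies the
    # heuristic words and target group before asking for the explanation.
    return (
        f'Tweet: "{tweet}"\n'
        f"Heuristic words: {', '.join(heuristic_words)}\n"
        f"Target group: {target_group}\n"
        "Given the heuristic words and the target group, explain step by step "
        "why this tweet could be implicitly hateful, then give a concise "
        "one-sentence natural language explanation."
    )

prompt = build_coe_prompt(
    tweet="they should all go back to where they came from",
    heuristic_words=["go back"],
    target_group="immigrants",
)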


Reports of the Workshops Held at the 2018 International AAAI Conference on Web and Social Media

AI Magazine

The Workshop Program of the Association for the Advancement of Artificial Intelligence's 12th International Conference on Web and Social Media (ICWSM-18) was held at Stanford University, Stanford, California USA, on Monday, June 25, 2018. There were fourteen workshops in the program: Algorithmic Personalization and News: Risks and Opportunities; Beyond Online Data: Tackling Challenging Social Science Questions; Bridging the Gaps: Social Media, Use and Well-Being; Chatbot; Data-Driven Personas and Human-Driven Analytics: Automating Customer Insights in the Era of Social Media; Designed Data for Bridging the Lab and the Field: Tools, Methods, and Challenges in Social Media Experiments; Emoji Understanding and Applications in Social Media; Event Analytics Using Social Media Data; Exploring Ethical Trade-Offs in Social Media Research; Making Sense of Online Data for Population Research; News and Public Opinion; Social Media and Health: A Focus on Methods for Linking Online and Offline Data; Social Web for Environmental and Ecological Monitoring; and The ICWSM Science Slam. Workshops were held on the first day of the conference. Workshop participants met and discussed issues with a selected focus, providing an informal setting for active exchange among researchers, developers, and users on topics of current interest. Organizers from nine of the workshops submitted reports, which are reproduced in this report. Brief summaries of the other five workshops have been reproduced from their website descriptions.


Anatomy of Online Hate: Developing a Taxonomy and Machine Learning Models for Identifying and Classifying Hate in Online News Media

AAAI Conferences

Online social media platforms generally attempt to mitigate hateful expressions, as these comments can be detrimental to the health of the community. However, automatically identifying hateful comments can be challenging. We manually label 5,143 hateful expressions posted to YouTube and Facebook videos within a dataset of 137,098 comments from an online news outlet. We then create a granular taxonomy of different types and targets of online hate and train machine learning models to automatically detect and classify the hateful comments in the full dataset. Our contribution is twofold: 1) creating a granular taxonomy for hateful online comments that includes both types and targets of hateful comments, and 2) experimenting with machine learning models, including Logistic Regression, Decision Tree, Random Forest, AdaBoost, and Linear SVM, to generate a multiclass, multilabel classification model that automatically detects and categorizes hateful comments in the context of online news media. We find that the best performing model is Linear SVM, with an average F1 score of 0.79 using TF-IDF features. We validate the model by testing its predictive ability and, relatedly, provide insights on distinct types of hate speech taking place on social media.
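A minimal sketch of the kind of pipeline described, TF-IDF features feeding a Linear SVM in a multilabel setup; the comments, label names, and one-vs-rest wrapper are placeholder assumptions, not the paper's data or exact configuration:

# Placeholder data and labels; the paper's taxonomy and dataset differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

comments = ["example hateful comment", "an ordinary comment",
            "another slur-laden post", "benign reply"]
labels = [["ethnicity"], [], ["ethnicity", "gender"], []]

# Binarize the multilabel targets into an indicator matrix.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

# TF-IDF features with one Linear SVM per hate-target label.
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
model.fit(comments, y)

# Evaluated on the training data only to keep the sketch short.
pred = model.predict(comments)
print(f1_score(y, pred, average="macro"))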


Assessing the Accuracy of Four Popular Face Recognition Tools for Inferring Gender, Age, and Race

AAAI Conferences

In this research, we evaluate four widely used face recognition tools, Face++, IBM Bluemix Visual Recognition, AWS Rekognition, and Microsoft Azure Face API, using multiple datasets to determine their accuracy in inferring user attributes, including gender, race, and age. Results show that the tools are generally proficient at determining gender, with accuracy rates greater than 90%, except for IBM Bluemix. Concerning race, only one of the four tools, Face++, provides this capability, with an accuracy rate greater than 90%, although the evaluation was performed on a high-quality dataset. Inferring age appears to be a challenging problem, as all four tools performed poorly. The findings of our quantitative evaluation are helpful for future computational social science research using these tools, as their accuracy needs to be taken into account when they are applied to classify individuals on social media and in other contexts. Triangulation and manual verification are suggested for researchers employing these tools.


Revealing the Hidden Patterns of News Photos: Analysis of Millions of News Photos through GDELT and Deep Learning-based Vision APIs

AAAI Conferences

In this work, we analyze more than two million news photos published in January 2016. We demonstrate i) which objects appear the most in news photos; ii) what the sentiments of news photos are; iii) whether the sentiment of news photos is aligned with the tone of the text; iv) how gender is treated; and v) how differently political candidates are portrayed. To the best of our knowledge, this is the first large-scale study of news photo content using deep learning-based vision APIs.