WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans

In recent years, the widespread use of social media has led to an increase in the generation of toxic and offensive content on online platforms. In response, social media platforms have worked on developing automatic detection methods and employing human moderators to cope with this deluge of offensive content. While various state-of-the-art statistical models have been applied to detect toxic posts, only a few studies focus on detecting the words or expressions that make a post offensive. This motivated the organization of the SemEval-2021 Task 5: Toxic Spans Detection competition, which provided participants with a dataset containing toxic span annotations in English posts. In this paper, we present the WLV-RIT entry for the SemEval-2021 Task 5. Our best-performing neural transformer model achieves a $0.68$ F1-score. Furthermore, we develop MUDES, an open-source framework for multilingual detection of offensive spans based on neural transformers.
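The span-detection step described above amounts to token classification followed by a conversion of token-level predictions back to character offsets, the output format used by the Toxic Spans Detection task. A minimal sketch of that post-processing step, assuming the model already produced binary toxic labels and character offsets per token (function and variable names are hypothetical, not from the paper):

```python
def toxic_char_offsets(token_offsets, toxic_labels):
    """Expand token-level toxic predictions into character offsets.

    token_offsets: list of (start, end) character spans, one per token.
    toxic_labels:  parallel list of 0/1 predictions (1 = toxic).
    Returns the sorted list of toxic character indices, which is the
    submission format of the Toxic Spans Detection task.
    """
    offsets = set()
    for (start, end), label in zip(token_offsets, toxic_labels):
        if label == 1:
            offsets.update(range(start, end))
    return sorted(offsets)


# For the post "you are an idiot", with only the last token predicted
# toxic, the toxic span covers characters 11 through 15.
spans = toxic_char_offsets(
    [(0, 3), (4, 7), (8, 10), (11, 16)],  # per-token character offsets
    [0, 0, 0, 1],                          # per-token toxic predictions
)
```

Fast tokenizers in common NLP libraries can return these per-token character offsets directly, which makes this mapping straightforward in practice.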

Exploring multi-task multi-lingual learning of transformer models for hate speech and offensive speech identification in social media

Thus, social media platforms are often held responsible for framing the views and opinions of a large number of people (Duggan et al., 2017). However, this freedom to voice our opinions has been challenged by the increasing use of hate speech (Mondal et al., 2017). The anonymity of the internet grants people the power to completely change the context of a discussion and suppress a person's opinion (Sticca and Perren, 2013). These hateful posts and comments affect society not only at a micro scale but also at a global level, by influencing people's views on important global events such as elections and protests (Duggan et al., 2017). Given the volume of online communication on various social media platforms and the need for more fruitful communication, there is a growing need to automate the detection of hate speech. For the scope of this paper, we adopt the definition of hate speech and offensive speech given in Mandl et al. (2019): "insulting, hurtful, derogatory, or obscene content directed from one person to another person". The Natural Language Processing (NLP) community has made significant progress toward automating hate speech detection, accelerated by the organization of numerous shared tasks aimed at identifying hate speech (Mandl et al., 2019; Kumar et al., 2018, 2020).

Leveraging Multilingual Transformers for Hate Speech Detection

Detecting and classifying instances of hate in social media text has been a problem of interest in Natural Language Processing in recent years. Our work leverages state-of-the-art Transformer language models to identify hate speech in a multilingual setting. Capturing the intent of a post or a comment on social media involves careful evaluation of the language style, semantic content, and additional pointers such as hashtags and emojis. In this paper, we look at the problem of identifying whether a Twitter post is hateful and offensive or not. We further discriminate the detected toxic content into one of the following three classes: (a) Hate Speech (HATE), (b) Offensive (OFFN), and (c) Profane (PRFN). With a pre-trained multilingual Transformer-based text encoder at the base, we are able to successfully identify and classify hate speech from multiple languages. On the provided testing corpora, we achieve macro F1 scores of 90.29, 81.87, and 75.40 for English, German, and Hindi respectively for hate speech detection, and of 60.70, 53.28, and 49.74 for fine-grained classification. In our experiments, we show the efficacy of Perspective API features for hate speech classification and the effects of exploiting a multilingual training scheme. A feature selection study is provided to illustrate the impact of specific features on the architecture's classification head.
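The macro F1 scores reported above weight each class equally, which matters for the fine-grained HATE/OFFN/PRFN setting where class frequencies are skewed. A minimal sketch of the metric (the example labels below are illustrative, not from the paper's data):

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)


# A rare class that is always missed drags the macro average down
# as much as a frequent one would.
score = macro_f1(
    ["HATE", "OFFN", "PRFN", "HATE"],
    ["HATE", "OFFN", "OFFN", "HATE"],
    ["HATE", "OFFN", "PRFN"],
)
```

In practice one would use a library implementation (e.g. scikit-learn's `f1_score` with `average='macro'`), but the hand-rolled version makes the equal class weighting explicit.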

WLV-RIT at HASOC-Dravidian-CodeMix-FIRE2020: Offensive Language Identification in Code-switched YouTube Comments

This paper describes the WLV-RIT entry to the Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC) shared task 2020. The HASOC 2020 organizers provided participants with annotated datasets containing code-mixed social media posts in Dravidian languages (Malayalam-English and Tamil-English). We participated in Task 1: offensive comment identification in code-mixed Malayalam YouTube comments. In our methodology, we take advantage of available English data by applying cross-lingual contextual word embeddings and transfer learning to make predictions on Malayalam data. We further improve the results using various fine-tuning strategies. Our system achieved a weighted-average F1 score of 0.89 on the test set, ranking 5th out of 12 participants.

fBERT: A Neural Transformer for Identifying Offensive Content

Transformer-based models such as BERT, XLNet, and XLM-R have achieved state-of-the-art performance across various NLP tasks, including the identification of offensive language and hate speech, an important problem in social media. In this paper, we present fBERT, a BERT model retrained on SOLID, the largest English offensive language identification corpus available, with over $1.4$ million offensive instances. We evaluate fBERT's performance on identifying offensive content on multiple English datasets, and we test several thresholds for selecting instances from SOLID. The fBERT model will be made freely available to the community.
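SOLID's instances carry aggregated offensiveness scores from the semi-supervised ensemble that labelled them, so the threshold selection tested above can be sketched as filtering the corpus by a minimum score before retraining. A minimal sketch under that assumption (field and function names are hypothetical, not the paper's actual pipeline):

```python
def select_instances(instances, threshold):
    """Keep instances whose aggregated offensiveness score meets the threshold.

    instances: list of (text, avg_score) pairs, where avg_score is the
    mean prediction of the ensemble that labelled the corpus.
    Raising the threshold trades corpus size for label confidence.
    """
    return [text for text, avg_score in instances if avg_score >= threshold]


# With a 0.7 threshold, only confidently offensive instances are kept
# for retraining.
corpus = [("post a", 0.9), ("post b", 0.4), ("post c", 0.75)]
retraining_set = select_instances(corpus, 0.7)
```

Each candidate threshold yields a different retraining set, and the paper's evaluation across several thresholds corresponds to sweeping this one parameter.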