WLV-RIT at GermEval 2021: Multitask Learning with Transformers to Detect Toxic, Engaging, and Fact-Claiming Comments
Morgan, Skye, Ranasinghe, Tharindu, Zampieri, Marcos
arXiv.org Artificial Intelligence
At the same time, social media sites have increasingly become more prone to offensive content (Hada et al., 2021; Zhu and Bhat, 2021; Bucur et al., 2021). As such, identifying toxic language in social media is a topic that has gained, and continues to gain, traction. Research surrounding the problem of offensive content has centered on the application of computational models that can identify various forms of negative content such as hate speech (Malmasi and Zampieri, 2018; Nozza, 2021), abuse (Corazza et al., 2020), aggression (Kumar et al., 2018, 2020), and cyber-bullying (…, 2020). It is well known that training large neural transformer models often results in long processing times. As GermEval-2021 features three related tasks, we posit that, from a performance standpoint, training a model jointly on the three tasks is likely to be computationally more efficient than training three models in isolation. Moreover, as GermEval-2021 provides a single dataset for the three tasks, MTL can also be used to help improve performance across tasks. We therefore introduce multitask learning, whereby one model predicts all three tasks, as an alternative approach.
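The joint-training setup described above can be sketched as a single shared encoder feeding three task-specific classification heads. The sketch below is illustrative only: the class name, layer sizes, and mean-pooling choice are assumptions, not the authors' actual architecture (which fine-tunes a pretrained German transformer).

```python
# Minimal sketch of a multitask classifier: one shared transformer-style
# encoder with three binary heads (toxic, engaging, fact-claiming).
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class MultitaskCommentClassifier(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, nhead=4, num_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # One head per GermEval-2021 subtask; the encoder is shared, so
        # gradients from all three tasks update the same representation.
        self.heads = nn.ModuleDict({
            "toxic": nn.Linear(d_model, 2),
            "engaging": nn.Linear(d_model, 2),
            "fact_claiming": nn.Linear(d_model, 2),
        })

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))  # (batch, seq, d_model)
        pooled = hidden.mean(dim=1)                   # simple mean pooling
        return {name: head(pooled) for name, head in self.heads.items()}

model = MultitaskCommentClassifier()
logits = model(torch.randint(0, 1000, (2, 16)))  # a batch of 2 comments
```

In training, the three per-task losses would be summed into one objective, so a single backward pass updates the shared encoder; this is what makes joint training cheaper than three isolated models.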
Jul-30-2021