SafeSpeech: A Comprehensive and Interactive Tool for Analysing Sexist and Abusive Language in Conversations
Xingwei Tan, Chen Lyu, Hafiz Muhammad Umer, Sahrish Khan, Mahathi Parvatham, Lois Arthurs, Simon Cullen, Shelley Wilson, Arshad Jhumka, Gabriele Pergola
arXiv.org Artificial Intelligence
Detecting toxic language, including sexism, harassment, and abusive behaviour, remains a critical challenge, particularly in its subtle and context-dependent forms. Existing approaches largely focus on isolated message-level classification, overlooking toxicity that emerges across conversational contexts. To promote and enable future research in this direction, we introduce SafeSpeech, a comprehensive platform for toxic content detection and analysis that bridges message-level and conversation-level insights. The platform integrates fine-tuned classifiers and large language models (LLMs) to enable multi-granularity detection, toxicity-aware conversation summarization, and persona profiling. SafeSpeech also incorporates explainability mechanisms, such as perplexity gain analysis, to highlight the linguistic elements driving predictions. Evaluations on benchmark datasets, including EDOS, OffensEval, and HatEval, demonstrate that the platform reproduces state-of-the-art performance across multiple tasks, including fine-grained sexism detection.
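The abstract does not spell out the exact formulation of perplexity gain, but the idea can be sketched as the change in a language model's perplexity over a message when a single token is removed: tokens whose removal lowers perplexity the most are highlighted as drivers of the prediction. In this minimal sketch, `score_fn` is a hypothetical interface returning per-token log-probabilities under some language model; it is not SafeSpeech's actual API.

```python
import math

def perplexity(log_probs):
    # Perplexity = exp(-mean per-token log-probability).
    return math.exp(-sum(log_probs) / len(log_probs))

def perplexity_gain(score_fn, tokens):
    # score_fn(tokens) -> list of per-token log-probs (hypothetical LM interface).
    # For each token, measure how perplexity changes when it is removed;
    # a strongly negative gain marks a token the model found surprising.
    base = perplexity(score_fn(tokens))
    gains = []
    for i, tok in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]
        gains.append((tok, perplexity(score_fn(reduced)) - base))
    return gains
```

With a toy `score_fn` that assigns lower log-probability to an offensive token, the token whose removal yields the largest perplexity drop is the one flagged as driving the score.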
Mar-9-2025