A Dataset for the Detection of Dehumanizing Language
Paul Engelmann, Peter Brunsgaard Trolle, Christian Hardmeier
–arXiv.org Artificial Intelligence
Trigger Warning: This paper contains examples of hateful content.

Dehumanization can range from blatant to subtle forms of varying degrees (Bain et al., 2009), making automated, general detection difficult. Following Haslam (2006), a sample is considered dehumanizing if it contains at least one of the following categories: negative evaluation of a target group, denial of agency, moral disgust, animal metaphors, or objectification. Animal metaphors and objectification specifically refer to a human being compared to an animal or object with the intent to cause harm. Mendelsohn et al. (2020) present one of the first computational works on dehumanization through explicit feature engineering, using lexicon- and word-embedding-based approaches to detect dehumanizing associations across several years of a New York Times corpus. Outside of this, there is little computational work on dehumanization.
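The labelling rule described above (a sample counts as dehumanizing if at least one of the five categories applies) can be sketched as follows. This is a minimal illustration, not the authors' code; the category names and function are hypothetical.

```python
# Hypothetical sketch of the annotation scheme: a sample is labelled
# dehumanizing if at least one category annotation applies.
CATEGORIES = {
    "negative_evaluation",   # negative evaluation of a target group
    "denial_of_agency",
    "moral_disgust",
    "animal_metaphor",       # human compared to an animal to cause harm
    "objectification",       # human compared to an object to cause harm
}

def is_dehumanizing(annotations: set[str]) -> bool:
    """Derive the binary label from per-category annotations."""
    return bool(annotations & CATEGORIES)

print(is_dehumanizing({"animal_metaphor"}))  # True
print(is_dehumanizing(set()))                # False
```

Multi-label annotations collapse naturally to the binary detection task this way, which matches the paper's framing of general dehumanization detection.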
Feb-13-2024