Uncovering Hidden Violent Tendencies in LLMs: A Demographic Analysis via Behavioral Vignettes
Large language models (LLMs) are increasingly proposed for detecting and responding to violent content online, yet their ability to reason about morally ambiguous, real-world scenarios remains underexamined. We present the first study to evaluate LLMs using a validated social science instrument designed to measure human responses to everyday conflict, namely the Violent Behavior Vignette Questionnaire (VBVQ). To assess potential bias, we introduce persona-based prompting that varies race, age, and geographic identity within the United States. Six LLMs developed across different geopolitical and organizational contexts are evaluated under a unified zero-shot setting. Our study reveals two key findings: (1) LLMs' surface-level text generation often diverges from their internal preference for violent responses; (2) their violent tendencies vary across demographics, frequently contradicting established findings in criminology, social science, and psychology.
AI cameras to detect violence on Sydney trains
CCTV cameras on Sydney's heavy rail network will be augmented with artificial intelligence over the next six months to automatically detect and report suspicious and violent incidents. Transport for NSW plans to trial the technology to analyse footage captured by the cameras, as part of a new initiative to improve safety for women travelling on public transport at night. It is just one of four winning ideas from the Safety After Dark Innovation Challenge, which offered applicants equity-free seed funding and support through TfNSW's digital accelerator. Researchers from the University of Wollongong's SMART Infrastructure Facility pitched the AI software, which can automatically analyse real-time camera feeds and alert operators. "The AI will be trained to detect incidents such as people fighting, a group of agitated persons, people following someone else, and arguments or other abnormal behaviour," SMART lecturer and team lead Johan Barthelemy said.