Child abuse increasing and more complex to police, crime agency says
Child sex abuse is becoming increasingly complex to police and officers are arresting an average of 1,000 potential offenders each month, the National Crime Agency (NCA) says. It says an increasing reliance on online platforms and advances in technology, such as AI image creation, are exacerbating the problem, with algorithms and digital communities connecting offenders to share and promote child sex abuse material. According to the NCA, the number of arrests has roughly doubled in the past three years. Statistically, potential offenders are in every community and victims in every school, the NCA said. It added that police cannot address the issue alone and called on technology companies to do more.
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (11 more...)
- Leisure & Entertainment (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.69)
- North America > United States > New York > Richmond County > New York City (0.14)
- North America > United States > New York > Queens County > New York City (0.14)
- North America > United States > New York > New York County > New York City (0.14)
- (23 more...)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
AI chatbots could help stop prisoner release errors, says justice minister
HMP Wandsworth gets green light to use AI after team sent in to find 'quick fixes' after spate of mistakes
Artificial intelligence chatbots could be used to stop prisoners from being mistakenly released from jail, a justice minister told the House of Lords on Monday. James Timpson said HMP Wandsworth had been given the green light to use AI after a specialised team was sent in to find "some quick fixes". A double manhunt was launched last week after the incorrect release of a sex offender and a fraudster from the prison in south-west London. Release errors over the past fortnight have been seized upon by opposition MPs as evidence of the helplessness of ministers in the face of chaos within the criminal justice system. David Lammy, the justice secretary, is expected to address parliament about the number of missing prisoners when MPs return on Tuesday. It is understood that AI could be used to read and process paper documents; help staff cross-reference names to ensure that inmates are no longer hiding their past crimes behind aliases; merge different datasets; and calculate release dates and sentences.
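One of the uses floated above, cross-referencing names to catch aliases, can be sketched with simple fuzzy string matching. This is a toy illustration only; the names, threshold, and approach are invented here and are not anything the ministry has described.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def possible_aliases(name: str, known: list[str], threshold: float = 0.8) -> list[str]:
    """Return known names similar enough to suggest the same person.

    The 0.8 threshold is an arbitrary placeholder; a real system would
    tune it and combine it with dates of birth and other identifiers.
    """
    return [k for k in known if similarity(name, k) >= threshold]

# A misspelt booking name surfaces the likely canonical record.
print(possible_aliases("Jon Smithe", ["John Smith", "Jane Doe"]))
```

In practice this kind of fuzzy match would only flag candidates for a human caseworker to verify, not make release decisions on its own.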
- Europe > United Kingdom > England > Greater London > London (0.35)
- North America > United States (0.15)
- Europe > United Kingdom > Wales (0.06)
- (5 more...)
What Are the Facts? Automated Extraction of Court-Established Facts from Criminal-Court Opinions
Bendová, Klára, Knap, Tomáš, Černý, Jan, Pour, Vojtěch, Savelka, Jaromir, Kvapilíková, Ivana, Drápal, Jakub
Criminal justice administrative data contain only a limited amount of information about the committed offense. However, there is an unused source of extensive information in continental European courts' decisions: descriptions of criminal behaviors in verdicts by which offenders are found guilty. In this paper, we study the feasibility of extracting these descriptions from publicly available court decisions from Slovakia. We use two different approaches for retrieval: regular expressions and large language models (LLMs). Our baseline was a simple method employing regular expressions to identify typical words occurring before and after the description. The advanced regular expression approach further focused on "spacing" and its normalization (insertion of spaces between individual letters), typical for delineating the description. The LLM approach involved prompting the Gemini Flash 2.0 model to extract the descriptions using predefined instructions. Although the baseline identified descriptions in only 40.5% of verdicts, both methods significantly outperformed it, achieving 97% with advanced regular expressions and 98.75% with LLMs, and 99.5% when combined. Evaluation by law students showed that both advanced methods matched human annotations in about 90% of cases, compared to just 34.5% for the baseline. LLMs fully matched human-labeled descriptions in 91.75% of instances, and a combination of advanced regular expressions with LLMs reached 92%.
AFP developing AI tool to decode gen Z slang amid warning about 'crimefluencers' hunting girls
Federal police say they have identified 59 alleged offenders as being in these online networks and have made an unspecified number of arrests. Australian federal police will develop an AI tool to decode gen Z and gen Alpha slang and emojis in an effort to crack down on sadistic online exploitation and "crimefluencers". The AFP commissioner, Krissy Barrett, used a speech at the National Press Club on Wednesday to warn of the rise of online crime networks of young boys and men who are targeting vulnerable teen and preteen girls. The newly appointed chief outlined how the perpetrators, who are overwhelmingly from English-speaking backgrounds, were grooming victims and then forcing them to "perform serious acts of violence on themselves, their siblings, others or their pets".
- South America > Colombia (0.15)
- North America > United States (0.15)
- Oceania > New Zealand (0.05)
- (4 more...)
Soppia: A Structured Prompting Framework for the Proportional Assessment of Non-Pecuniary Damages in Personal Injury Cases
Applying complex legal rules characterized by multiple, heterogeneously weighted criteria presents a fundamental challenge in judicial decision-making, often hindering the consistent realization of legislative intent. This challenge is particularly evident in the quantification of non-pecuniary damages in personal injury cases. This paper introduces Soppia, a structured prompting framework designed to assist legal professionals in navigating this complexity. By leveraging advanced AI, the system ensures a comprehensive and balanced analysis of all stipulated criteria, fulfilling the legislator's intent that compensation be determined through a holistic assessment of each case. Using the twelve criteria for non-pecuniary damages established in the Brazilian CLT (Art. 223-G) as a case study, we demonstrate how Soppia (System for Ordered Proportional and Pondered Intelligent Assessment) operationalizes nuanced legal commands into a practical, replicable, and transparent methodology. The framework enhances consistency and predictability while providing a versatile and explainable tool adaptable across multi-criteria legal contexts, bridging normative interpretation and computational reasoning toward auditable legal AI.
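A structured prompt of the kind the abstract describes, an ordered, weighted walk through every stipulated criterion before any amount is proposed, might be assembled as follows. The criterion names and weights below are illustrative placeholders, not Soppia's actual configuration or the full text of CLT Art. 223-G.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # relative importance assigned for the assessment

# Placeholder criteria; the statute enumerates twelve.
CRITERIA = [
    Criterion("nature of the legal good offended", 1.0),
    Criterion("intensity of the suffering", 1.0),
    Criterion("possibility of overcoming the harm", 0.8),
]

def build_prompt(case_facts: str, criteria: list[Criterion]) -> str:
    """Assemble an ordered, weighted instruction block for the model."""
    lines = [
        "Assess non-pecuniary damages for the case below.",
        "Address every criterion, in order, before proposing an amount:",
    ]
    for i, c in enumerate(criteria, 1):
        lines.append(f"{i}. {c.name} (weight {c.weight:.1f})")
    lines.append(f"\nCase facts:\n{case_facts}")
    return "\n".join(lines)
```

Forcing the model through an explicit ordered checklist is what makes the assessment auditable: each criterion's treatment can be read off the output and compared against the statute.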
- South America > Brazil > Rio Grande do Sul > Porto Alegre (0.04)
- South America > Brazil > Federal District > Brasília (0.04)
- Law > Torts Law (0.71)
- Law > Litigation (0.46)
- Banking & Finance > Economy (0.46)
AI Generated Child Sexual Abuse Material -- What's the Harm?
Ciardha, Caoilte Ó, Buckley, John, Portnoff, Rebecca S.
The development of generative artificial intelligence (AI) tools capable of producing wholly or partially synthetic child sexual abuse material (AI CSAM) presents profound challenges for child protection, law enforcement, and societal responses to child exploitation. While some argue that the harmfulness of AI CSAM differs fundamentally from other CSAM due to a perceived absence of direct victimization, this perspective fails to account for the range of risks associated with its production and consumption. AI has been implicated in the creation of synthetic CSAM of children who have not previously been abused, the revictimization of known survivors of abuse, the facilitation of grooming, coercion and sexual extortion, and the normalization of child sexual exploitation. Additionally, AI CSAM may serve as a new or enhanced pathway into offending by lowering barriers to engagement, desensitizing users to progressively extreme content, and undermining protective factors for individuals with a sexual interest in children. This paper provides a primer on some key technologies, critically examines the harms associated with AI CSAM, and cautions against claims that it may function as a harm reduction tool, emphasizing how some appeals to harmlessness obscure its real risks and may contribute to inertia in ecosystem responses.
- South America > Brazil (0.04)
- North America > United States > Florida > Orange County (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (4 more...)
- Overview (1.00)
- Research Report > New Finding (0.46)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.90)
- North America > United States > New York > Richmond County > New York City (0.14)
- North America > United States > New York > Queens County > New York City (0.14)
- North America > United States > New York > New York County > New York City (0.14)
- (23 more...)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
Tech firms suggested placing trackers under offenders' skin at meeting with justice secretary
Tracking devices inserted under offenders' skin, robots assigned to contain prisoners and driverless vehicles used to transport them were among the measures proposed by technology companies to ministers who are gathering ideas to tackle the crisis in the UK justice system. The proposals were made at a meeting of more than two dozen tech companies in London last month, chaired by the justice secretary, Shabana Mahmood, minutes seen by the Guardian show. Amid an acute shortage of prison places and probation officers under severe strain, ministers told the companies they wanted ideas for using wearable technologies, behaviour monitoring and geolocation to create a "prison outside of prison". Those present included representatives of Google, Amazon, Microsoft and Palantir, which works closely with the US military and has contracts with the NHS. IBM and the private prison operator Serco also attended alongside tagging and biometric companies, according to a response to a freedom of information request.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Information Technology (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (1.00)
Are You Human? An Adversarial Benchmark to Expose LLMs
Gressel, Gilad, Pankajakshan, Rahul, Mirsky, Yisroel
Large Language Models (LLMs) have demonstrated an alarming ability to impersonate humans in conversation, raising concerns about their potential misuse in scams and deception. Humans have a right to know if they are conversing with an LLM. We evaluate text-based prompts designed as challenges to expose LLM imposters in real-time. To this end we compile and release an open-source benchmark dataset that includes 'implicit challenges' that exploit an LLM's instruction-following mechanism to cause role deviation, and 'explicit challenges' that test an LLM's ability to perform simple tasks typically easy for humans but difficult for LLMs. Our evaluation of 9 leading models from the LMSYS leaderboard revealed that explicit challenges successfully detected LLMs in 78.4% of cases, while implicit challenges were effective in 22.9% of instances. User studies validate the real-world applicability of our methods, with humans outperforming LLMs on explicit challenges (78% vs 22% success rate). Our framework unexpectedly revealed that many study participants were using LLMs to complete tasks, demonstrating its effectiveness in detecting both AI impostors and human misuse of AI tools. This work addresses the critical need for reliable, real-time LLM detection methods in high-stakes conversations.
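One plausible style of 'explicit challenge', a character-counting task that is trivial for a human reading the string but historically error-prone for token-based LLMs, can be sketched as below. This is a toy illustration; the benchmark's actual tasks are in its released dataset, not reproduced here.

```python
import random

def make_explicit_challenge() -> tuple[str, str]:
    """Generate a letter-counting question and its expected answer."""
    word = "".join(random.choice("abc") for _ in range(12))
    target = random.choice("abc")
    question = f"How many times does the letter '{target}' appear in '{word}'?"
    return question, str(word.count(target))

def grade(answer: str, expected: str) -> bool:
    """A wrong count flags the respondent as a likely LLM imposter."""
    return answer.strip() == expected
```

A single challenge is weak evidence on its own; as the paper's detection rates suggest, a deployed check would pose several challenges of both kinds and aggregate the results.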
- Information Technology > Security & Privacy (1.00)
- Government (0.69)