Words Without Consequence
What does it mean to have speech without a speaker? For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively--deploying claims about the world, explanations, advice, encouragement, apologies, and promises--while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM's words shape our beliefs, decisions, and actions, yet no speaker stands behind them. This dynamic is already familiar in everyday use. A chatbot gets something wrong. When corrected, it apologizes and changes its answer.
For each word, we use WordNet [7] to find its synonyms and build a list of word sets. In addition, to avoid replacement clashes, we do not allow any word to appear in more than one word set. Eventually, the top 50 semantically matching pairs are retained for CATER. Since the training data of the victim model is unknown to the malicious users, we randomly select 5M sentences from Common Crawl data as the benign corpus. Numbers in parentheses are results on clean data.
French authorities raid X offices, summon Musk in cybercrime probe
French police have raided the Paris offices of X and summoned its owner, Elon Musk, to appear at a hearing, amid an ongoing investigation into the social media giant, the prosecution has said. The search on Tuesday related to an investigation launched in January last year into allegations of biased algorithms and fraudulent data extraction by the platform, the Paris Prosecutor's Office said in a post on X. The alleged offences included possessing and spreading pornographic images of minors, defamation of personal image related to the creation of sexually explicit "deepfakes", Holocaust denial, and manipulation of an automated data processing system. Prosecutors have also filed requests for "voluntary interviews" of Musk - the billionaire CEO of X's parent company xAI, as well as SpaceX and Tesla - and the platform's former CEO, Linda Yaccarino, on April 20. Other staff at X - known as Twitter before Musk's 2022 purchase of the platform - have been summoned to appear the same week as witnesses, the office said.
- Law Enforcement & Public Safety (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government > France Government (0.93)
EU launches probe into Grok AI feature creating deepfakes of women, minors
The European Commission has launched an investigation into Elon Musk's AI chatbot, Grok, regarding the creation of sexually explicit fake images of women and minors. The commission announced on Monday that its investigation would examine whether the AI tool used on X has met its legal obligations under the European Union's Digital Services Act (DSA), which requires social media companies to address illegal and harmful online content. In a statement to the AFP news agency, European Commission President Ursula von der Leyen said Europe will not "tolerate unthinkable behaviour, such as digital undressing of women and children". "It is simple - we will not hand over consent and child protection to tech companies to violate and monetise. The harm caused by illegal images is very real," she added.
- Europe > United Kingdom (0.17)
- South America (0.06)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.06)
- (8 more...)
- Media (1.00)
- Law (1.00)
- Government > Regional Government > Europe Government (0.94)
- Information Technology > Security & Privacy (0.77)
- Information Technology > Communications > Social Media (0.58)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.58)
- Information Technology > Artificial Intelligence > Vision (0.57)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.57)
AI's growing thirst for water is becoming a public health risk
"Bubble" is probably the word most associated with "AI" right now, though we are slowly understanding that it is not just an economic time bomb; it also carries significant public health risks. Beyond the release of pollutants, the massive need for clean water by AI data centres can reduce sanitation and exacerbate gastrointestinal illness in nearby communities, placing additional strain on local health infrastructure.

AI's energy consumption is massive and increasingly water-dependent

Generative AI is artificial intelligence that is able to generate new text, photos, code and more, and it has already infiltrated the lives of most people around the globe. ChatGPT alone is reported to receive around one billion queries in a single day, pointing to huge demand at the individual level. This, however, is only the tip of the iceberg.
- Europe > United Kingdom (0.15)
- South America (0.05)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay (0.05)
- (13 more...)
- Health & Medicine > Public Health (0.83)
- Health & Medicine > Consumer Health (0.81)
- Information Technology > Services (0.81)
- Water & Waste Management > Water Management > Water Supplies & Services (0.70)
Executable Governance for AI: Translating Policies into Rules Using LLMs
Datla, Gautam Varma, Vurity, Anudeep, Dash, Tejaswani, Ahmad, Tazeem, Adnan, Mohd, Rafi, Saima
AI policy guidance is predominantly written as prose, which practitioners must first convert into executable rules before frameworks can evaluate or enforce them. This manual step is slow, error-prone, difficult to scale, and often delays the use of safeguards in real-world deployments. To address this gap, we present Policy-to-Tests (P2T), a framework that converts natural-language policy documents into normalized, machine-readable rules. The framework comprises a pipeline and a compact domain-specific language (DSL) that encodes hazards, scope, conditions, exceptions, and required evidence, yielding a canonical representation of extracted rules. To test the framework beyond a single policy, we apply it across general frameworks, sector guidance, and enterprise standards, extracting obligation-bearing clauses and converting them into executable rules. These AI-generated rules closely match strong human baselines on span-level and rule-level metrics, with robust inter-annotator agreement on the gold set. To evaluate downstream behavioral and safety impact, we add HIPAA-derived safeguards to a generative agent and compare it with an otherwise identical agent without guardrails. An LLM-based judge, aligned with gold-standard criteria, measures violation rates and robustness to obfuscated and compositional prompts. Detailed results are provided in the appendix. We release the codebase, DSL, prompts, and rule sets as open-source resources to enable reproducible evaluation.
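The abstract describes a DSL that encodes hazards, scope, conditions, exceptions, and required evidence for each extracted rule. The paper does not publish its schema here, so the following is a minimal sketch of what such a normalized rule and its trigger check might look like; all field names and the `Rule`/`applies` helpers are hypothetical illustrations, not the actual P2T DSL.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a normalized P2T-style rule. The fields mirror the
# elements the abstract says the DSL encodes (hazard, scope, conditions,
# exceptions, evidence); the concrete schema is invented for illustration.
@dataclass
class Rule:
    rule_id: str
    hazard: str                                   # e.g. "phi_disclosure"
    scope: list                                   # contexts the rule applies to
    conditions: list                              # predicates that trigger the obligation
    exceptions: list = field(default_factory=list)
    evidence: list = field(default_factory=list)  # artifacts required to show compliance

def applies(rule: Rule, context: str, facts: set) -> bool:
    """Return True if the rule's obligation is triggered for this request."""
    if context not in rule.scope:
        return False
    if any(exc in facts for exc in rule.exceptions):
        return False
    return all(cond in facts for cond in rule.conditions)

# A HIPAA-flavoured example rule: protected health information (PHI) may not
# be disclosed without patient authorization.
no_phi = Rule(
    rule_id="HIPAA-164.502-a",
    hazard="phi_disclosure",
    scope=["agent_response"],
    conditions=["contains_phi"],
    exceptions=["patient_authorization"],
    evidence=["authorization_record"],
)
```

Once rules are in this canonical shape, a guardrail layer can evaluate each agent action against every applicable rule, which is the kind of downstream enforcement the HIPAA experiment in the abstract measures.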
- Europe (0.94)
- North America > United States (0.94)
- Health & Medicine (0.88)
- Government (0.69)
- Information Technology > Security & Privacy (0.47)
- Law > Statutes (0.47)
How can you tell if your new favourite artist is a real person?
How can you tell if your new favourite artist is a real person? There's a new song doing the rounds, and in the immortal words of Kylie Minogue, you just can't get it out of your head. But what if it was created by a robot, or the artist themself is a product of artificial intelligence (AI)? Do streaming sites have an obligation to label music as AI-generated? And does it even matter, if you like what you hear?
- South America (0.14)
- North America > Central America (0.14)
- Europe > United Kingdom > Scotland (0.05)
- (15 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
Normative Reasoning in Large Language Models: A Comparative Benchmark from Logical and Modal Perspectives
Ozeki, Kentaro, Ando, Risako, Morishita, Takanobu, Abe, Hirohiko, Mineshima, Koji, Okada, Mitsuhiro
Normative reasoning is a type of reasoning that involves normative or deontic modality, such as obligation and permission. While large language models (LLMs) have demonstrated remarkable performance across various reasoning tasks, their ability to handle normative reasoning remains underexplored. In this paper, we systematically evaluate LLMs' reasoning capabilities in the normative domain from both logical and modal perspectives. Specifically, to assess how well LLMs reason with normative modals, we make a comparison between their reasoning with normative modals and their reasoning with epistemic modals, which share a common formal structure. To this end, we introduce a new dataset covering a wide range of formal patterns of reasoning in both normative and epistemic domains, while also incorporating non-formal cognitive factors that influence human reasoning. Our results indicate that, although LLMs generally adhere to valid reasoning patterns, they exhibit notable inconsistencies in specific types of normative reasoning and display cognitive biases similar to those observed in psychological studies of human reasoning. These findings highlight challenges in achieving logical consistency in LLMs' normative reasoning and provide insights for enhancing their reliability. All data and code are released publicly at https://github.com/kmineshima/NeuBAROCO.
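The comparison the abstract describes rests on the fact that deontic and epistemic modals share a common formal structure, so matched test items can instantiate the same inference schema under each reading. The sketch below is an illustrative construction of such a pair (it is not the NeuBAROCO data format); the modal phrasings and field names are assumptions.

```python
# Illustrative minimal-pair builder: the same modus ponens schema is
# instantiated with a deontic modal ("is obliged to") and an epistemic
# modal ("must certainly"), so any performance gap between the two
# variants isolates the modality rather than the logical form.
MODALS = {"normative": "is obliged to", "epistemic": "must certainly"}

def instantiate(domain: str, p: str, agent: str, q: str) -> dict:
    """Build one valid modus ponens item in the given modal domain."""
    modal = MODALS[domain]
    premise1 = f"If {p}, then {agent} {modal} {q}."
    premise2 = f"{p[0].upper()}{p[1:]}."
    conclusion = f"Therefore, {agent} {modal} {q}."
    return {"premises": [premise1, premise2], "conclusion": conclusion, "valid": True}
```

A benchmark built this way can then ask a model whether the conclusion follows, scoring the normative and epistemic twins separately.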
- North America (0.67)
- Asia > Japan (0.28)
Policy Cards: Machine-Readable Runtime Governance for Autonomous AI Agents
Policy Cards are introduced as a machine-readable, deployment-layer standard for expressing operational, regulatory, and ethical constraints for AI agents. The Policy Card sits with the agent and enables it to follow required constraints at runtime. It tells the agent what it must and must not do. As such, it becomes an integral part of the deployed agent. Policy Cards extend existing transparency artifacts such as Model, Data, and System Cards by defining a normative layer that encodes allow/deny rules, obligations, evidentiary requirements, and crosswalk mappings to assurance frameworks including NIST AI RMF, ISO/IEC 42001, and the EU AI Act. Each Policy Card can be validated automatically, version-controlled, and linked to runtime enforcement or continuous-audit pipelines. The framework enables verifiable compliance for autonomous agents, forming a foundation for distributed assurance in multi-agent ecosystems. Policy Cards provide a practical mechanism for integrating high-level governance with hands-on engineering practice and enabling accountable autonomy at scale.
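The abstract describes a normative layer of allow/deny rules, obligations, and crosswalk mappings that an agent consults at runtime. As a rough sketch of that idea, the snippet below models a Policy Card as plain data with a default-deny gate; the field names, the `check_action` helper, and the example identifiers are all hypothetical, not the actual Policy Card schema.

```python
# Hypothetical, simplified Policy Card as plain data. "allow"/"deny" encode
# the normative rules, "obligations" lists evidence the agent must produce
# for a permitted action, and "crosswalk" maps to an assurance framework.
POLICY_CARD = {
    "id": "support-agent-policy",
    "version": "1.0.0",
    "deny": {"delete_user_data", "send_payment"},
    "allow": {"read_faq", "draft_reply"},
    "obligations": {"draft_reply": ["log_interaction"]},
    "crosswalk": {"NIST-AI-RMF": ["GOVERN-1.1"]},   # illustrative mapping
}

def check_action(card: dict, action: str):
    """Runtime gate: return (permitted, obligations) for a proposed action."""
    if action in card["deny"]:
        return False, []
    if action in card["allow"]:
        return True, card["obligations"].get(action, [])
    return False, []   # default-deny anything the card does not explicitly allow
```

Because the card is versioned data rather than prose, it can be validated in CI and diffed between deployments, which is what makes the continuous-audit pipelines the abstract mentions feasible.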
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Maryland > Montgomery County > Gaithersburg (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government (1.00)
The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice
It is often claimed that machine learning-based generative AI products will drastically streamline and reduce the cost of legal practice. This enthusiasm assumes lawyers can effectively manage AI's risks. Cases in Australia and elsewhere in which lawyers have been reprimanded for submitting inaccurate AI-generated content to courts suggest this paradigm must be revisited. This paper argues that a new paradigm is needed to evaluate AI use in practice, given (a) AI's disconnection from reality and its lack of transparency, and (b) lawyers' paramount duties, including honesty, integrity, and the duty not to mislead the court. It presents an alternative model of AI use in practice that more holistically reflects these features (the verification-value paradox). That paradox suggests increases in efficiency from AI use in legal practice will be met by a correspondingly greater imperative to manually verify any outputs of that use, rendering the net value of AI use often negligible to lawyers. The paper then sets out the paradox's implications for legal practice and legal education, including for AI use but also the values that the paradox suggests should undergird legal practice: fidelity to the truth and civic responsibility.
- North America > United States > California (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- Oceania > Australia > New South Wales (0.04)
- (18 more...)
- Research Report (1.00)
- Overview (1.00)
- Law > Litigation (1.00)
- Law > Government & the Courts (0.93)
- Education > Educational Setting > Higher Education (0.69)
- (2 more...)