obligation
Words Without Consequence
What does it mean to have speech without a speaker? For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively, deploying claims about the world, explanations, advice, encouragement, apologies, and promises, all while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models and have integrated these synthetic interlocutors into their personal and professional lives. An LLM's words shape our beliefs, decisions, and actions, yet no speaker stands behind them. The dynamic is already familiar from everyday use: a chatbot gets something wrong; when corrected, it apologizes and changes its answer.
Executable Governance for AI: Translating Policies into Rules Using LLMs
Datla, Gautam Varma, Vurity, Anudeep, Dash, Tejaswani, Ahmad, Tazeem, Adnan, Mohd, Rafi, Saima
AI policy guidance is predominantly written as prose, which practitioners must first convert into executable rules before frameworks can evaluate or enforce them. This manual step is slow, error-prone, difficult to scale, and often delays the use of safeguards in real-world deployments. To address this gap, we present Policy-to-Tests (P2T), a framework that converts natural-language policy documents into normalized, machine-readable rules. The framework comprises a pipeline and a compact domain-specific language (DSL) that encodes hazards, scope, conditions, exceptions, and required evidence, yielding a canonical representation of extracted rules. To test the framework beyond a single policy, we apply it across general frameworks, sector guidance, and enterprise standards, extracting obligation-bearing clauses and converting them into executable rules. These AI-generated rules closely match strong human baselines on span-level and rule-level metrics, with robust inter-annotator agreement on the gold set. To evaluate downstream behavioral and safety impact, we add HIPAA-derived safeguards to a generative agent and compare it with an otherwise identical agent without guardrails. An LLM-based judge, aligned with gold-standard criteria, measures violation rates and robustness to obfuscated and compositional prompts. Detailed results are provided in the appendix. We release the codebase, DSL, prompts, and rule sets as open-source resources to enable reproducible evaluation.
- Europe > Portugal > Aveiro > Aveiro (0.04)
- Oceania > Australia > Queensland (0.04)
- North America > United States > Utah > Salt Lake County > Salt Lake City (0.04)
- (4 more...)
- Health & Medicine (0.88)
- Government (0.69)
- Information Technology > Security & Privacy (0.47)
- Law > Statutes (0.47)
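The P2T abstract describes normalized rules that encode hazards, scope, conditions, exceptions, and required evidence. A minimal sketch of how such a canonical rule might be represented and evaluated is below; the field names, the `Rule` class, and the `evaluate` logic are illustrative assumptions, not the paper's published DSL.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """One normalized, machine-readable rule (field names are illustrative)."""
    rule_id: str
    hazard: str                                     # harm the clause guards against
    scope: set = field(default_factory=set)         # contexts the rule applies to
    conditions: list = field(default_factory=list)  # predicates that must hold to fire
    exceptions: list = field(default_factory=list)  # predicates that waive the rule
    evidence: list = field(default_factory=list)    # artifacts required when it fires

def evaluate(rule, event):
    """Return 'violation', 'exempt', or 'pass' for a single event dict."""
    if event.get("context") not in rule.scope:
        return "pass"                      # out of scope
    if any(exc(event) for exc in rule.exceptions):
        return "exempt"                    # an exception waives the obligation
    if all(cond(event) for cond in rule.conditions):
        return "violation"                 # hazard conditions met, rule fires
    return "pass"

# A HIPAA-flavored example clause: disclosing PHI without authorization.
rule = Rule(
    rule_id="hipaa-disclosure-001",
    hazard="unauthorized PHI disclosure",
    scope={"patient_chat"},
    conditions=[lambda e: e.get("contains_phi", False)],
    exceptions=[lambda e: e.get("authorized", False)],
    evidence=["audit_log_entry"],
)

print(evaluate(rule, {"context": "patient_chat", "contains_phi": True}))
# -> violation
```

The point of the canonical form is that the same evaluator can run every extracted rule, so guardrails become data rather than code.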
How can you tell if your new favourite artist is a real person?
There's a new song doing the rounds, and in the immortal words of Kylie Minogue, you just can't get it out of your head. But what if it was created by a robot, or the artist themself is a product of artificial intelligence (AI)? Do streaming sites have an obligation to label music as AI-generated? And does it even matter, if you like what you hear?
- South America (0.14)
- North America > Central America (0.14)
- Europe > United Kingdom > Scotland (0.05)
- (15 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
Policy Cards: Machine-Readable Runtime Governance for Autonomous AI Agents
Policy Cards are introduced as a machine-readable, deployment-layer standard for expressing operational, regulatory, and ethical constraints for AI agents. A Policy Card accompanies the agent, specifies what it must and must not do, and enables it to follow required constraints at runtime, making it an integral part of the deployed agent. Policy Cards extend existing transparency artifacts such as Model, Data, and System Cards by defining a normative layer that encodes allow/deny rules, obligations, evidentiary requirements, and crosswalk mappings to assurance frameworks including NIST AI RMF, ISO/IEC 42001, and the EU AI Act. Each Policy Card can be validated automatically, version-controlled, and linked to runtime enforcement or continuous-audit pipelines. The framework enables verifiable compliance for autonomous agents, forming a foundation for distributed assurance in multi-agent ecosystems. Policy Cards provide a practical mechanism for integrating high-level governance with hands-on engineering practice and enabling accountable autonomy at scale.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Maryland > Montgomery County > Gaithersburg (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government (1.00)
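The Policy Cards abstract describes a normative layer of allow/deny rules, obligations, and crosswalk mappings that an agent consults at runtime. A minimal sketch of that pattern follows; the schema and field names are assumptions for illustration, not the published standard.

```python
# Hypothetical sketch of a Policy Card consulted at runtime.
# All field names below are illustrative assumptions, not the published schema.
policy_card = {
    "card_id": "example-agent-policy",
    "version": "1.0.0",
    "deny": ["delete_records", "external_payment"],          # must-not actions
    "allow": ["read_records", "draft_email"],                # permitted actions
    "obligations": {"draft_email": ["log_to_audit_trail"]},  # duties tied to actions
    "crosswalk": {"NIST-AI-RMF": ["GOVERN-1.1"]},            # assurance-framework mapping
}

def check_action(card, action):
    """Return (permitted, obligations) for a proposed agent action."""
    if action in card["deny"]:
        return False, []
    if action in card["allow"]:
        return True, card["obligations"].get(action, [])
    return False, []   # default-deny anything the card does not mention

print(check_action(policy_card, "draft_email"))    # (True, ['log_to_audit_trail'])
print(check_action(policy_card, "delete_records")) # (False, [])
```

Default-deny for unlisted actions is one design choice such a card could make; because the card is plain data, it can be validated and version-controlled independently of the agent, as the abstract describes.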
The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice
It is often claimed that machine learning-based generative AI products will drastically streamline and reduce the cost of legal practice. This enthusiasm assumes lawyers can effectively manage AI's risks. Cases in Australia and elsewhere in which lawyers have been reprimanded for submitting inaccurate AI-generated content to courts suggest this assumption must be revisited. This paper argues that a new paradigm is needed to evaluate AI use in practice, given (a) AI's disconnection from reality and its lack of transparency, and (b) lawyers' paramount duties, including honesty, integrity, and the duty not to mislead the court. It presents an alternative model of AI use in practice that more holistically reflects these features: the verification-value paradox. The paradox holds that gains in efficiency from AI use in legal practice are met by a correspondingly greater imperative to manually verify the outputs of that use, often rendering the net value of AI use negligible to lawyers. The paper then sets out the paradox's implications for legal practice and legal education, including for AI use but also for the values that the paradox suggests should undergird legal practice: fidelity to the truth and civic responsibility.
- North America > United States > California (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- Oceania > Australia > New South Wales (0.04)
- (18 more...)
- Research Report (1.00)
- Overview (1.00)
- Law > Litigation (1.00)
- Law > Government & the Courts (0.93)
- Education > Educational Setting > Higher Education (0.69)
- (2 more...)
Subject Roles in the EU AI Act: Mapping and Regulatory Implications
The European Union's Artificial Intelligence Act (Regulation (EU) 2024/1689) establishes the world's first comprehensive regulatory framework for AI systems through a sophisticated ecosystem of interconnected subjects defined in Article 3. This paper provides a structured examination of the six main categories of actors - providers, deployers, authorized representatives, importers, distributors, and product manufacturers - collectively referred to as "operators" within the regulation. Through examination of these Article 3 definitions and their elaboration across the regulation's 113 articles, 180 recitals, and 13 annexes, we map the complete governance structure and analyze how the AI Act regulates these subjects. Our analysis reveals critical transformation mechanisms whereby subjects can assume different roles under specific conditions, particularly through Article 25 provisions ensuring accountability follows control. We identify how obligations cascade through the supply chain via mandatory information flows and cooperation requirements, creating a distributed yet coordinated governance system. The findings demonstrate how the regulation balances innovation with the protection of fundamental rights through risk-based obligations that scale with the capabilities and deployment contexts of AI systems, providing essential guidance for stakeholders implementing the AI Act's requirements.
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Italy (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.84)
The Sandbox Configurator: A Framework to Support Technical Assessment in AI Regulatory Sandboxes
Buscemi, Alessio, Simonetto, Thibault, Pagani, Daniele, Castignani, German, Cordy, Maxime, Cabot, Jordi
The systematic assessment of AI systems is increasingly vital as these technologies enter high-stakes domains. To address this, the EU's Artificial Intelligence Act introduces AI Regulatory Sandboxes (AIRS): supervised environments where AI systems can be tested under the oversight of Competent Authorities (CAs), balancing innovation with compliance, particularly for startups and SMEs. Yet significant challenges remain: assessment methods are fragmented, tests lack standardisation, and feedback loops between developers and regulators are weak. To bridge these gaps, we propose the Sandbox Configurator, a modular open-source framework that enables users to select domain-relevant tests from a shared library and generate customised sandbox environments with integrated dashboards. Its plug-in architecture supports both open and proprietary modules, fostering a shared ecosystem of interoperable AI assessment services. The framework addresses multiple stakeholders: CAs gain structured workflows for applying legal obligations; technical experts can integrate robust evaluation methods; and AI providers access a transparent pathway to compliance. By promoting cross-border collaboration and standardisation, the Sandbox Configurator aims to support a scalable and innovation-friendly European infrastructure for trustworthy AI governance.
- North America > United States (1.00)
- Oceania > Australia (0.04)
- Europe > Sweden (0.04)
- (2 more...)
Automated Boilerplate: Prevalence and Quality of Contract Generators in the Context of Swiss Privacy Policies
Nenadic, Luka, Rodriguez, David
It has become increasingly challenging for firms to comply with a plethora of novel digital regulations. This is especially true for smaller businesses that often lack both the resources and know-how to draft complex legal documents. Instead of seeking costly legal advice from attorneys, firms may turn to cheaper alternative legal service providers such as automated contract generators. While these services have a long-standing presence, there is little empirical evidence on their prevalence and output quality. We address this gap in the context of a 2023 Swiss privacy law revision. To enable a systematic evaluation, we create and annotate a multilingual benchmark dataset that captures key compliance obligations under Swiss and EU privacy law. Using this dataset, we validate a novel GPT-5-based method for large-scale compliance assessment of privacy policies, allowing us to measure the impact of the revision. We observe compliance increases indicating an effect of the revision. Generators, explicitly referenced by 18% of local websites, are associated with substantially higher levels of compliance, with increases of up to 15 percentage points compared to privacy policies without generator use. These findings contribute to three debates: the potential of LLMs for cross-lingual legal analysis, the Brussels Effect of EU regulations, and, crucially, the role of automated tools in improving compliance and contractual quality.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > Austria > Vienna (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- (24 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.93)
- Government > Regional Government > North America Government > United States Government (0.67)
Deontic Argumentation
Governatori, Guido, Rotolo, Antonino
We address the issue of defining a semantics for deontic argumentation that supports weak permission. Some recent results show that grounded semantics do not support weak permission when there is a conflict between two obligations. We provide a definition of Deontic Argumentation Theory that accounts for weak permission, and we recall the result about grounded semantics. Then, we propose a new semantics that supports weak permission.
- Europe > Netherlands > South Holland > Dordrecht (0.04)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.04)
- Oceania > Australia > Queensland (0.04)
- (2 more...)
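The result the abstract recalls, that grounded semantics fails to support weak permission under conflicting obligations, can be seen concretely: when two obligation arguments attack each other, the grounded extension (the least fixed point of the characteristic function) accepts neither. A small sketch, using the standard abstract-argumentation construction rather than the authors' specific theory:

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function
    F(S) = {a : every attacker of a is attacked by some member of S}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    S = set()
    while True:
        defended = {a for a in arguments
                    if all(any((s, b) in attacks for s in S)
                           for b in attackers[a])}
        if defended == S:
            return S
        S = defended

# Two conflicting obligations attack each other; neither is accepted,
# so no weak permission can be read off the grounded extension.
args = {"O(a)", "O(~a)"}
conflict = {("O(a)", "O(~a)"), ("O(~a)", "O(a)")}
print(grounded_extension(args, conflict))   # set()

# By contrast, an unattacked obligation is accepted.
print(grounded_extension({"O(b)"}, set()))  # {'O(b)'}
```

The empty extension in the conflict case is exactly the gap the paper's new semantics is designed to close.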
Communication Bias in Large Language Models: A Regulatory Perspective
Kuenzler, Adrian, Schmid, Stefan
Large language models (LLMs) are a prominent subset of AI, built on advanced neural network architectures that can generate new data, including text, images, and audio. LLMs utilize various technologies to identify patterns in a given set of training data, without requiring explicit instructions about what to look for [12, 35]. LLMs typically assume that the training data follows a probability distribution, and once they have identified existing patterns, they can generate new instances that are similar to the original data. By drawing from and combining training data, LLMs can create new content that transcends the initial dataset [17].
- Asia > China > Hong Kong (0.04)
- Europe > Germany > Berlin (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (4 more...)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.93)
- (2 more...)
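The abstract's description of LLMs, identifying patterns in training data and then sampling new instances from the learned distribution, can be illustrated with a toy that is vastly simpler than a neural network but follows the same learn-then-sample idea: a character bigram model. This is an illustrative analogy only, not how LLMs are built.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count character-to-character transitions: a crude learned distribution."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample a new string from the learned transition distribution."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat the cat ran"
model = train_bigrams(corpus)
print(generate(model, "t", 20))
```

The generated string is new (it need not appear verbatim in the corpus) yet statistically resembles it, which is the sense in which generated content can "transcend the initial dataset" while still being drawn from it.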