delegation
- North America > United States > Colorado > Boulder County > Boulder (0.04)
- North America > United States > Ohio (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- (2 more...)
- Asia > Middle East > Jordan (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- (2 more...)
- Information Technology > Data Science (0.92)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis (0.45)
'Fallout' Producer Jonathan Nolan on AI: 'We're in Such a Frothy Moment'
The showrunner thinks AI will be good for burgeoning filmmakers, but not for Hollywood blockbusters.

Jonathan Nolan saw this coming. As a screenwriter, he's worked on several of his brother Christopher Nolan's films, from Memento to the Batman movies. Partnered with his wife Lisa Joy, he created HBO's Westworld and executive produced Amazon Prime's The Peripheral. But before that, he cut his TV teeth creating Person of Interest, a CBS procedural about a solitary tech billionaire who creates a piece of surveillance software aimed at stopping crime before it happens. It was fiction, but it's hard not to feel its prescience.

With Fallout, now in its second season, Nolan also has his sights on the future. Based on the video game series of the same name, it's about a postapocalyptic America where everyone must survive in any way they can. So, what does Nolan see happening in the coming decades? For one, he doesn't think AI is going to replace human filmmakers. In fact, he thinks it could help aspiring directors get a foot in the door. He'd also like to see the demise of (most) social media, but understands that may never happen.

For this week's episode of The Big Interview podcast, I asked Nolan about all of those things and more. Below you'll find his thoughts on writing Batman movies, classic cars, and what he'd actually bring to his own doomsday bunker.

Thank you for having me.

I'm delighted to have you here in person in New York. I'm from Canada so my barometer is a little off, but I tend to think of New York as wimpy cold.

No, no, this is real. The older I get, the weaker and more frail I become. So I can't tolerate [it]. I've been in LA for 25 years.
- North America > United States > California (0.47)
- North America > United States > New York (0.45)
- Asia > Middle East > Republic of Türkiye > Batman Province > Batman (0.24)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Media > Film (1.00)
- (2 more...)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Communications > Social Media (0.88)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)
Ask not what AI can do, but what AI should do: Towards a framework of task delegability
While artificial intelligence (AI) holds promise for addressing societal challenges, issues of exactly which tasks to automate and to what extent to do so remain understudied. We approach this problem of task delegability from a human-centered perspective by developing a framework on human perception of task delegation to AI. We consider four high-level factors that can contribute to a delegation decision: motivation, difficulty, risk, and trust. To obtain an empirical understanding of human preferences in different tasks, we build a dataset of 100 tasks from academic papers, popular media portrayal of AI, and everyday life, and administer a survey based on our proposed framework. We find little preference for full AI control and a strong preference for machine-in-the-loop designs, in which humans play the leading role. Among the four factors, trust is the most correlated with human preferences of optimal human-machine delegation. This framework represents a first step towards characterizing human preferences of AI automation across tasks. We hope this work encourages future efforts towards understanding such individual attitudes; our goal is to inform the public and the AI research community rather than dictating any direction in technology development.
Delegated Classification
When machine learning is outsourced to a rational agent, conflicts of interest might arise and severely impact predictive performance. In this work, we propose a theoretical framework for incentive-aware delegation of machine learning tasks. We model delegation as a principal-agent game, in which accurate learning can be incentivized by the principal using performance-based contracts. Adapting the economic theory of contract design to this setting, we define budget-optimal contracts and prove they take a simple threshold form under reasonable assumptions. In the binary-action case, the optimality of such contracts is shown to be equivalent to the classic Neyman-Pearson lemma, establishing a formal connection between contract design and statistical hypothesis testing. Empirically, we demonstrate that budget-optimal contracts can be constructed using small-scale data, leveraging recent advances in the study of learning curves and scaling laws. Performance and economic outcomes are evaluated using synthetic and real-world classification tasks.
- Asia > Middle East > Jordan (0.04)
- North America > United States > New York (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- (3 more...)
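The threshold form of budget-optimal contracts described in the abstract can be illustrated with a toy principal-agent sketch. The threshold, bonus, and effort/accuracy numbers are invented; this is a minimal illustration of the contract shape, not the paper's construction:

```python
# Toy illustration of a threshold contract for delegated learning:
# the principal pays a fixed bonus only when measured accuracy clears a
# threshold. All names and numbers here are hypothetical.

def threshold_contract(threshold: float, bonus: float):
    """Return a payment rule t(accuracy) of the simple threshold form."""
    def payment(accuracy: float) -> float:
        return bonus if accuracy >= threshold else 0.0
    return payment

def agent_best_response(payment, effort_levels):
    """A rational agent picks the effort maximizing payment minus cost."""
    # effort_levels: list of (cost, resulting_accuracy) pairs
    return max(effort_levels, key=lambda ca: payment(ca[1]) - ca[0])

pay = threshold_contract(threshold=0.9, bonus=5.0)
efforts = [(0.0, 0.7), (1.0, 0.85), (2.0, 0.92)]  # (cost, accuracy)
cost, acc = agent_best_response(pay, efforts)
print(f"agent chooses cost={cost}, accuracy={acc}")
```

With these numbers, only the high-effort option clears the threshold, so the bonus makes accurate learning the agent's best response — the incentive-alignment effect the contract is designed to produce.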
OpenID Connect for Agents (OIDC-A) 1.0: A Standard Extension for LLM-Based Agent Identity and Authorization
Nagabhushanaradhya, Subramanya
OpenID Connect for Agents (OIDC-A) 1.0 is an extension to OpenID Connect Core 1.0 that provides a comprehensive framework for representing, authenticating, and authorizing LLM-based agents within the OAuth 2.0 ecosystem. As autonomous AI agents become increasingly prevalent in digital systems, there is a critical need for standardized protocols to establish agent identity, verify agent attestation, represent delegation chains, and enable fine-grained authorization based on agent attributes. This specification defines standard claims, endpoints, and protocols that address these requirements while maintaining compatibility with existing OAuth 2.0 and OpenID Connect infrastructure. The proposed framework introduces mechanisms for agent identity representation, delegation chain validation, attestation verification, and capability-based authorization, providing a foundation for secure and trustworthy agent-to-service interactions in modern distributed systems.
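A minimal sketch of what such agent claims and delegation-chain validation might look like. The claim names `agent_type`, `agent_model`, `delegation_chain`, and `capabilities` are illustrative assumptions, not necessarily the spec's registered names; only `iss`, `sub`, and `aud` are standard OpenID Connect claims:

```python
# Illustrative sketch of agent-identity claims in an OIDC-style ID token.
# Non-standard claim names below are assumptions for illustration;
# consult the OIDC-A specification for the actual registered names.
id_token_claims = {
    "iss": "https://idp.example.com",
    "sub": "agent-7f3a",
    "aud": "https://api.example.com",
    "agent_type": "llm",
    "agent_model": "example-model-v1",   # hypothetical attribute
    "delegation_chain": [                # user -> orchestrator -> agent
        {"delegator": "user-42", "delegatee": "orchestrator-1"},
        {"delegator": "orchestrator-1", "delegatee": "agent-7f3a"},
    ],
    "capabilities": ["read:calendar", "send:email"],
}

def chain_is_contiguous(chain, final_subject):
    """Each link's delegatee must be the next link's delegator, and the
    last delegatee must be the token subject."""
    for prev, nxt in zip(chain, chain[1:]):
        if prev["delegatee"] != nxt["delegator"]:
            return False
    return bool(chain) and chain[-1]["delegatee"] == final_subject

print(chain_is_contiguous(id_token_claims["delegation_chain"],
                          id_token_claims["sub"]))  # True
```

The contiguity check is one example of the kind of delegation-chain validation the abstract calls for; a real verifier would also check signatures, expiry, and attestation evidence on each link.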
AVEC: Bootstrapping Privacy for Local LLMs
This position paper presents AVEC (Adaptive Verifiable Edge Control), a framework for bootstrapping privacy for local language models by enforcing privacy at the edge with explicit verifiability for delegated queries. AVEC introduces an adaptive budgeting algorithm that allocates per-query differential privacy parameters based on sensitivity, local confidence, and historical usage, and uses verifiable transformation with on-device integrity checks. We formalize guarantees using Rényi differential privacy with odometer-based accounting, and establish utility ceilings, delegation-leakage bounds, and impossibility results for deterministic gating and hash-only certification. Our evaluation is simulation-based by design to study mechanism behavior and accounting; we do not claim deployment readiness or task-level utility with live LLMs. The contribution is a conceptual architecture and theoretical foundation that chart a pathway for empirical follow-up on privately bootstrapping local LLMs.
- Information Technology > Security & Privacy (1.00)
- Law (0.68)
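The adaptive budgeting idea from the AVEC abstract can be sketched as follows. The allocation rule, its constants, and the class below are invented stand-ins that only capture the stated inputs (sensitivity, local confidence, historical usage); this is not the paper's algorithm:

```python
# Hypothetical sketch of per-query differential-privacy budget allocation
# in the spirit of AVEC's adaptive budgeting: epsilon shrinks with query
# sensitivity and past spending, and grows with local confidence. The
# formula and constants are invented for illustration.

class BudgetOdometer:
    def __init__(self, total_epsilon: float):
        self.total = total_epsilon
        self.spent = 0.0  # running account of consumed budget

    def allocate(self, sensitivity: float, confidence: float) -> float:
        remaining = self.total - self.spent
        if remaining <= 0:
            return 0.0  # budget exhausted: refuse the delegated query
        # Invented rule: spend a confidence-weighted slice of what's left,
        # discounted by sensitivity so risky queries consume less at once.
        eps = min(remaining, 0.2 * remaining * confidence / (1.0 + sensitivity))
        self.spent += eps
        return eps

odo = BudgetOdometer(total_epsilon=1.0)
e1 = odo.allocate(sensitivity=0.5, confidence=0.9)  # low-risk query
e2 = odo.allocate(sensitivity=2.0, confidence=0.4)  # high-risk query
print(round(e1, 3), round(e2, 3), round(odo.spent, 3))
```

The odometer pattern — tracking cumulative spend and refusing once the total is reached — is the accounting idea the abstract formalizes with Rényi differential privacy; the simple additive tracking here is a simplification of that composition.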
Not someone, but something: Rethinking trust in the age of medical AI
As artificial intelligence (AI) becomes embedded in healthcare, trust in medical decision-making is changing fast. Nowhere is this shift more visible than in radiology, where AI tools are increasingly embedded across the imaging workflow -- from scheduling and acquisition to interpretation, reporting, and communication with referrers and patients. This opinion paper argues that trust in AI isn't a simple transfer from humans to machines -- it's a dynamic, evolving relationship that must be built and maintained. Rather than debating whether AI belongs in medicine, it asks: what kind of trust must AI earn, and how? Drawing from philosophy, bioethics, and system design, it explores the key differences between human trust and machine reliability -- emphasizing transparency, accountability, and alignment with the values of good care. It argues that trust in AI shouldn't be built on mimicking empathy or intuition, but on thoughtful design, responsible deployment, and clear moral responsibility. The goal is a balanced view -- one that avoids blind optimism and reflexive fear. Trust in AI must be treated not as a given, but as something to be earned over time.
- North America > Canada (0.04)
- Europe > Middle East (0.04)
- Asia > Middle East (0.04)
- (2 more...)
- Information Technology > Security & Privacy (0.47)
- Health & Medicine > Nuclear Medicine (0.38)
- Health & Medicine > Diagnostic Medicine > Imaging (0.38)
- Health & Medicine > Government Relations & Public Policy (0.34)