Secure Autonomous Agent Payments: Verifying Authenticity and Intent in a Trustless Environment

Acharya, Vivek

arXiv.org Artificial Intelligence

Artificial intelligence (AI) agents are increasingly capable of initiating financial transactions on behalf of users or other agents. This evolution introduces a fundamental challenge: verifying both the authenticity of an autonomous agent and the true intent behind its transactions in a decentralized, trustless environment. Traditional payment systems assume human authorization, but autonomous, agent-led payments remove that safeguard. This paper presents a blockchain-based framework that cryptographically authenticates and verifies the intent of every AI-initiated transaction. The proposed system leverages decentralized identity (DID) standards and verifiable credentials to establish agent identities, on-chain intent proofs to record user authorization, and zero-knowledge proofs (ZKPs) to preserve privacy while ensuring policy compliance. Additionally, secure execution environments (TEE-based attestations) guarantee the integrity of agent reasoning and execution. The hybrid on-chain/off-chain architecture provides an immutable audit trail linking user intent to payment outcome. Through qualitative analysis, the framework demonstrates strong resistance to impersonation, unauthorized transactions, and misalignment of intent. This work lays the foundation for secure, auditable, and intent-aware autonomous economic agents, enabling a future of verifiable trust and accountability in AI-driven financial ecosystems.
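The core idea of an on-chain intent proof, as described above, is binding a user's authorization to the exact transaction parameters an agent later submits. A minimal sketch of that binding, using an HMAC as a stand-in for the DID-bound signatures and on-chain anchoring the paper proposes (all keys and field names here are illustrative):

```python
import hashlib
import hmac
import json

def sign_intent(user_key: bytes, intent: dict) -> str:
    """User authorizes an intent; in the full framework this digest
    would be signed with a DID-bound key and anchored on-chain."""
    payload = json.dumps(intent, sort_keys=True).encode()
    return hmac.new(user_key, payload, hashlib.sha256).hexdigest()

def verify_intent(user_key: bytes, intent: dict, proof: str) -> bool:
    """Verifier recomputes the proof over the submitted transaction."""
    return hmac.compare_digest(sign_intent(user_key, intent), proof)

key = b"user-secret"  # stand-in for the user's private key
intent = {"agent": "did:example:agent1", "payee": "did:example:merchant",
          "max_amount": 50, "currency": "USD"}
proof = sign_intent(key, intent)

assert verify_intent(key, intent, proof)
# An agent that inflates the amount fails verification:
assert not verify_intent(key, {**intent, "max_amount": 5000}, proof)
```

Because the proof covers the canonicalized intent, any post-authorization tampering by the agent invalidates it, which is the misalignment-of-intent defense the abstract claims.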


Identity Management for Agentic AI: The new frontier of authorization, authentication, and security for an AI agent world

South, Tobin, Nagabhushanaradhya, Subramanya, Dissanayaka, Ayesha, Cecchetti, Sarah, Fletcher, George, Lu, Victor, Pietropaolo, Aldo, Saxe, Dean H., Lombardo, Jeff, Shivalingaiah, Abhishek Maligehalli, Bounev, Stan, Keisner, Alex, Kesselman, Andor, Proser, Zack, Fahs, Ginny, Bunyea, Andrew, Moskowitz, Ben, Tulshibagwale, Atul, Greenwood, Dazza, Pei, Jiaxin, Pentland, Alex

arXiv.org Artificial Intelligence

The rapid rise of AI agents presents urgent challenges in authentication, authorization, and identity management. Current agent-centric protocols (like MCP) highlight the demand for clarified best practices in authentication and authorization. Looking ahead, ambitions for highly autonomous agents raise complex long-term questions regarding scalable access control, agent-centric identities, AI workload differentiation, and delegated authority. This OpenID Foundation whitepaper is for stakeholders at the intersection of AI agents and access management. It outlines the resources already available for securing today's agents and presents a strategic agenda to address the foundational authentication, authorization, and identity problems pivotal for tomorrow's widespread autonomous systems.


DistilLock: Safeguarding LLMs from Unauthorized Knowledge Distillation on the Edge

Mohanty, Asmita, Kang, Gezheng, Gao, Lei, Annavaram, Murali

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have demonstrated strong performance across diverse tasks, but fine-tuning them typically relies on cloud-based, centralized infrastructures. This requires data owners to upload potentially sensitive data to external servers, raising serious privacy concerns. An alternative approach is to fine-tune LLMs directly on edge devices using local data; however, this introduces a new challenge: the model owner must transfer proprietary models to the edge, which risks intellectual property (IP) leakage. To address this dilemma, we propose DistilLock, a TEE-assisted fine-tuning framework that enables privacy-preserving knowledge distillation on the edge. In DistilLock, a proprietary foundation model is executed within a trusted execution environment (TEE) enclave on the data owner's device, acting as a secure black-box teacher. This setup preserves both data privacy and model IP by preventing direct access to model internals. Furthermore, DistilLock employs a model obfuscation mechanism to offload obfuscated weights to untrusted accelerators for efficient knowledge distillation without compromising security. We demonstrate that DistilLock prevents unauthorized knowledge distillation processes and model-stealing attacks while maintaining high computational efficiency, offering a secure and practical solution for edge-based LLM personalization.
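The black-box-teacher setup above can be illustrated with a minimal sketch: callers query the enclave-hosted teacher for soft labels only and train a student against the usual distillation loss, never touching the sealed weights. The `EnclaveTeacher` interface is a hypothetical stand-in, not DistilLock's actual API:

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

class EnclaveTeacher:
    """Stand-in for the TEE-hosted teacher: the data owner sees only
    soft labels, never the proprietary weights sealed inside."""
    def __init__(self, weights):
        self._weights = weights  # sealed inside the enclave

    def soft_labels(self, features, temperature=2.0):
        logits = [sum(w * x for w, x in zip(row, features))
                  for row in self._weights]
        return softmax(logits, temperature)

def kd_loss(student_probs, teacher_probs):
    """KL(teacher || student), the standard distillation objective."""
    return sum(t * math.log(t / s)
               for t, s in zip(teacher_probs, student_probs) if t > 0)

teacher = EnclaveTeacher([[0.5, -0.2], [0.1, 0.9]])
t_probs = teacher.soft_labels([1.0, 2.0])
loss = kd_loss([0.5, 0.5], t_probs)
assert loss >= 0.0  # KL divergence is non-negative
```

The privacy/IP trade-off is visible in the interface itself: local features stay on-device, and the only model signal crossing the enclave boundary is the temperature-softened probability vector.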


FDA approves first AI tool to predict breast cancer risk

FOX News

Senior medical analyst Dr. Marc Siegel discusses advancements in artificial intelligence aimed at predicting an individual's future risk of breast cancer and the increased health risks from cannabis as users age. The U.S. Food and Drug Administration (FDA) has approved the first artificial intelligence (AI) tool to predict breast cancer risk. The authorization was confirmed by digital health tech company Clairity, the developer of Clairity Breast – a novel, image-based prognostic platform designed to predict five-year breast cancer risk from a routine screening mammogram. In a press release, Clairity shared its plans to launch the AI platform across health systems through 2025. Most risk assessment models for breast cancer rely heavily on age and family history, according to Clairity.


A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control

Huang, Ken, Narajala, Vineeth Sai, Yeoh, John, Ross, Jason, Raskar, Ramesh, Harkati, Youssef, Huang, Jerry, Habler, Idan, Hughes, Chris

arXiv.org Artificial Intelligence

Traditional Identity and Access Management (IAM) systems, primarily designed for human users or static machine identities via protocols such as OAuth, OpenID Connect (OIDC), and SAML, prove fundamentally inadequate for the dynamic, interdependent, and often ephemeral nature of AI agents operating at scale within Multi-Agent Systems (MAS): computational systems composed of multiple interacting intelligent agents that work collectively. This paper posits the imperative for a novel Agentic AI IAM framework: We deconstruct the limitations of existing protocols when applied to MAS, illustrating with concrete examples why their coarse-grained controls, single-entity focus, and lack of context-awareness falter. We then propose a comprehensive framework built upon rich, verifiable Agent Identities (IDs), leveraging Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs), that encapsulate an agent's capabilities, provenance, behavioral scope, and security posture. Our framework includes an Agent Naming Service (ANS) for secure and capability-aware discovery, dynamic fine-grained access control mechanisms, and critically, a unified global session management and policy enforcement layer for real-time control and consistent revocation across heterogeneous agent communication protocols. We also explore how Zero-Knowledge Proofs (ZKPs) enable privacy-preserving attribute disclosure and verifiable policy compliance. We outline the architecture, operational lifecycle, innovative contributions, and security considerations of this new IAM paradigm, aiming to establish the foundational trust, accountability, and security necessary for the burgeoning field of agentic AI and the complex ecosystems they will inhabit.
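The fine-grained, credential-driven access check described above can be sketched as follows. The field names are illustrative stand-ins for a W3C-style Verifiable Credential, not the paper's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCredential:
    """Illustrative stand-in for a Verifiable Credential binding an
    agent's DID to its capabilities and provenance."""
    subject_did: str
    issuer_did: str
    capabilities: frozenset
    revoked: bool = False

def authorize(cred: AgentCredential, action: str, trusted_issuers: set) -> bool:
    """Fine-grained check: trusted issuer, live credential, action in scope."""
    return (cred.issuer_did in trusted_issuers
            and not cred.revoked
            and action in cred.capabilities)

cred = AgentCredential("did:example:agent42", "did:example:org",
                       frozenset({"read:calendar", "send:email"}))

assert authorize(cred, "send:email", {"did:example:org"})
assert not authorize(cred, "transfer:funds", {"did:example:org"})
```

Revocation here is a single flag for brevity; the framework's global session layer would instead consult a revocation registry so that a compromised agent loses access across all protocols at once.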


Authenticated Delegation and Authorized AI Agents

South, Tobin, Marro, Samuele, Hardjono, Thomas, Mahari, Robert, Whitney, Cedric Deslandes, Greenwood, Dazza, Chan, Alan, Pentland, Alex

arXiv.org Artificial Intelligence

The rapid deployment of autonomous AI agents creates urgent challenges around authorization, accountability, and access control in digital spaces. New standards are needed to know whom AI agents act on behalf of and guide their use appropriately, protecting online spaces while unlocking the value of task delegation to autonomous agents. We introduce a novel framework for authenticated, authorized, and auditable delegation of authority to AI agents, where human users can securely delegate and restrict the permissions and scope of agents while maintaining clear chains of accountability. This framework builds on existing identification and access management protocols, extending OAuth 2.0 and OpenID Connect with agent-specific credentials and metadata, maintaining compatibility with established authentication and web infrastructure. Further, we propose a framework for translating flexible, natural language permissions into auditable access control configurations, enabling robust scoping of AI agent capabilities across diverse interaction modalities. Taken together, this practical approach facilitates immediate deployment of AI agents while addressing key security and accountability concerns, working toward ensuring agentic AI systems perform only appropriate actions and providing a tool for digital service providers to enable AI agent interactions without risking harm from scalable interaction.
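Extending OAuth 2.0 for delegation, as the abstract describes, plausibly builds on the `act` (actor) claim from RFC 8693 token exchange, which names the party acting on the subject's behalf. A minimal sketch of validating such a token, with an assumed claim layout (the paper's exact agent metadata may differ):

```python
import time

def validate_delegated_token(token: dict, required_scope: str) -> bool:
    """Check a decoded access token carrying an RFC 8693-style `act`
    claim identifying the AI agent acting for the human subject."""
    if token.get("exp", 0) <= time.time():
        return False  # expired
    if required_scope not in token.get("scope", "").split():
        return False  # outside the delegated scope
    return "act" in token  # must name the acting agent

token = {
    "sub": "user@example.com",            # human principal
    "act": {"sub": "agent://assistant"},  # delegated AI agent (assumed URI form)
    "scope": "calendar.read email.send",
    "exp": time.time() + 3600,
}

assert validate_delegated_token(token, "email.send")
assert not validate_delegated_token(token, "payments.write")
```

Keeping the human in `sub` and the agent in `act` preserves the chain of accountability the paper emphasizes: resource servers can log both identities while enforcing only the narrower delegated scope.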


Rupert Murdoch's Dow Jones and New York Post sue AI firm for 'illegal copying'

The Guardian

"This suit is brought by news publishers who seek redress for Perplexity's brazen scheme to compete for readers while simultaneously freeriding on the valuable content the publishers produce," according to the lawsuit filed in the southern district of New York by the Wall Street Journal parent Dow Jones and the New York Post. Perplexity did not immediately respond to emails from Reuters seeking comment. The AI company is among the leading startups attempting to uproot the search engine market dominated by Alphabet's Google. It assembles information from webpages it deems to be authoritative, then provides a summary directly within Perplexity's own tool. Perplexity uses a variety of large language models (LLMs) to generate its summaries, from OpenAI to Meta's open-source model Llama.


CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment

Li, Qinfeng, Xie, Yangfan, Du, Tianyu, Shen, Zhiqiang, Qin, Zhenghan, Peng, Hao, Zhao, Xinkui, Zhu, Xianwei, Yin, Jianwei, Zhang, Xuhong

arXiv.org Artificial Intelligence

Proprietary large language models (LLMs) demonstrate exceptional generalization ability across various tasks. Additionally, deploying LLMs on edge devices is trending for efficiency and privacy reasons. However, edge deployment of proprietary LLMs introduces new security threats: attackers who obtain an edge-deployed LLM can easily use it as a base model for various tasks due to its high generalization ability, which we call foundational capability stealing. Unfortunately, existing model protection mechanisms are often task-specific and fail to protect general-purpose LLMs, as they mainly focus on protecting task-related parameters using trusted execution environments (TEEs). Although some recent TEE-based methods are able to protect the overall model parameters in a computation-efficient way, they still suffer from prohibitive communication costs between the TEE and CPU/GPU, making them impractical to deploy for edge LLMs. To protect the foundational capabilities of edge LLMs, we propose CoreGuard, a computation- and communication-efficient model protection approach against model stealing on edge devices. The core component of CoreGuard is a lightweight and propagative authorization module residing in the TEE. Extensive experiments show that CoreGuard achieves security protection equivalent to black-box guarantees with negligible overhead.


TransLinkGuard: Safeguarding Transformer Models Against Model Stealing in Edge Deployment

Li, Qinfeng, Shen, Zhiqiang, Qin, Zhenghan, Xie, Yangfan, Zhang, Xuhong, Du, Tianyu, Yin, Jianwei

arXiv.org Artificial Intelligence

Proprietary large language models (LLMs) have been widely applied in various scenarios. Additionally, deploying LLMs on edge devices is trending for efficiency and privacy reasons. However, edge deployment of proprietary LLMs introduces new security challenges: edge-deployed models are exposed as white-box accessible to users, enabling adversaries to conduct effective model stealing (MS) attacks. Unfortunately, existing defense mechanisms fail to provide effective protection. Specifically, we identify four critical protection properties that existing methods fail to simultaneously satisfy: (1) maintaining protection after a model is physically copied; (2) authorizing model access at the request level; (3) safeguarding against runtime reverse engineering; (4) achieving high security with negligible runtime overhead. To address the above issues, we propose TransLinkGuard, a plug-and-play model protection approach against model stealing on edge devices. The core part of TransLinkGuard is a lightweight authorization module residing in a secure environment, e.g., a TEE. The authorization module freshly authorizes each request based on its input. Extensive experiments show that TransLinkGuard achieves security protection equivalent to black-box guarantees with negligible overhead.
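The per-request TEE authorization idea shared by CoreGuard and TransLinkGuard can be sketched as follows: a copied model is useless without a fresh, input-bound tag from the enclave. This is a simplified HMAC-based stand-in (the real modules transform activations, not just gate them), and `ENCLAVE_KEY` is an illustrative sealed secret:

```python
import hashlib
import hmac

ENCLAVE_KEY = b"sealed-in-tee"  # illustrative; never leaves the enclave

def enclave_authorize(request_id: str, features: list) -> bytes:
    """Runs inside the TEE: derive an unlock tag bound to this exact
    request and input, so authorization cannot be replayed or copied."""
    digest = hashlib.sha256(repr(features).encode()).digest()
    return hmac.new(ENCLAVE_KEY, request_id.encode() + digest,
                    hashlib.sha256).digest()

def locked_inference(request_id: str, features: list, tag: bytes) -> float:
    """Untrusted side: refuses to serve without a valid enclave tag.
    (Tag recomputation is inlined here for brevity; in deployment the
    check is an enclave round-trip.)"""
    expected = enclave_authorize(request_id, features)
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("request not authorized by TEE module")
    return sum(features)  # stand-in for the real forward pass

x = [0.5, 0.5]
tag = enclave_authorize("req-1", x)
assert locked_inference("req-1", x, tag) == 1.0
```

Because the tag commits to both the request ID and the input, an attacker who physically copies the model weights still cannot satisfy property (1) or (2) above without the sealed key.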


Extracting Norms from Contracts Via ChatGPT: Opportunities and Challenges

Haque, Amanul, Singh, Munindar P.

arXiv.org Artificial Intelligence

We investigate the effectiveness of ChatGPT in extracting norms from contracts. Norms provide a natural way to engineer multiagent systems by capturing how to govern the interactions between two or more autonomous parties. We extract norms of commitment, prohibition, authorization, and power, along with associated norm elements (the parties involved, antecedents, and consequents) from contracts. Our investigation reveals ChatGPT's effectiveness and limitations in norm extraction from contracts. ChatGPT demonstrates promising performance in norm extraction without requiring training or fine-tuning, thus obviating the need for annotated data, which is not generally available in this domain. However, we found some limitations of ChatGPT in extracting these norms that lead to incorrect norm extractions. The limitations include oversight of crucial details, hallucination, incorrect parsing of conjunctions, and empty norm elements. Enhanced norm extraction from contracts can foster the development of more transparent and trustworthy formal agent interaction specifications, thereby contributing to the improvement of multiagent systems.
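The failure modes listed above (hallucinated norms, empty norm elements) suggest validating the model's output before use. A minimal sketch of such a post-hoc check, assuming a hypothetical JSON output schema (the paper itself prompts ChatGPT in natural language):

```python
import json

NORM_TYPES = {"commitment", "prohibition", "authorization", "power"}

def validate_norms(llm_output: str) -> list:
    """Parse LLM-extracted norms and drop entries exhibiting the
    failure modes the paper reports: invalid norm types and empty
    norm elements. Schema (type/subject/object/antecedent/consequent)
    is an assumption for illustration."""
    checked = []
    for norm in json.loads(llm_output):
        if norm.get("type") not in NORM_TYPES:
            continue  # hallucinated norm type
        if not (norm.get("subject") and norm.get("object")):
            continue  # empty norm elements
        checked.append(norm)
    return checked

raw = json.dumps([
    {"type": "commitment", "subject": "Seller", "object": "Buyer",
     "antecedent": "payment received",
     "consequent": "deliver goods within 30 days"},
    {"type": "wish", "subject": "Seller", "object": "Buyer"},  # invalid type
])
assert len(validate_norms(raw)) == 1
```

A filter like this does not fix incorrect parsing of conjunctions, but it does make the two most mechanical error classes detectable before the norms feed into a multiagent specification.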