 access control policy


Securing AI Agent Execution

Bühler, Christoph, Biagiola, Matteo, Di Grazia, Luca, Salvaneschi, Guido

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have evolved into AI agents that interact with external tools and environments to perform complex tasks. The Model Context Protocol (MCP) has become the de facto standard for connecting agents with such resources, but security has lagged behind: thousands of MCP servers execute with unrestricted access to host systems, creating a broad attack surface. In this paper, we introduce AgentBound, the first access control framework for MCP servers. AgentBound combines a declarative policy mechanism, inspired by the Android permission model, with a policy enforcement engine that contains malicious behavior without requiring MCP server modifications. We build a dataset containing the 296 most popular MCP servers, and show that access control policies can be generated automatically from source code with 80.9% accuracy. We also show that AgentBound blocks the majority of security threats in several malicious MCP servers, and that the policy enforcement engine introduces negligible overhead. Our contributions provide developers and project managers with a practical foundation for securing MCP servers while maintaining productivity, enabling researchers and tool builders to explore new directions for declarative access control and MCP security.
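The abstract does not publish AgentBound's policy format; a minimal sketch of what an Android-style declarative permission policy for an MCP server, plus a default-deny enforcement check, might look like (all names, permission strings, and the policy schema are illustrative assumptions, not the paper's actual implementation):

```python
# Hypothetical declarative policy for an MCP server: permissions are
# granted explicitly, everything else is denied by default.
POLICY = {
    "server": "filesystem-mcp",
    "permissions": {
        "fs.read": {"paths": ["/workspace"]},  # reads allowed only under /workspace
        "net.connect": {"hosts": []},          # no network access granted
    },
}

def is_allowed(policy, permission, resource):
    """Deny by default; allow only resources an explicit grant covers."""
    grant = policy["permissions"].get(permission)
    if grant is None:
        return False
    if permission == "fs.read":
        return any(resource.startswith(p) for p in grant["paths"])
    if permission == "net.connect":
        return resource in grant["hosts"]
    return False

print(is_allowed(POLICY, "fs.read", "/workspace/data.txt"))  # True
print(is_allowed(POLICY, "net.connect", "evil.example.com"))  # False
```

An enforcement engine wrapping a server this way can block, say, an exfiltration attempt (`net.connect`) without modifying the server itself, which is the property the abstract highlights.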


Exploring Large Language Models for Access Control Policy Synthesis and Summarization

Vatsa, Adarsh, Hall, Bethel, Eiers, William

arXiv.org Artificial Intelligence

Cloud computing is ubiquitous, with a growing number of services being hosted on the cloud every day. Typical cloud compute systems allow administrators to write policies implementing access control rules which specify how access to private data is governed. These policies must be manually written, and due to their complexity can often be error prone. Moreover, existing policies often implement complex access control specifications and thus can be difficult to precisely analyze, making it hard to determine whether their behavior works exactly as intended. Recently, Large Language Models (LLMs) have shown great success in automated code synthesis and summarization. Given this success, they could potentially be used for automatically generating access control policies or aid in understanding existing policies. In this paper, we explore the effectiveness of LLMs for access control policy synthesis and summarization. Specifically, we first investigate diverse LLMs for access control policy synthesis, finding that: although LLMs can effectively generate syntactically correct policies, they have permissiveness issues, generating policies equivalent to the given specification 45.8% of the time for non-reasoning LLMs, and 93.7% of the time for reasoning LLMs. We then investigate how LLMs can be used to analyze policies by introducing a novel semantic-based request summarization approach which leverages LLMs to generate a precise characterization of the requests allowed by a policy. Our results show that while there are significant hurdles in leveraging LLMs for automated policy generation, LLMs show promising results when combined with symbolic approaches in analyzing existing policies.
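The "permissiveness" issue can be made concrete: a generated policy may be syntactically valid yet allow requests the specification denies. A toy sketch, modeling policies as plain allow-predicates over a finite request space (real IAM policies require symbolic analysis, and these action/resource names are invented for illustration):

```python
# Specification: only GetObject on bucket-a objects is allowed.
def spec(action, resource):
    return action == "s3:GetObject" and resource.startswith("bucket-a/")

# A hypothetical LLM-generated policy that is overly permissive:
# it forgot the resource constraint, so bucket-b is also readable.
def generated(action, resource):
    return action == "s3:GetObject"

REQUESTS = [
    ("s3:GetObject", "bucket-a/file"),
    ("s3:GetObject", "bucket-b/file"),
    ("s3:PutObject", "bucket-a/file"),
]

def equivalent(p, q, requests):
    """Two policies are equivalent iff they decide every request identically."""
    return all(p(*r) == q(*r) for r in requests)

def extra_grants(p, q, requests):
    """Requests the generated policy q allows but the spec p denies."""
    return [r for r in requests if q(*r) and not p(*r)]
```

Here `equivalent` fails and `extra_grants` pinpoints the leaked request, which is the kind of precise characterization the paper's semantic summarization aims at.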


LLM-Driven Auto Configuration for Transient IoT Device Collaboration

Shastri, Hetvi, Hanafy, Walid A., Wu, Li, Irwin, David, Srivastava, Mani, Shenoy, Prashant

arXiv.org Artificial Intelligence

Today's Internet of Things (IoT) has evolved from simple sensing and actuation devices to those with embedded processing and intelligent services, enabling rich collaborations between users and their devices. However, enabling such collaboration becomes challenging when transient devices need to interact with host devices in temporarily visited environments. In such cases, fine-grained access control policies are necessary to ensure secure interactions; however, manually implementing them is often impractical for non-expert users. Moreover, at run-time, the system must automatically configure the devices and enforce such fine-grained access control rules. Additionally, the system must address the heterogeneity of devices. In this paper, we present CollabIoT, a system that enables secure and seamless device collaboration in transient IoT environments. CollabIoT employs a Large Language Model (LLM)-driven approach to convert users' high-level intents to fine-grained access control policies. To support secure and seamless device collaboration, CollabIoT adopts capability-based access control for authorization and uses lightweight proxies for policy enforcement, providing hardware-independent abstractions. We implement a prototype of CollabIoT's policy generation and auto configuration pipelines and evaluate its efficacy on an IoT testbed and in large-scale emulated environments. We show that our LLM-based policy generation pipeline is able to generate functional and correct policies with 100% accuracy. At runtime, our evaluation shows that our system configures new devices in ~150 ms, and our proxy-based data plane incurs network overheads of up to 2 ms and access control overheads of up to 0.3 ms.
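Capability-based access control, as the abstract describes it at a high level, means a transient device presents an unforgeable token naming the operations it may invoke, and the proxy checks the token before forwarding. A minimal sketch using an HMAC-signed token (the token format, key handling, and device names are illustrative assumptions, not CollabIoT's actual wire format):

```python
import hmac, hashlib, json

HOST_KEY = b"host-secret"  # in practice, a key provisioned per environment

def issue_capability(device_id, operations):
    """Host issues a token binding a device to its permitted operations."""
    payload = json.dumps({"dev": device_id, "ops": sorted(operations)})
    tag = hmac.new(HOST_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def proxy_authorize(cap, operation):
    """Proxy verifies the token's integrity, then checks the operation."""
    expected = hmac.new(HOST_KEY, cap["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cap["tag"]):
        return False  # forged or tampered token
    return operation in json.loads(cap["payload"])["ops"]
```

Because the check is a single HMAC plus a set lookup, sub-millisecond authorization overhead of the kind the abstract reports is plausible for a design in this family.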


Say What You Mean: Natural Language Access Control with Large Language Models for Internet of Things

Cheng, Ye, Xu, Minghui, Zhang, Yue, Li, Kun, Wu, Hao, Zhang, Yechao, Guo, Shaoyong, Qiu, Wangjie, Yu, Dongxiao, Cheng, Xiuzhen

arXiv.org Artificial Intelligence

Access control in the Internet of Things (IoT) is becoming increasingly complex, as policies must account for dynamic and contextual factors such as time, location, user behavior, and environmental conditions. However, existing platforms either offer only coarse-grained controls or rely on rigid rule matching, making them ill-suited for semantically rich or ambiguous access scenarios. Moreover, the policy authoring process remains fragmented: domain experts describe requirements in natural language, but developers must manually translate them into code, introducing semantic gaps and potential misconfiguration. In this work, we present LACE, the Language-based Access Control Engine, a hybrid framework that leverages large language models (LLMs) to bridge the gap between human intent and machine-enforceable logic. LACE combines prompt-guided policy generation, retrieval-augmented reasoning, and formal validation to support expressive, interpretable, and verifiable access control. It enables users to specify policies in natural language, automatically translates them into structured rules, validates semantic correctness, and makes access decisions using a hybrid LLM-rule-based engine. We evaluate LACE in smart home environments through extensive experiments. LACE achieves 100% correctness in verified policy generation and up to 88% decision accuracy with 0.79 F1-score using DeepSeek-V3, outperforming baselines such as GPT-3.5 and Gemini. The system also demonstrates strong scalability under increasing policy volume and request concurrency. Our results highlight LACE's potential to enable secure, flexible, and user-friendly access control across real-world IoT platforms.
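The hybrid LLM-rule engine LACE describes can be sketched as a two-stage decision: structured rules decide the requests they cover, and only uncovered requests fall through to an LLM judgment. The rule schema and the fail-closed fallback below are illustrative assumptions; the LLM call is stubbed:

```python
# Structured rules generated from natural-language policies (toy examples).
RULES = [
    {"user": "child", "device": "oven", "decision": "deny"},
    {"user": "adult", "device": "oven", "decision": "allow"},
]

def llm_judge(request):
    # Placeholder for the LLM stage: a real system would send the request
    # plus retrieved policy context to a model such as DeepSeek-V3.
    return "deny"  # fail closed when no rule and no confident judgment

def decide(request):
    """Rule match first; semantically ambiguous requests go to the LLM."""
    for rule in RULES:
        if all(request.get(k) == v for k, v in rule.items() if k != "decision"):
            return rule["decision"]
    return llm_judge(request)
```

Keeping the rule stage in front is what lets a system like this stay fast and auditable for the common case while reserving the LLM for semantically rich scenarios.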


Synthesizing Access Control Policies using Large Language Models

Vatsa, Adarsh, Patel, Pratyush, Eiers, William

arXiv.org Artificial Intelligence

Cloud compute systems allow administrators to write access control policies that govern access to private data. While policies are written in convenient languages, such as AWS Identity and Access Management Policy Language, manually written policies often become complex and error prone. In this paper, we investigate whether and how well Large Language Models (LLMs) can be used to synthesize access control policies. Our investigation focuses on the task of taking an access control request specification and zero-shot prompting LLMs to synthesize a well-formed access control policy which correctly adheres to the request specification. We consider two scenarios, one in which the request specification is given as a concrete list of requests to be allowed or denied, and another in which a natural language description is used to specify sets of requests to be allowed or denied. We then argue that for zero-shot prompting, more precise and structured prompts using a syntax-based approach are necessary and experimentally show preliminary results validating our approach.
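The first scenario — a concrete list of allow/deny requests — lends itself to a structured prompt that embeds the request list verbatim rather than describing it in free text. A sketch of such a prompt builder (the prompt wording is an assumption for illustration, not the paper's actual template):

```python
def build_prompt(allow, deny):
    """Build a structured zero-shot prompt from concrete request lists."""
    lines = ["Synthesize an AWS IAM policy (JSON) such that:"]
    lines += [f"- ALLOW action={a} resource={r}" for a, r in allow]
    lines += [f"- DENY  action={a} resource={r}" for a, r in deny]
    lines.append('Output only a JSON object with "Version" and "Statement" keys.')
    return "\n".join(lines)
```

The point of the structure is that every request appears in a fixed, machine-checkable form, so the generated policy can later be validated against exactly the same list.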


LMN: A Tool for Generating Machine Enforceable Policies from Natural Language Access Control Rules using LLMs

Sonune, Pratik, Rai, Ritwik, Sural, Shamik, Atluri, Vijayalakshmi, Kundu, Ashish

arXiv.org Artificial Intelligence

Access control is a fundamental security requirement in any organization for ensuring that only authorized users can access certain information or resources under specific conditions. While enforcement needs to be done in computer systems, access control policies are typically decided by the higher management. For example, in a university system, the Department Chair, Dean and the Provost may take a decision on who can access which object (like Conference room printers, Graduate studies applications, Faculty tenure support letters, etc.) at the Department, School and University level, respectively. Such decisions are often noted down as meeting minutes, email exchanges, or other forms of documentation in a natural language like English (hereinafter referred to as Natural Language Access Control Policies, i.e., NLACPs). For information system level implementation of such decisions, System Security Officers (SSOs) must translate the NLACPs into Machine Enforceable Security Policies (MESPs) using a target access control model like Role-based Access Control (RBAC) or Attribute-based Access Control (ABAC). It is apparent that manual conversion of NLACPs into MESPs not only demands time and resources, it is also error-prone, especially for large organizations with dynamically changing policies.
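To make the NLACP-to-MESP translation concrete, the university example above might land in an RBAC-style MESP like the following, with the standard role-based check (the role and permission identifiers are hypothetical; the abstract does not specify an encoding):

```python
# Machine Enforceable Security Policy (RBAC): roles map to permissions,
# users map to roles. Derived from the natural-language decisions above.
ROLE_PERMS = {
    "department_chair": {("access", "conference_room_printer")},
    "dean": {("access", "graduate_studies_applications")},
    "provost": {("access", "faculty_tenure_support_letters")},
}
USER_ROLES = {"alice": {"dean"}}

def rbac_check(user, action, obj):
    """Allow iff some role held by the user grants (action, obj)."""
    return any((action, obj) in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

A tool like LMN aims to produce tables like `ROLE_PERMS` automatically from meeting minutes and emails, so the SSO reviews structured rules instead of hand-translating prose.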


RAGent: Retrieval-based Access Control Policy Generation

Jayasundara, Sakuna Harinda, Arachchilage, Nalin Asanka Gamagedara, Russello, Giovanni

arXiv.org Artificial Intelligence

Manually generating access control policies from an organization's high-level requirement specifications poses significant challenges. It requires laborious efforts to sift through multiple documents containing such specifications and translate their access requirements into access control policies. Also, the complexities and ambiguities of these specifications often result in errors by system administrators during the translation process, leading to data breaches. However, the automated policy generation frameworks designed to help administrators in this process are unreliable due to limitations, such as the lack of domain adaptation. Therefore, to improve the reliability of access control policy generation, we propose RAGent, a novel retrieval-based access control policy generation framework based on language models. RAGent identifies access requirements from high-level requirement specifications with an average state-of-the-art F1 score of 87.9%. Through retrieval augmented generation, RAGent then translates the identified access requirements into access control policies with an F1 score of 77.9%. Unlike existing frameworks, RAGent generates policies with complex components like purposes and conditions, in addition to subjects, actions, and resources. Moreover, RAGent automatically verifies the generated policies and iteratively refines them through a novel verification-refinement mechanism, further improving the reliability of the process by 3%, reaching the F1 score of 80.6%. We also introduce three annotated datasets for developing access control policy generation frameworks in the future, addressing the data scarcity of the domain.
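The verification-refinement mechanism the abstract describes can be sketched as a loop: generate a policy, verify it against the requirement, and feed verification errors back into the next generation attempt. The loop skeleton and the toy generator/verifier below are illustrative assumptions (RAGent's real stages use a language model and automated verification):

```python
def refine_loop(requirement, generate, verify, max_iters=3):
    """Generate, verify, and refine until the policy passes or attempts run out."""
    feedback = None
    policy = None
    for _ in range(max_iters):
        policy = generate(requirement, feedback)
        ok, feedback = verify(requirement, policy)
        if ok:
            return policy
    return policy  # best effort after max_iters

# Toy stubs: the verifier demands an "action" component; the generator
# adds it only after receiving that feedback, modeling one refinement step.
def toy_generate(requirement, feedback):
    policy = {"subject": "staff", "resource": requirement}
    if feedback == "missing action":
        policy["action"] = "read"
    return policy

def toy_verify(requirement, policy):
    ok = "action" in policy
    return ok, None if ok else "missing action"
```

With these stubs, the loop converges on the second iteration; the paper reports that this kind of iterative refinement lifts translation F1 from 77.9% to 80.6%.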


Learning Attribute-Based and Relationship-Based Access Control Policies with Unknown Values

Bui, Thang, Stoller, Scott D.

arXiv.org Artificial Intelligence

Attribute-Based Access Control (ABAC) and Relationship-Based Access Control (ReBAC) provide a high level of expressiveness and flexibility that promote security and information sharing, by allowing policies to be expressed in terms of attributes of and chains of relationships between entities. Algorithms for learning ABAC and ReBAC policies from legacy access control information have the potential to significantly reduce the cost of migration to ABAC or ReBAC. This paper presents the first algorithms for mining ABAC and ReBAC policies from access control lists (ACLs) and incomplete information about entities, where the values of some attributes of some entities are unknown. We show that the core of this problem can be viewed as learning a concise three-valued logic formula from a set of labeled feature vectors containing unknowns, and we give the first algorithm (to the best of our knowledge) for that problem.
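The three-valued setting means attribute tests can evaluate to true, false, or unknown, and learned formulas must be evaluated under connectives that propagate unknowns. A sketch of the standard Kleene strong connectives, with `None` standing for unknown (the encoding is an illustrative choice; the paper's exact formula language is not given in the abstract):

```python
def k_and(a, b):
    """Kleene strong conjunction: False dominates, unknown otherwise."""
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None  # unknown

def k_or(a, b):
    """Kleene strong disjunction: True dominates, unknown otherwise."""
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None

def k_not(a):
    return None if a is None else not a
```

The useful property for policy mining is that a conjunction can still be decided despite missing data: one false conjunct settles the answer even if another attribute value is unknown.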


A Precedent Approach to Assigning Access Rights

Belim, S. V., Bogachenko, N. F., Kabanov, A. N.

arXiv.org Artificial Intelligence

To design a discretionary access control policy, a technique is proposed that uses the principle of analogies and is based on the properties of both subjects and objects. The values of the security attributes of subjects and objects are chosen as the attributes characterizing these properties. The concept of a precedent is defined as an access rule explicitly specified by the security administrator. The access-matrix interpolation problem is then formulated: the security administrator defines a sequence of precedents, and the process of filling the remaining cells of the access matrix must be automated. A linear order is introduced on the family of sets of security attributes. Principles are developed for filling the access matrix by analogy with the dominant precedent, in accordance with the given order relation. An analysis of the proposed methodology is performed and its main advantages are identified. Keywords: security model, discretionary access control, case-based access control, discretionary security policy.
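The interpolation idea can be sketched as follows: each precedent grants a set of rights at a point in attribute space, and an empty matrix cell inherits the rights of the dominant precedent it covers under the order on attribute sets. The encoding of attributes as numeric tuples and the tie-breaking rule below are illustrative assumptions, not the paper's construction:

```python
def dominates(cell, precedent):
    """Cell covers a precedent if it is componentwise >= that precedent's point."""
    return all(c >= p for c, p in zip(cell, precedent))

def interpolate(cell, precedents):
    """Fill an empty cell with the rights of the greatest covered precedent."""
    candidates = [(pt, rights) for pt, rights in precedents if dominates(cell, pt)]
    if not candidates:
        return set()  # no precedent applies: grant nothing
    best = max(candidates, key=lambda x: x[0])  # dominant precedent
    return best[1]

# Precedents explicitly set by the administrator:
# attribute point (subject level, object level) -> granted rights.
PRECEDENTS = [((1, 1), {"read"}), ((2, 2), {"read", "write"})]
```

For example, a cell at (2, 3) covers both precedents and inherits from the dominant one, while a cell at (1, 2) covers only the first and receives just `read`.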