Safe, Untrusted, "Proof-Carrying" AI Agents: toward the agentic lakehouse
Jacopo Tagliabue, Ciro Greco
arXiv.org Artificial Intelligence
Starting from this prototype, we conclude by outlining practical next steps toward a full agentic lakehouse. The paper is organized as follows. After reviewing agent-friendly abstractions (Section II), we address key safety objections for high-stakes scenarios (Section III). Once safety is established, we describe a ReAct [12] loop built on these abstractions (Section IV). We put forward our working prototype as a feasibility demonstration of safe-by-design data agents, not as a full-fledged experimental benchmark. We believe that sharing working code is of great value to the community, especially in times of quickly shifting mental models. However, it is important to remember that our fundamental insights, programmability and safety, can be replicated independently of the chosen APIs. For these reasons, we believe our paper is valuable to a wide range of practitioners: on one hand, those looking for a new mental map of this uncharted territory; on the other, those looking to be inspired by tinkering with existing implementations and inspecting systems working at scale.
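The abstract references a ReAct loop built on the paper's abstractions. As a minimal, hedged sketch of the general ReAct pattern (interleaving a reasoning "thought" with a tool "action" and feeding the observation back), the following Python is purely illustrative: the names (`Step`, `react_loop`, `count_rows`, the toy policy) are assumptions for this sketch, not the paper's actual API or prototype.

```python
# Generic ReAct-style loop sketch: alternate thought -> action -> observation,
# accumulating a trace until a "finish" action is proposed.
# All identifiers here are illustrative; the paper's prototype builds its own
# loop on top of lakehouse abstractions not shown here.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    thought: str   # the agent's reasoning trace for this step
    action: str    # name of the tool to invoke (or "finish")
    arg: str       # argument passed to the tool

def react_loop(goal: str,
               propose: Callable[[str, list], Step],
               tools: dict,
               max_steps: int = 5) -> list:
    """Run a bounded ReAct loop: each proposed Step is executed against
    the tool set, and the observation is appended to the history that
    conditions the next proposal."""
    history: list = []
    for _ in range(max_steps):
        step = propose(goal, history)            # the LLM would pick this
        if step.action == "finish":
            history.append((step, step.arg))     # final answer
            break
        observation = tools[step.action](step.arg)
        history.append((step, observation))
    return history

# Toy usage: a deterministic policy stands in for the language model.
def toy_policy(goal: str, history: list) -> Step:
    if not history:
        return Step("I should count rows first.", "count_rows", "orders")
    return Step("I have the count; done.", "finish", history[-1][1])

toy_tools = {"count_rows": lambda table: "42"}
trace = react_loop("How many orders?", toy_policy, toy_tools)
print(trace[-1][1])  # final answer recorded by the loop
```

In a real agent, `propose` would call an LLM with the goal and the serialized history, and `tools` would wrap safe, programmable data operations; the bounded `max_steps` is one simple way to keep an untrusted agent's execution contained.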
Oct-13-2025