Collaborating Authors

 Ferreira, João F.


Rango: Adaptive Retrieval-Augmented Proving for Automated Software Verification

arXiv.org Artificial Intelligence

Formal verification using proof assistants, such as Coq, enables the creation of high-quality software. However, the verification process requires significant expertise and manual effort to write proofs. Recent work has explored automating proof synthesis using machine learning and large language models (LLMs). This work has shown that identifying relevant premises, such as lemmas and definitions, can aid synthesis. We present Rango, a fully automated proof synthesis tool for Coq that automatically identifies relevant premises, as well as similar proofs from the current project, and uses them during synthesis. Rango uses retrieval augmentation at every step of the proof to automatically determine which proofs and premises to include in the context of its fine-tuned LLM. In this way, Rango adapts to the project and to the evolving state of the proof. We create a new dataset, CoqStoq, of 2,226 open-source Coq projects and 196,929 theorems from GitHub, which includes both training data and a curated evaluation benchmark of well-maintained projects. On this benchmark, Rango synthesizes proofs for 32.0% of the theorems, which is 29% more theorems than the prior state-of-the-art tool Tactician. Our evaluation also shows that adding relevant proofs to Rango's context increases the number of theorems it proves by 47%.
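To make the retrieval-at-every-step idea concrete, here is a minimal Python sketch of such a loop. It assumes a toy textual similarity measure and a canned generate_tactic stub in place of Rango's learned retrievers and fine-tuned LLM; the function names, the premise and proof "banks", and the stopping condition are illustrative placeholders, not Rango's actual interface to Coq.

```python
# Minimal sketch of per-step retrieval-augmented tactic prediction.
# All names here are hypothetical stand-ins, not Rango's real API.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Cheap textual similarity, standing in for Rango's retrievers."""
    return SequenceMatcher(None, a, b).ratio()


def retrieve(query: str, candidates: list[str], k: int) -> list[str]:
    """Return the k candidates most similar to the current proof state."""
    return sorted(candidates, key=lambda c: similarity(query, c), reverse=True)[:k]


def generate_tactic(context: str, step: int) -> str:
    """Placeholder for the fine-tuned LLM; returns canned tactics for illustration."""
    canned = ["intros.", "induction n.", "simpl.", "auto."]
    return canned[min(step, len(canned) - 1)]


def synthesize_proof(goal: str, proof_bank: list[str], premise_bank: list[str],
                     max_steps: int = 10) -> list[str]:
    state, proof = goal, []
    for step in range(max_steps):
        # Retrieval is re-run at every step, so the context follows the evolving state.
        similar_proofs = retrieve(state, proof_bank, k=2)
        premises = retrieve(state, premise_bank, k=3)
        context = "\n".join(["PREMISES:", *premises,
                             "SIMILAR PROOFS:", *similar_proofs,
                             "GOAL:", state,
                             "PROOF SO FAR:", *proof])
        tactic = generate_tactic(context, step)
        proof.append(tactic)
        if tactic == "auto.":                  # stand-in for "Coq reports no goals left"
            break
        state = state + " | " + tactic         # stand-in for the new proof state from Coq
    return proof


if __name__ == "__main__":
    proofs = ["intros. induction n; simpl; auto.", "intros. apply plus_comm."]
    premises = ["Lemma plus_comm : forall n m, n + m = m + n.",
                "Lemma plus_O_n : forall n, 0 + n = n."]
    print(synthesize_proof("forall n, n + 0 = n", proofs, premises))
```

The design point the sketch tries to capture is that retrieval happens after every tactic, so the premises and example proofs in the prompt track the current proof state rather than only the original theorem statement.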


Framer: Planning Models from Natural Language Action Descriptions

AAAI Conferences

In this paper, we describe an approach for learning planning domain models directly from natural language (NL) descriptions of activity sequences. The modelling problem has been identified as a bottleneck for the widespread exploitation of various technologies in Artificial Intelligence, including automated planners. There have been great advances in modelling-assistance and model-generation tools, including a wide range of domain model acquisition tools. However, modelling tools assume that the user can formulate the problem in some formal language, and even domain model acquisition tools still require input plans in an easily machine-readable format. Providing this type of input is impractical for many potential users. This motivates us to generate planning domain models directly from NL descriptions, as this would provide an important step in extending the widespread adoption of planning techniques. We start from NL descriptions of actions and use NL analysis to construct structured representations, from which we build formal representations of the action sequences. The generated action sequences provide the necessary structured input for inducing a PDDL domain, using domain model acquisition technology. In order to capture a concise planning model, we use an estimate of functional similarity, so sentences that describe similar behaviours are represented by the same planning operator. We validate our approach with a user study, where participants are tasked with describing the activities occurring in several videos. Our system is then used to learn planning domain models from the participants' NL input. We demonstrate that our approach is effective at learning models on these tasks.
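As a rough illustration of the functional-similarity idea, the following Python sketch greedily groups sentences by word overlap and assigns one operator name per group. The Jaccard measure, the threshold, and the verb-picking heuristic are simplifying assumptions made here for illustration; Framer's actual NL analysis and domain model acquisition are considerably richer.

```python
# Sketch of grouping NL action descriptions by a crude functional similarity,
# so similar behaviours map to one (hypothetical) planning operator.
def jaccard(a: set[str], b: set[str]) -> float:
    """Word-overlap similarity between two sets of tokens."""
    return len(a & b) / len(a | b) if a | b else 0.0


def cluster_sentences(sentences: list[str], threshold: float = 0.4) -> list[list[str]]:
    """Greedily group sentences whose word overlap exceeds the threshold."""
    clusters: list[list[str]] = []
    for s in sentences:
        words = set(s.lower().split())
        for cluster in clusters:
            if jaccard(words, set(cluster[0].lower().split())) >= threshold:
                cluster.append(s)
                break
        else:
            clusters.append([s])
    return clusters


def operator_name(sentence: str) -> str:
    """Crude verb pick, assuming sentences of the form 'The <agent> <verb> ...'."""
    words = sentence.split()
    return words[2] if len(words) > 2 else words[-1]


if __name__ == "__main__":
    descriptions = [
        "The robot picks up the red block",
        "The robot picks up the blue block",
        "The robot moves to the table",
        "The robot moves to the shelf",
    ]
    for cluster in cluster_sentences(descriptions):
        print(f"operator '{operator_name(cluster[0])}': {len(cluster)} sentence(s)")
```

Grouping sentences before operator induction is what keeps the learned domain concise: two descriptions of the same behaviour contribute evidence to a single PDDL operator rather than producing two near-duplicate operators.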