Matskevich, Sergey
Trustworthy Formal Natural Language Specifications
Gordon, Colin S., Matskevich, Sergey
Interactive proof assistants are computer programs carefully constructed to check a human-designed proof of a mathematical claim with high confidence in the implementation. However, this only validates the truth of a formal claim, which may have been mistranslated from a claim made in natural language. This is especially problematic when using proof assistants to formally verify the correctness of software with respect to a natural language specification. The translation from informal to formal remains a challenging, time-consuming process that is difficult to audit for correctness. This paper shows that it is possible to build support for specifications written in expressive subsets of natural language, within existing proof assistants, consistent with the principles used to establish trust and auditability in proof assistants themselves. We implement a means to provide specifications in a modularly extensible formal subset of English, and have them automatically translated into formal claims, entirely within the Lean proof assistant. Our approach is extensible (placing no permanent restrictions on grammatical structure), modular (allowing information about new words to be distributed alongside libraries), and produces proof certificates explaining how each word was interpreted and how the sentence's structure was used to compute the meaning. We apply our prototype to the translation of various English descriptions of formal specifications from a popular textbook into Lean formalizations; all can be translated correctly with a modest lexicon, requiring only minor modifications related to lexicon size.
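To give a sense of the informal-to-formal gap the paper addresses, the following hypothetical example (not output from the paper's system) shows an English specification alongside a Lean formalization a tool like this might produce:

```lean
-- English specification: "reversing a list twice yields the original list."
-- One possible Lean 4 formalization of that sentence:
theorem reverse_reverse_spec (α : Type) (l : List α) :
    l.reverse.reverse = l := by
  simp
```

The point of the paper is that producing such a formal claim from the English sentence by hand is error-prone, while an in-assistant translation can be audited via proof certificates.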
Building Helpful Virtual Agents Using Plan Recognition and Planning
Geib, Christopher (Drexel University) | Weerasinghe, Janith (Drexel University) | Matskevich, Sergey (Drexel University) | Kantharaju, Pavan (Drexel University) | Craenen, Bart (Newcastle University) | Petrick, Ronald P. A. (Heriot-Watt University)
This paper presents a new model of cooperative behavior based on the interaction of plan recognition and automated planning. Based on observations of the actions of an "initiator" agent, a "supporter" agent uses plan recognition to hypothesize the plans and goals of the initiator. The supporter agent then proposes and plans for a set of subgoals it will achieve to help the initiator. The approach is demonstrated in an open-source, virtual robot platform.
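The observe, recognize, propose cycle described above can be sketched in a few lines. This is a toy illustration under invented assumptions; the function names, the hard-coded "make tea" plan, and the selection logic are hypothetical placeholders, not the paper's actual algorithms or API.

```python
# Toy sketch of the supporter-agent loop: observe the initiator's actions,
# hypothesize their plan, and propose the remaining steps as subgoals.
# All names and the single hard-coded plan are hypothetical.

def recognize_plans(observations):
    """Hypothesize (goal, plan-steps) pairs consistent with observed actions."""
    # Toy recognizer: picking up a cup suggests the initiator wants tea.
    if "pick_up_cup" in observations:
        return [("make_tea", ["boil_water", "pick_up_cup", "pour_water"])]
    return []

def choose_subgoals(hypotheses, done):
    """Select outstanding plan steps the supporter could achieve on the
    initiator's behalf."""
    subgoals = []
    for _goal, steps in hypotheses:
        subgoals.extend(s for s in steps if s not in done)
    return subgoals

def supporter_step(observations):
    """One observe -> recognize -> propose cycle for the supporter agent."""
    hypotheses = recognize_plans(observations)
    return choose_subgoals(hypotheses, set(observations))

# The supporter offers the steps the initiator has not yet performed.
print(supporter_step(["pick_up_cup"]))  # prints ['boil_water', 'pour_water']
```

In the paper's setting, the recognizer and the subgoal planner are full plan-recognition and automated-planning components rather than the lookup tables used here.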