Mealy machines


Simplicity Lies in the Eye of the Beholder: A Strategic Perspective on Controllers in Reactive Synthesis

Randour, Mickael

arXiv.org Artificial Intelligence

In the game-theoretic approach to controller synthesis, we model the interaction between a system to be controlled and its environment as a game between these entities, and we seek an appropriate (e.g., winning or optimal) strategy for the system. This strategy then serves as a formal blueprint for a real-world controller. A common belief is that simple (e.g., using limited memory) strategies are better: corresponding controllers are easier to conceive and understand, and cheaper to produce and maintain. This invited contribution focuses on the complexity of strategies in a variety of synthesis contexts. We discuss recent results concerning memory and randomness, and take a brief look at what lies beyond our traditional notions of complexity for strategies.
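A finite-memory strategy of the kind discussed above can be encoded directly as a Mealy machine whose states are the memory contents. The following toy sketch (not from the paper; the "req"/"grant" objective and state names are invented for illustration) shows a one-bit-memory strategy that grants exactly after two consecutive requests:

```python
# A finite-memory strategy encoded as a Mealy machine: states are memory
# contents, inputs are environment observations, outputs are system actions.
# Toy objective: output "grant" exactly when the environment has just
# issued two consecutive "req" observations.

MEMORY = {
    # (state, observation) -> (next_state, action)
    ("s0", "req"):   ("s1", "idle"),
    ("s0", "quiet"): ("s0", "idle"),
    ("s1", "req"):   ("s0", "grant"),
    ("s1", "quiet"): ("s0", "idle"),
}

def run_strategy(observations, start="s0"):
    """Feed a sequence of observations through the strategy; return actions."""
    state, actions = start, []
    for obs in observations:
        state, action = MEMORY[(state, obs)]
        actions.append(action)
    return actions

print(run_strategy(["req", "req", "quiet", "req", "req"]))
# -> ['idle', 'grant', 'idle', 'idle', 'grant']
```

The size of such a table (here, two states) is one natural measure of strategy complexity.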


A "good regulator theorem" for embodied agents

Virgo, Nathaniel, Biehl, Martin, Baltieri, Manuel, Capucci, Matteo

arXiv.org Artificial Intelligence

In a classic paper, Conant and Ashby claimed that "every good regulator of a system must be a model of that system." Artificial Life has produced many examples of systems that perform tasks with apparently no model in sight; these suggest Conant and Ashby's theorem doesn't easily generalise beyond its restricted setup. Nevertheless, here we show that a similar intuition can be fleshed out in a different way: whenever an agent is able to perform a regulation task, it is possible for an observer to interpret it as having "beliefs" about its environment, which it "updates" in response to sensory input. This notion of belief updating provides a notion of model that is more sophisticated than Conant and Ashby's, as well as a theorem that is more broadly applicable. However, it necessitates a change in perspective, in that the observer plays an essential role in the theory: models are not a mere property of the system but are imposed on it from outside. Our theorem holds regardless of whether the system is regulating its environment in a classic control theory setup, or whether it's regulating its own internal state; the model is of its environment either way. The model might be trivial, however, and this is how the apparent counterexamples are resolved.


Data-Driven Probabilistic Evaluation of Logic Properties with PAC-Confidence on Mealy Machines

Plambeck, Swantje, Salamati, Ali, Huellermeier, Eyke, Fey, Goerschwin

arXiv.org Artificial Intelligence

Cyber-Physical Systems (CPS) are complex systems that require powerful models for tasks like verification, diagnosis, or debugging. Often, suitable models are not available and manual extraction is difficult. Data-driven approaches then provide a solution to, e.g., diagnosis tasks and verification problems based on data collected from the system. In this paper, we consider CPS with a discrete abstraction in the form of a Mealy machine. We propose a data-driven approach to determine the safety probability of the system on a finite horizon of n time steps. The approach is based on the Probably Approximately Correct (PAC) learning paradigm. Thus, we elaborate a connection between discrete logic and probabilistic reachability analysis of systems, especially providing an additional confidence on the determined probability. The learning process follows an active learning paradigm, where new learning data is sampled in a guided way after an initial learning set is collected. We validate the approach with a case study on an automated lane-keeping system.
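The PAC flavour of such a guarantee can be illustrated with a standard Hoeffding-bound calculation (a generic sketch, not the paper's method: the sample-size formula and the simulated 0.9-safe system are assumptions for illustration):

```python
import math
import random

def pac_sample_size(eps, delta):
    """Hoeffding bound: samples needed so the empirical safety frequency is
    within eps of the true probability with confidence at least 1 - delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def estimate_safety(sample_run, n):
    """Empirical safety probability from n independent sampled runs,
    where sample_run() returns True iff a sampled run stays safe."""
    return sum(sample_run() for _ in range(n)) / n

random.seed(0)
n = pac_sample_size(eps=0.05, delta=0.01)            # 1060 samples
p_hat = estimate_safety(lambda: random.random() < 0.9, n)
print(n, round(p_hat, 3))
```

The paper's active-learning scheme guides where these samples are drawn, rather than sampling blindly as above.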


State Matching and Multiple References in Adaptive Active Automata Learning

Kruger, Loes, Junges, Sebastian, Rot, Jurriaan

arXiv.org Artificial Intelligence

Active automata learning (AAL) is a method to infer state machines by interacting with black-box systems. Adaptive AAL aims to reduce the sample complexity of AAL by incorporating domain-specific knowledge in the form of (similar) reference models. Such reference models appear naturally when learning multiple versions or variants of a software system. In this paper, we present state matching, which allows flexible use of the structure of these reference models by the learner. State matching is the main ingredient of adaptive L#, a novel framework for adaptive learning, built on top of L#. Our empirical evaluation shows that adaptive L# improves the state of the art by up to two orders of magnitude.


The single-use restriction for register automata and transducers over infinite alphabets

Stefański, Rafał

arXiv.org Artificial Intelligence

This thesis studies the single-use restriction for register automata and transducers over infinite alphabets. The restriction requires that a read-access to a register should have the side effect of destroying its contents. This constraint results in robust classes of languages and transductions. For automata models, we show that one-way register automata, two-way register automata, and orbit-finite monoids have the same expressive power. For transducer models, we show that single-use Mealy machines and single-use two-way transducers admit versions of the Krohn-Rhodes decomposition theorem. Moreover, single-use Mealy machines are equivalent to an algebraic model called local algebraic semigroup transductions. Additionally, we show that single-use two-way transducers are equivalent to single-use streaming string transducers (SSTs) over infinite alphabets and to regular list functions with atoms. Compared with the previous work arXiv:1907.10504, this thesis offers a coherent narrative on the single-use restriction. We introduce an abstract notion of single-use functions and use them to define all the discussed single-use models. We also introduce and study the algebraic models of local semigroup transduction and local rational semigroup transduction.
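The core idea of the single-use restriction, that reading a register destroys its contents, can be sketched in a few lines (an illustrative toy, not the thesis's formal model):

```python
class SingleUseRegister:
    """A register whose read destroys its contents, as in the single-use
    restriction: read-access has the side effect of emptying the register."""

    def __init__(self):
        self._value, self._full = None, False

    def write(self, value):
        self._value, self._full = value, True

    def read(self):
        if not self._full:
            raise RuntimeError("register already consumed")
        value = self._value
        self._value, self._full = None, False   # destroy on read
        return value

r = SingleUseRegister()
r.write("a")
print(r.read())          # "a"
try:
    r.read()             # second read fails: contents were destroyed
except RuntimeError as e:
    print(e)
```

Preventing a register value from being duplicated by repeated reads is what yields the robust equivalences between the automaton, monoid, and transducer models listed above.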


Output-decomposed Learning of Mealy Machines

Koenders, Rick, Moerman, Joshua

arXiv.org Artificial Intelligence

We present an active automata learning algorithm which learns a decomposition of a finite state machine, based on projecting onto individual outputs. This is dual to a recent compositional learning algorithm by Labbaf et al. (2023). When projecting the outputs to a smaller set, the model itself is reduced in size. By having several such projections, we do not lose any information and the full system can be reconstructed. Depending on the structure of the system this reduces the number of queries drastically, as shown by a preliminary evaluation of the algorithm.
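The projection idea can be made concrete with a toy Mealy machine whose output is a pair; each projection keeps one component, and zipping the projected runs reconstructs the full behaviour (the machine below is an invented example, not one from the paper, and real projected machines would additionally be minimised):

```python
# Hypothetical two-output Mealy machine: output is a pair (light, sound).
FULL = {
    ("q0", "a"): ("q1", ("on", "beep")),
    ("q0", "b"): ("q0", ("off", "mute")),
    ("q1", "a"): ("q0", ("off", "beep")),
    ("q1", "b"): ("q1", ("on", "mute")),
}

def project(machine, i):
    """Keep only the i-th output component; states are kept as-is here,
    though in general the projected machine can be much smaller."""
    return {k: (nxt, out[i]) for k, (nxt, out) in machine.items()}

def run(machine, word, start="q0"):
    state, outs = start, []
    for sym in word:
        state, out = machine[(state, sym)]
        outs.append(out)
    return outs

word = ["a", "b", "a"]
lights = run(project(FULL, 0), word)
sounds = run(project(FULL, 1), word)
# zipping the projections reconstructs the full output sequence
assert list(zip(lights, sounds)) == run(FULL, word)
```

Learning each (smaller) projection separately is what reduces the query count when the components are loosely coupled.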


Active Learning of Mealy Machines with Timers

Bruyère, Véronique, Garhewal, Bharat, Pérez, Guillermo A., Staquet, Gaëtan, Vaandrager, Frits W.

arXiv.org Artificial Intelligence

We present the first algorithm for query learning of a general class of Mealy machines with timers (MMTs) in a black-box context. Our algorithm is an extension of the L# algorithm of Vaandrager et al. to a timed setting. Like the algorithm for learning timed automata proposed by Waga, our algorithm is inspired by ideas of Maler & Pnueli. Building on the notion of elementary languages, both Waga's algorithm and ours use symbolic queries, which are then implemented using finitely many concrete queries. However, whereas Waga needs exponentially many concrete queries to implement a single symbolic query, we only need a polynomial number. This is because in order to learn a timed automaton, a learner needs to determine the exact guard and reset for each transition (out of exponentially many possibilities), whereas for learning an MMT a learner only needs to figure out which of the preceding transitions caused a timeout. As shown in our previous work, this can be done efficiently for a subclass of MMTs that are race-avoiding: if a timeout is caused by a preceding input then a slight change in the timing of this input will induce a corresponding change in the timing of the timeout ("wiggling"). Experiments with a prototype implementation, written in Rust, show that our algorithm is able to efficiently learn realistic benchmarks.
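The timeout behaviour being learned can be illustrated with a heavily simplified single-timer machine (the semantics below is an assumption-laden toy, not the paper's MMT model: one timer, a fixed timeout of 2, and invented event names):

```python
# Toy machine with one timer: input "start" arms a timer with timeout 2;
# if the armed timer expires before the next input, a "timeout!" output
# fires. Inputs are given as (delay_since_previous_event, symbol) pairs.

def run_mmt(timed_inputs, timeout=2):
    """Return the observed outputs, inserting 'timeout!' whenever the
    armed timer expires before the next input arrives."""
    outputs, armed, remaining = [], False, None
    for delay, sym in timed_inputs:
        if armed and remaining <= delay:
            outputs.append("timeout!")   # timer fired during the wait
            armed = False
        elif armed:
            remaining -= delay
        if sym == "start":
            armed, remaining = True, timeout
            outputs.append("armed")
        else:
            outputs.append("ok")
    return outputs

print(run_mmt([(0, "start"), (1, "ping"), (3, "ping")]))
# -> ['armed', 'ok', 'timeout!', 'ok']
```

The learner's task is to discover, from such timed observations, which earlier transition armed the timer that caused each timeout.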


Correct-by-Construction Design of Contextual Robotic Missions Using Contracts

Mallozzi, Piergiuseppe, Nuzzo, Pierluigi, Piterman, Nir, Schneider, Gerardo, Pelliccione, Patrizio

arXiv.org Artificial Intelligence

Effectively specifying and implementing robotic missions poses a set of challenges to software engineering for robotic systems. These challenges stem from the need to formalize and execute a robot's high-level tasks while considering various application scenarios and conditions, also known as contexts, in real-world operational environments. Writing correct mission specifications that explicitly account for multiple contexts can be tedious and error-prone. Furthermore, as the number of contexts, and consequently the complexity of the specification, increases, generating a correct-by-construction implementation (e.g., by using synthesis methods) can become intractable. A viable approach to address these issues is to decompose the mission specification into smaller, manageable sub-missions, with each sub-mission tailored to a specific context. Nevertheless, this compositional approach introduces its own set of challenges in ensuring the overall mission's correctness. In this paper, we propose a novel compositional framework for specifying and implementing contextual robotic missions using assume-guarantee contracts. The mission specification is structured in a hierarchical and modular fashion, allowing for each sub-mission to be synthesized as an independent robot controller. We address the problem of dynamically switching between sub-mission controllers while ensuring correctness under predefined conditions.
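The refinement check at the heart of assume-guarantee reasoning can be sketched over finite sets of behaviours (a deliberately simplified model: the set-based contracts and the mission/sub-mission names below are invented, not the paper's formalism):

```python
# Contracts as (assumptions, guarantees) pairs over finite sets of behaviours.

def refines(c1, c2):
    """c1 refines c2 iff c1 tolerates at least c2's assumptions and
    promises no more than c2's guarantees (weaker A, stronger G)."""
    a1, g1 = c1
    a2, g2 = c2
    return a2 <= a1 and g1 <= g2

# Hypothetical top-level mission contract and a context-specific sub-mission.
mission  = ({"day", "night"}, {"patrol", "charge", "wait"})
sub_ctrl = ({"day", "night", "rain"}, {"patrol", "charge"})

print(refines(sub_ctrl, mission))   # True: a valid sub-mission controller
```

In the compositional framework, each context-specific controller must refine its slot in the hierarchy so that switching between sub-mission controllers preserves the overall mission's correctness.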


Dynamic Shielding for Reinforcement Learning in Black-Box Environments

Waga, Masaki, Castellano, Ezequiel, Pruekprasert, Sasinee, Klikovits, Stefan, Takisaka, Toru, Hasuo, Ichiro

arXiv.org Artificial Intelligence

It is challenging to use reinforcement learning (RL) in cyber-physical systems due to the lack of safety guarantees during learning. Although there have been various proposals to reduce undesired behaviors during learning, most of these techniques require prior system knowledge, and their applicability is limited. This paper aims to reduce undesired behaviors during learning without requiring any prior system knowledge. We propose dynamic shielding: an extension of a model-based safe RL technique called shielding using data-driven automata learning. The dynamic shielding technique constructs an approximate system model in parallel with RL using a variant of the RPNI algorithm and suppresses undesired explorations due to the shield constructed from the learned model. Through this combination, potentially unsafe actions can be foreseen before the agent experiences them. Experiments show that our dynamic shield significantly decreases the number of undesired events during training.
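The basic shielding step, intercepting an action that the (learned) model predicts leads to an unsafe state, looks roughly like this (a one-step toy: the transition table, state labels, and fallback policy are invented for illustration, and the paper's shield is built from an RPNI-learned model rather than a hand-written table):

```python
# A shield built from a safety automaton: before executing an action,
# check whether it leads to an unsafe state; if so, pick a safe fallback.

TRANS = {
    ("ok", "fast"): "danger",
    ("ok", "slow"): "ok",
    ("danger", "fast"): "crash",
    ("danger", "slow"): "ok",
}
UNSAFE = {"crash"}

def shield(state, proposed, actions=("fast", "slow")):
    """Return the proposed action if safe, else the first safe alternative."""
    if TRANS.get((state, proposed)) not in UNSAFE:
        return proposed
    for a in actions:
        if TRANS.get((state, a)) not in UNSAFE:
            return a
    return proposed   # no safe action known: fall back to the agent's choice

print(shield("ok", "fast"))       # "fast": not immediately unsafe
print(shield("danger", "fast"))   # "slow": shield overrides the unsafe action
```

In dynamic shielding, TRANS would be re-learned and refined online as the agent explores, rather than fixed in advance.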


Online Learning of Non-Markovian Reward Models

Rens, Gavin, Raskin, Jean-François, Reynouad, Raphaël, Marra, Giuseppe

arXiv.org Artificial Intelligence

There are situations in which an agent should receive rewards only after having accomplished a series of previous tasks, that is, rewards are non-Markovian. One natural and quite general way to represent history-dependent rewards is via a Mealy machine, a finite state automaton that produces output sequences from input sequences. In our formal setting, we consider a Markov decision process (MDP) that models the dynamics of the environment in which the agent evolves and a Mealy machine synchronized with this MDP to formalize the non-Markovian reward function. While the MDP is known by the agent, the reward function is unknown to the agent and must be learned. Our approach to overcome this challenge is to use Angluin's $L^*$ active learning algorithm to learn a Mealy machine representing the underlying non-Markovian reward machine (MRM). Formal methods are used to determine the optimal strategy for answering so-called membership queries posed by $L^*$. Moreover, we prove that the expected reward achieved will eventually be at least as much as a given, reasonable value provided by a domain expert. We evaluate our framework on three problems. The results show that using $L^*$ to learn an MRM in a non-Markovian reward decision process is effective.
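A non-Markovian reward represented as a Mealy machine can be demonstrated with a two-state toy (the "a then b" task and state names are invented for illustration; the paper learns such a machine with $L^*$ rather than writing it by hand):

```python
# A non-Markovian reward encoded as a Mealy machine: reward 1 is paid only
# when event "b" is observed after "a" has already occurred.
REWARD_MACHINE = {
    ("u0", "a"): ("u1", 0),
    ("u0", "b"): ("u0", 0),
    ("u1", "a"): ("u1", 0),
    ("u1", "b"): ("u0", 1),   # task "a then b" completed: pay reward
}

def accumulate(events, start="u0"):
    """Total reward collected along a sequence of observed events."""
    state, total = start, 0
    for e in events:
        state, r = REWARD_MACHINE[(state, e)]
        total += r
    return total

print(accumulate(["b", "a", "b", "a", "a", "b"]))   # -> 2
```

Because the reward depends on the machine's state, not just the current MDP state, no Markovian reward function over the MDP alone could express it.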