Kampik, Timotheus
Can Proof Assistants Verify Multi-Agent Systems?
Mendez, Julian Alfredo, Kampik, Timotheus
This paper presents the Soda language for verifying multi-agent systems. Soda is a high-level functional and object-oriented language that supports the compilation of its code not only to Scala, a strongly statically typed high-level programming language, but also to Lean, a proof assistant and programming language. Given these capabilities, Soda can implement multi-agent systems, or parts thereof, that can then be integrated into a mainstream software ecosystem on the one hand and formally verified with state-of-the-art tools on the other hand. We provide a brief and informal introduction to Soda and the aforementioned interoperability capabilities, as well as a simple demonstration of how interaction protocols can be designed and verified with Soda. In the course of the demonstration, we highlight challenges with respect to real-world applicability.
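To give a flavor of what "formally verified with state-of-the-art tools" can mean here, the following is a minimal, hypothetical Lean 4 sketch (plain Lean, not Soda-generated code): a two-state commit protocol modeled as an inductive type, with a machine-checked proof that one protocol step from any state reaches the committed state.

```lean
-- Hypothetical sketch, not the paper's Soda/Lean pipeline:
-- a two-state commit protocol and a proof about its step function.
inductive ProtocolState
  | init
  | committed

def step : ProtocolState → ProtocolState
  | ProtocolState.init      => ProtocolState.committed
  | ProtocolState.committed => ProtocolState.committed

-- After one step, every state is committed.
theorem step_committed (s : ProtocolState) :
    step s = ProtocolState.committed := by
  cases s <;> rfl
```

In the Soda workflow described above, such definitions and properties would be written once in Soda and compiled to Lean for verification and to Scala for integration.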
Disagree and Commit: Degrees of Argumentation-based Agreements
Kampik, Timotheus, Nieves, Juan Carlos
In cooperative human decision-making, agreements are often not total; a partial degree of agreement is sufficient to commit to a decision and move on, as long as one is somewhat confident that the involved parties are likely to stand by their commitment in the future, given no drastic unexpected changes. In this paper, we introduce the notion of agreement scenarios that allow artificial autonomous agents to reach such agreements, using formal models of argumentation, in particular abstract argumentation and value-based argumentation. We introduce the notions of degrees of satisfaction and (minimum, mean, and median) agreement, as well as a measure of the impact a value in a value-based argumentation framework has on these notions. We then analyze how degrees of agreement are affected when agreement scenarios are expanded with new information, to shed light on the reliability of partial agreements in dynamic scenarios. An implementation of the introduced concepts is provided as part of an argumentation-based reasoning software library.
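The notions of satisfaction and agreement can be sketched informally as follows. This is an illustrative Python sketch under assumed definitions (degree of satisfaction as the fraction of an agent's desired arguments contained in an agreed-upon extension; minimum, mean, and median agreement aggregated across agents); the function names and the exact formula are not taken from the paper.

```python
from statistics import mean, median

def degree_of_satisfaction(desired: set, extension: set) -> float:
    """Fraction of an agent's desired arguments that the agreed
    extension accepts (illustrative formula, not the paper's exact one)."""
    if not desired:
        return 1.0
    return len(desired & extension) / len(desired)

def agreement(desires: list, extension: set) -> dict:
    """Minimum, mean, and median degree of agreement across all agents."""
    degrees = [degree_of_satisfaction(d, extension) for d in desires]
    return {"min": min(degrees), "mean": mean(degrees), "median": median(degrees)}

# Two agents partially agree on the extension {a, b}:
# agent 1 desires {a, b} (fully satisfied), agent 2 desires {a, c} (half satisfied).
result = agreement([{"a", "b"}, {"a", "c"}], {"a", "b"})
```

Under these assumed definitions, a partial agreement such as `{"min": 0.5, "mean": 0.75, "median": 0.75}` may already suffice to commit to a decision, which is the intuition the abstract describes.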
Contribution Functions for Quantitative Bipolar Argumentation Graphs: A Principle-based Analysis
Kampik, Timotheus, Potyka, Nico, Yin, Xiang, Čyras, Kristijonas, Toni, Francesca
In formal argumentation, arguments and their relations are typically represented as directed graphs, in which nodes are arguments and edges are argument relationships (typically: attack or support). From these argumentation graphs, inferences about the acceptability statuses or strengths of arguments are drawn. One formal argumentation approach that is gaining increased research attention is Quantitative Bipolar Argumentation (QBA). In QBA, (typically numerical) weights - so-called initial strengths - are assigned to arguments, and arguments are connected by a support and an attack relation. Hence, arguments directly connected to a node through the node's incoming edges can be referred to as attackers and supporters (depending on the relation). Given a Quantitative Bipolar Argumentation Graph (QBAG), an argumentation semantics then infers the arguments' final strengths; intuitively, an argument's attackers tend to decrease its final strength, whereas supporters tend to increase it.
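The inference of final strengths can be illustrated with a small sketch. The graph below is hypothetical, and the semantics is one simple choice in the spirit of DF-QuAD-style gradual semantics (probabilistic-sum aggregation of attacker and supporter strengths, then a linear shift of the initial strength); it is not presented as the contribution functions analyzed in the paper, and it assumes an acyclic graph.

```python
# Hypothetical acyclic QBAG: initial strengths, attackers, supporters.
initial = {"a": 0.5, "b": 0.6, "c": 0.4}
attackers = {"a": ["b"], "b": [], "c": []}
supporters = {"a": ["c"], "b": [], "c": []}

def aggregate(values):
    # Probabilistic-sum aggregation: 1 - prod(1 - v) over the given strengths.
    result = 1.0
    for v in values:
        result *= 1.0 - v
    return 1.0 - result

def final_strength(arg):
    s0 = initial[arg]
    va = aggregate([final_strength(x) for x in attackers[arg]])
    vs = aggregate([final_strength(x) for x in supporters[arg]])
    if va > vs:
        return s0 - s0 * (va - vs)          # attack dominates: move toward 0
    if vs > va:
        return s0 + (1 - s0) * (vs - va)    # support dominates: move toward 1
    return s0                               # balanced: keep initial strength
```

Here argument `a` starts at 0.5, is attacked by `b` (strength 0.6) and supported by `c` (strength 0.4), so its final strength drops below its initial strength, matching the intuition stated above.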
Timeline-based Process Discovery
Kaur, Harleen, Mendling, Jan, Rubensson, Christoffer, Kampik, Timotheus
A key concern of automatic process discovery is to provide insights into performance aspects of business processes. Waiting times are of particular importance in this context. For that reason, it is surprising that current techniques for automatic process discovery generate directly-follows graphs and comparable process models, but often miss the opportunity to explicitly represent the time axis. In this paper, we present an approach for automatically constructing process models that explicitly align with a time axis. We exemplify our approach for directly-follows graphs. Our evaluation using two BPIC datasets and a proprietary dataset highlights the benefits of this representation in comparison to standard layout techniques.
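The waiting times that a time-axis-aligned directly-follows graph could place on its edges can be extracted from an event log along these lines. This is a simplified sketch with a hypothetical toy log; real process discovery distinguishes start and completion events, whereas this version measures the gap between consecutive completion timestamps per case.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (case id, activity, completion timestamp).
log = [
    ("c1", "register", datetime(2023, 1, 1, 9, 0)),
    ("c1", "approve",  datetime(2023, 1, 1, 11, 0)),
    ("c2", "register", datetime(2023, 1, 2, 9, 0)),
    ("c2", "approve",  datetime(2023, 1, 2, 13, 0)),
]

def mean_waiting_times(log):
    """Mean time (in hours) between directly-following activities,
    averaged over all cases in which the pair occurs."""
    by_case = defaultdict(list)
    for case, activity, ts in sorted(log, key=lambda e: (e[0], e[2])):
        by_case[case].append((activity, ts))
    gaps = defaultdict(list)
    for events in by_case.values():
        for (a, t1), (b, t2) in zip(events, events[1:]):
            gaps[(a, b)].append((t2 - t1).total_seconds() / 3600)
    return {pair: sum(v) / len(v) for pair, v in gaps.items()}
```

For the toy log above, `register` is directly followed by `approve` after 2 hours in one case and 4 hours in the other, so the mean waiting time on that edge is 3 hours.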
Large Process Models: Business Process Management in the Age of Generative AI
Kampik, Timotheus, Warmuth, Christian, Rebmann, Adrian, Agam, Ron, Egger, Lukas N. P., Gerber, Andreas, Hoffart, Johannes, Kolk, Jonas, Herzig, Philipp, Decker, Gero, van der Aa, Han, Polyvyanyy, Artem, Rinderle-Ma, Stefanie, Weber, Ingo, Weidlich, Matthias
The continued success of Large Language Models (LLMs) and other generative artificial intelligence approaches highlights the advantages that large information corpora can have over rigidly defined symbolic models, but also serves as a proof-point of the challenges that purely statistics-based approaches face in terms of safety and trustworthiness. As a framework for contextualizing the potential, as well as the limitations, of LLMs and other foundation model-based technologies, we propose the concept of a Large Process Model (LPM) that combines the correlation power of LLMs with the analytical precision and reliability of knowledge-based systems and automated reasoning approaches. LPMs are envisioned to directly utilize the wealth of process management experience that experts have accumulated, as well as process performance data of organizations with diverse characteristics, e.g., regarding size, region, or industry. In this vision, the proposed LPM would allow organizations to receive context-specific (tailored) process and other business models, analytical deep-dives, and improvement recommendations. As such, LPMs would substantially decrease the time and effort required for business transformation, while also allowing for deeper, more impactful, and more actionable insights than previously possible. We argue that implementing an LPM is feasible, but also highlight limitations and research challenges that need to be solved to implement particular aspects of the LPM vision.
ACROCPoLis: A Descriptive Framework for Making Sense of Fairness
Tubella, Andrea Aler, Mollo, Dimitri Coelho, Lindström, Adam Dahlgren, Devinney, Hannah, Dignum, Virginia, Ericson, Petter, Jonsson, Anna, Kampik, Timotheus, Lenaerts, Tom, Mendez, Julian Alfredo, Nieves, Juan Carlos
Fairness is central to the ethical and responsible development and use of AI systems, with a large number of frameworks and formal notions of algorithmic fairness being available. However, many of the fairness solutions proposed revolve around technical considerations and not the needs of and consequences for the most impacted communities. We therefore want to take the focus away from definitions and allow for the inclusion of societal and relational aspects to represent how the effects of AI systems impact and are experienced by individuals and social groups. In this paper, we do this by means of proposing the ACROCPoLis framework to represent allocation processes with a modeling emphasis on fairness aspects. The framework provides a shared vocabulary in which the factors relevant to fairness assessments for different situations and procedures are made explicit, as well as their interrelationships. This enables us to compare analogous situations, to highlight the differences in dissimilar situations, and to capture differing interpretations of the same situation by different stakeholders.
Conversational Process Modelling: State of the Art, Applications, and Implications in Practice
Klievtsova, Nataliia, Benzin, Janik-Vasily, Kampik, Timotheus, Mangler, Juergen, Rinderle-Ma, Stefanie
Chatbots such as ChatGPT have recently generated tremendous hype. For BPM applications, it is often not clear how to apply chatbots to generate business value. Hence, this work aims at a systematic analysis of existing chatbots for their support of conversational process modelling as a process-oriented capability. Application scenarios are identified along the process life cycle. Then, a systematic literature review on conversational process modelling is performed. The resulting taxonomy serves as input for the identification of application scenarios for conversational process modelling, including the paraphrasing and improvement of process descriptions. The application scenarios are evaluated for existing chatbots based on a real-world test set from the higher education domain, which contains process descriptions as well as corresponding process models, together with an assessment of the model quality. Based on the literature and application scenario analyses, recommendations for the usage (practical implications) and further development (research directions) of conversational process modelling are derived.
Online Handbook of Argumentation for AI: Volume 2
OHAAI Collaboration, Brannstrom, Andreas, Castagna, Federico, Duchatelle, Theo, Foulis, Matt, Kampik, Timotheus, Kuhlmann, Isabelle, Malmqvist, Lars, Morveli-Espinoza, Mariela, Mumford, Jack, Pandzic, Stipe, Schaefer, Robin, Thorburn, Luke, Xydis, Andreas, Yuste-Ginel, Antonio, Zheng, Heng
This volume contains revised versions of the papers selected for the second volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open access and curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI.
Online Handbook of Argumentation for AI: Volume 1
OHAAI Collaboration, Castagna, Federico, Kampik, Timotheus, Zafarghandi, Atefeh Keshavarzi, Lafages, Mickaël, Mumford, Jack, Rodosthenous, Christos T., Sá, Samy, Sarkadi, Stefan, Singleton, Joseph, Skiba, Kenneth, Xydis, Andreas
This volume contains revised versions of the papers selected for the first volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open access and curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI.
Towards Explainability for a Civilian UAV Fleet Management using an Agent-based Approach
Mualla, Yazan, Najjar, Amro, Kampik, Timotheus, Tchappi, Igor, Galland, Stéphane, Nicolle, Christophe
This paper presents an initial design concept and specification of a civilian Unmanned Aerial Vehicle (UAV) management simulation system that focuses on explainability for the human-in-the-loop control of semi-autonomous UAVs. The goal of the system is to facilitate operator intervention in critical scenarios (e.g., to avoid safety issues or financial risks). Explainability is supported via user-friendly abstractions on Belief-Desire-Intention agents. To evaluate the effectiveness of the system, a human-computer interaction study is proposed.
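The Belief-Desire-Intention abstraction mentioned above can be sketched as a minimal deliberation loop. This is an illustrative sketch only; the class, belief names, and priority rules below are hypothetical and not taken from the paper's specification.

```python
# Hypothetical minimal BDI-style control loop for a single UAV agent.
class UavAgent:
    def __init__(self):
        # Beliefs: the agent's current view of the world.
        self.beliefs = {"battery": 100, "at_destination": False}
        self.intention = None

    def deliberate(self):
        # Desires ranked by priority: safety first, then the mission.
        if self.beliefs["battery"] < 20:
            self.intention = "return_to_base"
        elif not self.beliefs["at_destination"]:
            self.intention = "fly_to_destination"
        else:
            self.intention = "deliver_package"

    def explain(self):
        # A user-facing explanation derived from beliefs and intention,
        # the kind of abstraction an operator interface could surface.
        return f"Intention '{self.intention}' chosen given beliefs {self.beliefs}"

agent = UavAgent()
agent.beliefs["battery"] = 15  # a critical scenario: battery running low
agent.deliberate()
```

Because the intention is derived from explicitly represented beliefs, the agent can report not only what it will do (`return_to_base`) but also which belief triggered that choice, which is the kind of explanation a human-in-the-loop operator could act on.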