Lorini, Emiliano
Rational Capability in Concurrent Games
Li, Yinfeng, Lorini, Emiliano, Mittelmann, Munyque
We extend concurrent game structures (CGSs) with a simple notion of preference over computations and define a minimal notion of rationality for agents based on the concept of dominance. We use this notion to interpret CL (Coalition Logic) and ATL (Alternating-time Temporal Logic) languages that extend the basic CL and ATL languages with modalities for rational capability, namely a coalition's capability to rationally enforce a given property. For each of these languages, we provide results on the complexity of satisfiability checking and model checking, as well as on axiomatization.
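To make the dominance idea concrete, the following minimal Python sketch illustrates, in a one-shot toy game rather than a CGS with computations, what it could mean for a coalition to rationally enforce a property: its members use only non-dominated actions and the property holds against every non-dominated choice of the remaining agents. All names (can_rationally_enforce, rational_actions, the example game) are illustrative and not taken from the paper.

```python
from itertools import product

# Informal one-shot illustration of dominance-based rationality (hypothetical
# helper names; the paper works with CGSs, computations and ATL-style operators).

agents = ["1", "2"]
actions = {"1": ["a", "b"], "2": ["c", "d"]}

# outcome[(act_1, act_2)] -> outcome label; pref[agent][outcome] -> rank (higher is better)
outcome = {("a", "c"): "w1", ("a", "d"): "w2",
           ("b", "c"): "w3", ("b", "d"): "w4"}
pref = {"1": {"w1": 2, "w2": 2, "w3": 1, "w4": 0},
        "2": {"w1": 1, "w2": 2, "w3": 2, "w4": 1}}

def others_profiles(i):
    """All joint choices of the agents other than i, as dicts."""
    others = [j for j in agents if j != i]
    return [dict(zip(others, combo)) for combo in product(*(actions[j] for j in others))]

def play(profile):
    return outcome[tuple(profile[j] for j in agents)]

def dominated(i, a):
    """a is weakly dominated for i if some b does at least as well against every
    counter-profile and strictly better against at least one."""
    for b in actions[i]:
        if b == a:
            continue
        at_least = all(pref[i][play({**p, i: b})] >= pref[i][play({**p, i: a})]
                       for p in others_profiles(i))
        strictly = any(pref[i][play({**p, i: b})] > pref[i][play({**p, i: a})]
                       for p in others_profiles(i))
        if at_least and strictly:
            return True
    return False

def rational_actions(i):
    return [a for a in actions[i] if not dominated(i, a)]

def can_rationally_enforce(coalition, goal):
    """The coalition rationally enforces `goal` (a set of outcomes) if some choice of
    non-dominated actions for its members guarantees the goal against every
    non-dominated choice of the remaining agents."""
    outsiders = [j for j in agents if j not in coalition]
    for own in product(*(rational_actions(i) for i in coalition)):
        fixed = dict(zip(coalition, own))
        if all(play({**fixed, **dict(zip(outsiders, rest))}) in goal
               for rest in product(*(rational_actions(j) for j in outsiders))):
            return True
    return False

print(rational_actions("1"), rational_actions("2"))   # ['a'] ['c', 'd']
print(can_rationally_enforce(["1"], {"w1", "w2"}))    # True: agent 1 rationally enforces {w1, w2}
```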
A Computationally Grounded Framework for Cognitive Attitudes (extended version)
de Lima, Tiago, Lorini, Emiliano, Perrotin, Elise, Schwarzentruber, François
We introduce a novel language for reasoning about agents' cognitive attitudes of both epistemic and motivational types. We interpret it by means of a computationally grounded semantics using belief bases. Our language includes five types of modal operators for implicit belief, complete attraction, complete repulsion, realistic attraction and realistic repulsion. We give an axiomatization and show that our operators are not mutually expressible and that they can be combined to represent a large variety of psychological concepts including ambivalence, indifference, being motivated, being demotivated and preference. We present a dynamic extension of the language that supports reasoning about the effects of belief change operations. Finally, we provide a succinct formulation of model checking for our languages and a PSPACE model checking algorithm relying on a reduction to TQBF. We also report experimental results on the computation time of the implemented algorithm in a concrete example.
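The belief-base idea behind the semantics can be illustrated with a small single-agent sketch: the agent's explicit base is a finite set of formulas, and she implicitly believes a formula exactly when it holds in every valuation compatible with that base. The helper names below are hypothetical, and the sketch ignores the multi-agent and motivational parts of the framework.

```python
from itertools import product

# Minimal single-agent sketch of the belief-base idea (hypothetical helper names):
# the agent *implicitly* believes phi iff phi holds in every valuation compatible
# with her explicit base. The paper's semantics is richer (motivational operators,
# dynamics, and a TQBF-based model checking algorithm).

ATOMS = ["p", "q", "r"]

# Formulas as nested tuples: ("atom", "p"), ("not", f), ("and", f, g), ("or", f, g)
def holds(f, val):
    op = f[0]
    if op == "atom": return val[f[1]]
    if op == "not":  return not holds(f[1], val)
    if op == "and":  return holds(f[1], val) and holds(f[2], val)
    if op == "or":   return holds(f[1], val) or holds(f[2], val)
    raise ValueError(op)

def valuations():
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def implicitly_believes(base, phi):
    """True iff phi holds in every valuation satisfying all formulas of the base."""
    compatible = [v for v in valuations() if all(holds(b, v) for b in base)]
    return all(holds(phi, v) for v in compatible)

base = [("atom", "p"), ("or", ("atom", "q"), ("atom", "r"))]            # explicit beliefs
print(implicitly_believes(base, ("or", ("atom", "p"), ("atom", "q"))))  # True: follows from the base
print(implicitly_believes(base, ("atom", "q")))                          # False
```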
Responsibility in a Multi-Value Strategic Setting
Parker, Timothy, Grandi, Umberto, Lorini, Emiliano
Responsibility is a key notion in multi-agent systems and in creating safe, reliable and ethical AI. In particular, the evaluation of choices based on responsibility is useful for making robustly good decisions in unpredictable domains. However, most previous work on responsibility has considered responsibility only for single outcomes, which limits its application. In this paper we present a model for responsibility attribution in a multi-agent, multi-value setting. We also extend our model to cover responsibility anticipation, demonstrating how considerations of responsibility can help an agent select strategies that are in line with its values. In particular, we show that non-dominated regret-minimising strategies reliably minimise an agent's expected degree of responsibility.
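The decision rule referred to in the final claim can be sketched numerically: among the agent's non-dominated strategies, keep those with minimal worst-case regret. The payoff matrix and names below are purely illustrative; the paper's setting is strategic and multi-value, and works with degrees of responsibility rather than raw payoffs.

```python
# Toy sketch of non-dominated, regret-minimising strategy selection (hypothetical names).
# payoff[strategy][context] stands in for how well each strategy serves the agent's
# values in each possible context.

payoff = {
    "s1": {"c1": 3, "c2": 0},
    "s2": {"c1": 2, "c2": 2},
    "s3": {"c1": 1, "c2": 1},
}
contexts = ["c1", "c2"]

def dominated(s):
    """s is weakly dominated if some other strategy is never worse and sometimes better."""
    return any(all(payoff[t][c] >= payoff[s][c] for c in contexts) and
               any(payoff[t][c] > payoff[s][c] for c in contexts)
               for t in payoff if t != s)

def max_regret(s):
    """Worst-case regret: largest shortfall from the best achievable payoff per context."""
    return max(max(payoff[t][c] for t in payoff) - payoff[s][c] for c in contexts)

candidates = [s for s in payoff if not dominated(s)]
best = min(max_regret(s) for s in candidates)
print([s for s in candidates if max_regret(s) == best])   # ['s2']
```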
Computational Grounding of Responsibility Attribution and Anticipation in LTLf
De Giacomo, Giuseppe, Lorini, Emiliano, Parker, Timothy, Parretti, Gianmarco
Responsibility is one of the key notions in machine ethics and in the area of autonomous systems. It is a multi-faceted notion involving counterfactual reasoning about actions and strategies. In this paper, we study different variants of responsibility in a strategic setting based on LTLf. We show a connection with notions in reactive synthesis, including synthesis of winning, dominant, and best-effort strategies. This connection provides the building blocks for a computational grounding of responsibility including complexity characterizations and sound, complete, and optimal algorithms for attributing and anticipating responsibility.
Anticipating Responsibility in Multiagent Planning
Parker, Timothy, Grandi, Umberto, Lorini, Emiliano
Responsibility anticipation is the process of determining whether the actions of an individual agent may cause it to be responsible for a particular outcome. In a multi-agent planning setting, this allows agents to anticipate responsibility in the plans they consider. The planning setting in this paper includes partial information regarding the initial state and considers formulas in linear temporal logic as positive or negative outcomes to be attained or avoided. We first define attribution for the notions of active, passive and contributive responsibility, and consider their agentive variants. We then use these to define the notion of responsibility anticipation. We prove that our notions of anticipated responsibility can be used to coordinate agents in a planning setting, give complexity results for our model, and discuss the equivalence with classical planning. We also present an outline for solving some of our attribution and anticipation problems using PDDL solvers.
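A deliberately simplified, one-shot reading of two of these notions can be sketched as follows (hypothetical names; the paper's definitions are over multi-agent plans, LTL outcome formulas and partial information about the initial state): an agent is actively responsible when its own choice guaranteed the unwanted outcome, and passively responsible when it had an alternative that would have avoided the outcome for sure.

```python
from itertools import product

# Simplified one-shot sketch of active and passive responsibility attribution
# (hypothetical names; not the paper's plan-based, LTL-based definitions).

agents = ["1", "2"]
actions = {"1": ["a", "b"], "2": ["c", "d"]}
# bad[(act_1, act_2)] is True when the joint action brings about the unwanted outcome.
bad = {("a", "c"): True, ("a", "d"): True, ("b", "c"): True, ("b", "d"): False}

def counter_profiles(i):
    others = [j for j in agents if j != i]
    return [dict(zip(others, combo)) for combo in product(*(actions[j] for j in others))]

def occurs(profile):
    return bad[tuple(profile[j] for j in agents)]

def actively_responsible(i, chosen):
    """The outcome occurred and i's own choice was enough to guarantee it."""
    return occurs(chosen) and all(occurs({**p, i: chosen[i]}) for p in counter_profiles(i))

def passively_responsible(i, chosen):
    """The outcome occurred and i had an alternative that would have avoided it for sure."""
    return occurs(chosen) and any(
        all(not occurs({**p, i: alt}) for p in counter_profiles(i))
        for alt in actions[i] if alt != chosen[i])

chosen = {"1": "a", "2": "c"}
print(actively_responsible("1", chosen))   # True: playing a forces the outcome
print(passively_responsible("1", chosen))  # False: b does not avoid it against c
print(passively_responsible("2", chosen))  # False: d does not avoid it against a
```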
Base-based Model Checking for Multi-Agent Only Believing (long version)
de Lima, Tiago, Lorini, Emiliano, Schwarzentruber, François
We present a novel semantics for the language of multi-agent only believing that exploits belief bases, and show how to use it for automatically checking formulas of this language and of its dynamic extension with private belief expansion operators. We provide a PSPACE model checking algorithm relying on a reduction to QBF, and an alternative dedicated algorithm relying on exploration of the state space. We present an implementation of the QBF-based algorithm and some experimental results on computation time in a concrete example.
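Extending the single-agent sketch above to the multi-agent case, the following brute-force exploration illustrates how epistemic alternatives can be derived from belief bases and how a nested implicit-belief formula is checked over the resulting state space. Bases are restricted to propositional formulas, all names are hypothetical, and neither the only-believing operator nor the QBF reduction is reproduced.

```python
from itertools import product

# Brute-force sketch of belief-base-derived epistemic alternatives and nested belief
# (hypothetical names, bases restricted to propositional formulas; the paper's
# QBF-based and dedicated algorithms are far more economical).

ATOMS = ["p", "q"]

def holds_prop(f, val):          # formulas: atoms as strings, ("not", f), ("and", f, g)
    if isinstance(f, str): return val[f]
    if f[0] == "not": return not holds_prop(f[1], val)
    if f[0] == "and": return holds_prop(f[1], val) and holds_prop(f[2], val)
    raise ValueError(f)

# A world is a pair (belief-base profile, valuation). Agent 1's base is fixed here,
# while two candidate bases for agent 2 are combined with every valuation.
base_1 = frozenset({"p"})
candidate_bases_2 = [frozenset({"q"}), frozenset({("not", "q")})]
worlds = [({"1": base_1, "2": b2}, dict(zip(ATOMS, bits)))
          for b2 in candidate_bases_2
          for bits in product([False, True], repeat=len(ATOMS))]

def alternatives(i, world):
    """Worlds compatible with agent i's base at `world` (derived, not primitive)."""
    bases, _ = world
    return [w for w in worlds if all(holds_prop(a, w[1]) for a in bases[i])]

def check(phi, world):
    """phi is propositional, or ("B", i, psi) for 'agent i implicitly believes psi'."""
    if isinstance(phi, tuple) and phi[0] == "B":
        return all(check(phi[2], w) for w in alternatives(phi[1], world))
    return holds_prop(phi, world[1])

w0 = worlds[0]
print(check(("B", "1", "p"), w0))              # True: every world compatible with {p} satisfies p
print(check(("B", "1", ("B", "2", "q")), w0))  # False: some of 1's alternatives give 2 the base {not q}
```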
Modelling and Explaining Legal Case-based Reasoners through Classifiers
Liu, Xinghan, Lorini, Emiliano, Rotolo, Antonino, Sartor, Giovanni
This paper brings together two lines of research: factor-based models of case-based reasoning (CBR) and the logical specification of classifiers. Logical approaches to classifiers capture the connection between features and outcomes in classifier systems. Factor-based reasoning is a popular approach to reasoning by precedent in AI & Law. Horty (2011) developed factor-based models of precedent into a theory of precedential constraint. In this paper we combine the modal logic approach to binary-input classifiers and their explanations (BCL) given by Liu & Lorini (2021) with Horty's account of factor-based CBR, since both a classifier and CBR map sets of features to decisions or classifications. We reformulate Horty's case bases in the language of BCL and give several representation results. Furthermore, we show how notions of CBR, e.g. reasons and preference between reasons, can be analyzed in terms of notions of classifier systems.
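A compact reading of Horty-style precedential constraint, the starting point that the paper reformulates in BCL, is the a fortiori (result-model) test sketched below; the function names are hypothetical and the BCL reformulation itself is not attempted here.

```python
# Sketch of an a fortiori / result-model reading of precedential constraint
# (hypothetical names). A case lists the pro-plaintiff ("pi") and pro-defendant
# ("delta") factors that were present, plus the side that won.

case_base = [
    {"pi": {"f1", "f2"}, "delta": {"f5"}, "outcome": "pi"},      # plaintiff won
    {"pi": {"f3"}, "delta": {"f5", "f6"}, "outcome": "delta"},   # defendant won
]

def other(side):
    return "delta" if side == "pi" else "pi"

def forces(new, side):
    """The case base forces deciding `new` for `side` if some precedent won by `side`
    is at most as strong for `side` and at least as strong for the other side."""
    return any(c["outcome"] == side
               and c[side] <= new[side]                 # every winning factor of the precedent recurs
               and new[other(side)] <= c[other(side)]   # and no new factor favours the loser
               for c in case_base)

def permissible(new, side):
    return not forces(new, other(side))

new_case = {"pi": {"f1", "f2", "f4"}, "delta": {"f5"}}
print(forces(new_case, "pi"))          # True: a fortiori stronger for the plaintiff than the first precedent
print(permissible(new_case, "delta"))  # False: deciding for the defendant would violate the constraint
```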
A logic for binary classifiers and their explanation
Liu, Xinghan, Lorini, Emiliano
Recent years have witnessed renewed interest in Boolean functions for explaining binary classifiers in the field of explainable AI (XAI). The standard language for representing Boolean functions is propositional logic. We present a modal language of a ceteris paribus nature that supports reasoning about binary classifiers and their properties. We study families of decision models for binary classifiers, axiomatize them and show the completeness of our axiomatics. Moreover, we prove that satisfiability checking for the variant of our modal language with finitely many propositional atoms, interpreted over these models, is NP-complete. We leverage the language to formalize counterfactual conditionals as well as a variety of notions of explanation, including abductive, contrastive and counterfactual explanations, and biases. Finally, we present two extensions of our language: a dynamic extension with the notion of assignment, enabling classifier change, and an epistemic extension in which the classifier's uncertainty about the actual input can be represented.
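Two of the explanation notions formalized in the paper, abductive and contrastive explanations, can be illustrated by brute force on a toy classifier: an abductive explanation is a minimal set of features that, fixed as in the instance, suffices for the verdict, while a contrastive explanation is a minimal set of features whose change alone can flip it. The helper names below are hypothetical and the paper's modal, ceteris paribus treatment is not reproduced.

```python
from itertools import combinations, product

# Brute-force illustration of abductive and contrastive explanations of a toy
# binary classifier (hypothetical helper names).

FEATURES = ["x1", "x2", "x3"]

def classifier(v):                      # toy classifier: x1 AND (x2 OR x3)
    return v["x1"] and (v["x2"] or v["x3"])

def instances():
    for bits in product([False, True], repeat=len(FEATURES)):
        yield dict(zip(FEATURES, bits))

def sufficient(instance, subset):
    """Fixing `subset` as in `instance` forces the same verdict on every completion."""
    verdict = classifier(instance)
    return all(classifier(v) == verdict
               for v in instances() if all(v[f] == instance[f] for f in subset))

def abductive_explanations(instance):
    return [set(s) for k in range(len(FEATURES) + 1) for s in combinations(FEATURES, k)
            if sufficient(instance, s) and not any(sufficient(instance, set(s) - {f}) for f in s)]

def contrastive_explanations(instance):
    verdict = classifier(instance)
    def flips(subset):
        return any(classifier(v) != verdict
                   for v in instances()
                   if all(v[f] == instance[f] for f in FEATURES if f not in subset))
    return [set(s) for k in range(1, len(FEATURES) + 1) for s in combinations(FEATURES, k)
            if flips(s) and not any(flips(set(s) - {f}) for f in s)]

x = {"x1": True, "x2": True, "x3": False}   # classified True
print(abductive_explanations(x))            # [{'x1', 'x2'}]
print(contrastive_explanations(x))          # [{'x1'}, {'x2'}]
```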
A Qualitative Theory of Cognitive Attitudes and their Change
Lorini, Emiliano
Since the seminal work of Hintikka on epistemic logic [28], of von Wright on the logic of preference [55, 56] and of Cohen & Levesque on the logic of intention [19], many formal logics for reasoning about cognitive attitudes of agents such as knowledge and belief [24], preference [32, 48], desire [23], intention [44, 30] and their combination [38, 54] have been proposed. Generally speaking, these logics are essentially formal models of rational agency relying on the idea that an agent endowed with cognitive attitudes makes decisions on the basis of what she believes and of what she desires or prefers. The idea of describing rational agents in terms of their epistemic and motivational attitudes is something that these logics share with classical decision theory and game theory. Classical decision theory and game theory provide a quantitative account of individual and strategic decision-making by assuming that agents' beliefs and desires can be modeled, respectively, by subjective probabilities and utilities. Qualitative approaches to individual and strategic decision-making have been proposed in AI [16, 22] to characterize the criteria that a rational agent should adopt for making decisions when she cannot build a probability distribution over the set of possible events and her preference over the set of possible outcomes cannot be expressed by a utility function but only by a qualitative ordering over the outcomes.
Exploiting Belief Bases for Building Rich Epistemic Structures
Lorini, Emiliano
We introduce a semantics for epistemic logic that exploits a belief base abstraction. Unlike existing Kripke-style semantics for epistemic logic, in which the notions of possible world and epistemic alternative are primitive, in the proposed semantics they are non-primitive and are defined from the concept of belief base. We show that this semantics allows us to define the universal epistemic model in a simpler and more compact way than existing inductive constructions of it. We provide (i) a number of semantic equivalence results for both the basic epistemic language with "individual belief" operators and its extension with the notion of "only believing", and (ii) a complexity lower bound for epistemic logic model checking relative to the universal epistemic model.