Blameworthiness in Strategic Games

arXiv.org Artificial Intelligence

There are multiple notions of coalitional responsibility. The focus of this paper is on the blameworthiness defined through the principle of alternative possibilities: a coalition is blamable for a statement if the statement is true, but the coalition had a strategy to prevent it. The main technical result is a sound and complete bimodal logical system that describes properties of blameworthiness in one-shot games.
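The definition used here can be made concrete with a small sketch. The following Python fragment checks the principle of alternative possibilities in a finite one-shot strategic game; the game, the action sets, and the example statement are illustrative assumptions rather than material from the paper.

    from itertools import product

    def blamable(coalition, statement, agents, actions, outcome, played):
        """Coalition is blamable for `statement` if the statement is true under the
        profile actually played, yet the coalition had a joint action that would
        have made it false no matter what the other agents did."""
        if not statement(outcome(played)):
            return False  # nothing to blame for: the statement is false
        others = [a for a in agents if a not in coalition]
        for joint in product(*(actions[a] for a in coalition)):
            fixed = dict(zip(coalition, joint))
            if all(not statement(outcome({**fixed, **dict(zip(others, rest))}))
                   for rest in product(*(actions[a] for a in others))):
                return True  # a preventing joint action existed
        return False

    # Toy two-agent game; the statement is "the sum of the chosen numbers is even".
    agents = ["a", "b"]
    actions = {"a": [0, 1], "b": [0, 1]}
    outcome = lambda profile: profile["a"] + profile["b"]
    even = lambda s: s % 2 == 0
    played = {"a": 1, "b": 1}  # the sum is even, so the statement holds
    print(blamable(["a"], even, agents, actions, outcome, played))       # False: b could still make it even
    print(blamable(["a", "b"], even, agents, actions, outcome, played))  # True: together they could have made it odd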


The Limits of Morality in Strategic Games

arXiv.org Artificial Intelligence

A coalition is blameable for an outcome if the coalition had a strategy to prevent it. It has been previously suggested that the cost of prevention, or the cost of sacrifice, can be used to measure the degree of blameworthiness. The paper adopts this approach and proposes a modal logical system for reasoning about the degree of blameworthiness. The main technical result is a completeness theorem for the proposed system.
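As a rough illustration of grading blame by the cost of prevention, the sketch below takes the degree of blameworthiness of a coalition to be the least total cost of a joint action that would have prevented the statement. The cost function, the toy game, and this particular aggregation are assumptions made for the example, not the paper's exact definitions.

    from itertools import product
    from math import inf

    def degree_of_blame(coalition, statement, agents, actions, cost, outcome, played):
        """Smallest total cost of a joint action of the coalition that would have
        prevented the statement; 0 if the statement is false or cannot be prevented."""
        if not statement(outcome(played)):
            return 0
        others = [a for a in agents if a not in coalition]
        best = inf
        for joint in product(*(actions[a] for a in coalition)):
            fixed = dict(zip(coalition, joint))
            prevents = all(not statement(outcome({**fixed, **dict(zip(others, rest))}))
                           for rest in product(*(actions[a] for a in others)))
            if prevents:
                best = min(best, sum(cost[a][act] for a, act in fixed.items()))
        return 0 if best == inf else best

    # Toy game: the lone agent could have prevented the statement at a cost of 3.
    agents = ["a"]
    actions = {"a": ["allow", "prevent"]}
    cost = {"a": {"allow": 0, "prevent": 3}}
    outcome = lambda profile: profile["a"]
    statement = lambda s: s == "allow"
    print(degree_of_blame(["a"], statement, agents, actions, cost, outcome, {"a": "allow"}))  # 3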


Duty to Warn in Strategic Games

arXiv.org Artificial Intelligence

The paper investigates the second-order blameworthiness, or duty to warn, modality: "one coalition knew how another coalition could have prevented an outcome". The main technical result is a sound and complete logical system that describes the interplay between the distributed knowledge modality and the duty to warn modality.
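One way to read the informal description is: coalition C had a duty to warn coalition D about an outcome if there is a single joint action of D that prevents the outcome in every state C cannot distributively tell apart from the actual one. The sketch below checks that reading on a toy epistemic model; the model, the prevention relation, and this particular reading are assumptions for illustration.

    from itertools import product

    def distributed_cell(coalition, state, indist):
        """States every member of the coalition still considers possible: distributed
        knowledge corresponds to the intersection of individual indistinguishability."""
        return set.intersection(*(set(indist[a][state]) for a in coalition))

    def knew_how_to_prevent(C, D, statement, actions, prevented, indist, actual):
        """C distributively knew a joint action by which D could have prevented the statement."""
        if not statement(actual):
            return False
        for joint in product(*(actions[a] for a in D)):
            if all(prevented(s, dict(zip(D, joint)))
                   for s in distributed_cell(C, actual, indist)):
                return True
        return False

    # Toy model: agent "c" can tell the two states apart, agent "d" cannot.
    indist = {"c": {"w1": ["w1"], "w2": ["w2"]},
              "d": {"w1": ["w1", "w2"], "w2": ["w1", "w2"]}}
    actions = {"d": ["wait", "brake"]}
    statement = lambda s: True                          # the bad outcome did occur
    prevented = lambda s, joint: joint["d"] == "brake"  # braking prevents it in every state
    print(knew_how_to_prevent(["c"], ["d"], statement, actions, prevented, indist, "w1"))  # True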


Strategic Coalitions in Stochastic Games

arXiv.org Artificial Intelligence

The article introduces a notion of a stochastic game with failure states and proposes two logical systems with the modality "coalition has a strategy to transition to a non-failure state with a given probability while achieving a given goal." The logical properties of this modality depend on whether the modal language allows the empty coalition. The main technical results are a completeness theorem for the logical system with the empty coalition, a strong completeness theorem for the logical system without the empty coalition, and an incompleteness theorem showing that there is no strongly complete logical system in the language with the empty coalition.

1. Introduction

In this article we study coalition power in stochastic games. An example of such a game is the road situation depicted in Figure 1. In this situation, self-driving car a is trying to pass self-driving car b. Unexpectedly, a truck moving in the opposite direction appears on the road. For the sake of simplicity, we assume that cars a and b have only three strategies: slow down, maintain the current speed, and accelerate. We also assume that the truck is too heavy to significantly change its speed before a possible collision. The diagram in Figure 2 describes the probabilities of the different outcomes for all possible combinations of actions of cars a and b. This diagram has five states: state p is the current ("passing") state of the system.
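To make the quoted modality concrete, the sketch below checks whether a coalition can fix a joint action so that, whatever the remaining agents do, the next state avoids failure and satisfies the goal with probability at least p. This reading of "with a given probability while achieving a given goal" as a single joint threshold, and all transition probabilities in the toy passing scenario, are assumptions made for illustration; they are not the probabilities from Figure 2.

    from itertools import product

    def can_guarantee(C, p, goal, agents, actions, transition, state, failure):
        """True if coalition C has a joint action under which, for every action of
        the other agents, the next state avoids failure and satisfies the goal
        with probability at least p."""
        others = [a for a in agents if a not in C]
        for joint in product(*(actions[a] for a in C)):
            fixed = dict(zip(C, joint))
            if all(sum(q for nxt, q in transition(state, {**fixed, **dict(zip(others, rest))}).items()
                       if nxt not in failure and nxt in goal) >= p
                   for rest in product(*(actions[a] for a in others))):
                return True
        return False

    # Hypothetical passing scenario: if car "a" slows down, the system falls back
    # to the safe state "behind" with probability 0.9, regardless of what "b" does.
    agents = ["a", "b"]
    actions = {"a": ["slow", "keep"], "b": ["slow", "keep"]}
    def transition(state, profile):
        if profile["a"] == "slow":
            return {"behind": 0.9, "collision": 0.1}
        return {"behind": 0.4, "collision": 0.6}
    print(can_guarantee(["a"], 0.8, {"behind"}, agents, actions, transition, "passing", {"collision"}))  # True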


Strategic Coalitions With Perfect Recall

AAAI Conferences

The paper proposes a bimodal logic that describes the interplay between the distributed knowledge modality and the coalition know-how modality. Unlike other similar systems, the one proposed here assumes perfect recall by all agents. Perfect recall is captured in the system by a single axiom. The main technical results are the soundness and completeness theorems for the proposed logical system.
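For context, the sketch below shows one standard way to model perfect recall and distributed knowledge over play histories: an agent with perfect recall cannot distinguish two histories unless its own observations agree at every step, and a coalition distributively knows a fact when it holds on every history that no member can rule out. The observation function and the histories are illustrative assumptions; the paper's single perfect-recall axiom is not reproduced here.

    def indistinguishable(agent, h1, h2, observe):
        """Perfect recall: the agent remembers its whole observation sequence, so
        two histories look the same only if the observations match step by step."""
        return [observe(agent, s) for s in h1] == [observe(agent, s) for s in h2]

    def distributed_knows(coalition, fact, actual, histories, observe):
        """The fact holds on every history that no member of the coalition can rule out."""
        return all(fact(h) for h in histories
                   if all(indistinguishable(a, actual, h, observe) for a in coalition))

    # Agent "a" observes only the first component of each state, "b" only the second.
    observe = lambda agent, state: state[0] if agent == "a" else state[1]
    histories = [[("x", "u"), ("y", "v")],
                 [("x", "w"), ("y", "v")],
                 [("z", "u"), ("y", "v")]]
    actual = histories[0]
    fact = lambda h: h == actual
    print(distributed_knows(["a"], fact, actual, histories, observe))       # False: "a" cannot rule out the second history
    print(distributed_knows(["a", "b"], fact, actual, histories, observe))  # True: together they pin down the actual history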