Blameworthiness in Strategic Games

Artificial Intelligence

There are multiple notions of coalitional responsibility. The focus of this paper is on the blameworthiness defined through the principle of alternative possibilities: a coalition is blamable for a statement if the statement is true, but the coalition had a strategy to prevent it. The main technical result is a sound and complete bimodal logical system that describes properties of blameworthiness in one-shot games.
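The definition above can be made concrete with a small sketch. The game encoding and all names below are illustrative assumptions, not the paper's formalism: a one-shot game is given as a map from agents to available actions, a statement is a predicate on action profiles, and a coalition is blameworthy when the statement held at the played profile and some joint action of the coalition would have falsified it against every reply of the remaining agents.

```python
from itertools import product

def blameworthy(coalition, outcome, played, actions):
    """Coalition is blameworthy for `outcome` (a predicate on action
    profiles) if (1) the outcome is true at the played profile and
    (2) some joint action of the coalition falsifies the outcome
    against every joint action of the remaining agents."""
    others = [a for a in actions if a not in coalition]
    if not outcome(played):                      # (1) the outcome occurred
        return False
    for joint in product(*(actions[a] for a in coalition)):
        fixed = dict(zip(coalition, joint))
        # (2) this joint action prevents the outcome whatever the others do
        if all(not outcome({**dict(zip(others, rest)), **fixed})
               for rest in product(*(actions[a] for a in others))):
            return True
    return False

# Toy game: two agents pick 0 or 1; the statement is "the sum is even".
actions = {"a": [0, 1], "b": [0, 1]}
even_sum = lambda prof: (prof["a"] + prof["b"]) % 2 == 0
played = {"a": 0, "b": 0}

print(blameworthy(["a"], even_sum, played, actions))       # False
print(blameworthy(["a", "b"], even_sum, played, actions))  # True
```

In this toy game agent a alone is not blamable: whichever action it picks, agent b has a reply that keeps the sum even. The full coalition {a, b}, however, had a joint action producing an odd sum, so it is blamable.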

The Limits of Morality in Strategic Games

Artificial Intelligence

A coalition is blamable for an outcome if the coalition had a strategy to prevent it. It has been previously suggested that the cost of prevention, or the cost of sacrifice, can be used to measure the degree of blameworthiness. The paper adopts this approach and proposes a modal logical system for reasoning about the degree of blameworthiness. The main technical result is a completeness theorem for the proposed system.
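A hedged sketch of the cost-of-sacrifice idea, under invented assumptions: each agent's actions carry a cost, and the degree of blameworthiness is taken as the least total cost of any joint action of the coalition that would have prevented the outcome against all counter-actions. The cost model and game encoding below are illustrative, not the paper's definitions.

```python
from itertools import product

def degree_of_blame(coalition, outcome, actions, cost):
    """Return the minimal cost over the coalition's joint actions that
    prevent `outcome` (a predicate on profiles) against every joint action
    of the remaining agents, or None if no such joint action exists."""
    others = [a for a in actions if a not in coalition]
    best = None
    for joint in product(*(actions[a] for a in coalition)):
        fixed = dict(zip(coalition, joint))
        prevents = all(
            not outcome({**dict(zip(others, rest)), **fixed})
            for rest in product(*(actions[a] for a in others))
        )
        if prevents:
            c = sum(cost[a][act] for a, act in fixed.items())
            best = c if best is None else min(best, c)
    return best

# Toy example: braking is costly but prevents the crash whatever b does.
actions = {"a": ["go", "brake"], "b": ["go", "brake"]}
crash = lambda prof: prof["a"] == "go" and prof["b"] == "go"
cost = {"a": {"go": 0, "brake": 2}, "b": {"go": 0, "brake": 1}}

print(degree_of_blame(["a"], crash, actions, cost))  # 2
```

Here agent a's cheapest preventing action is braking at cost 2, so 2 is the degree of its blameworthiness in this toy encoding.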

Strategic Coalitions in Stochastic Games

Artificial Intelligence

The article introduces a notion of a stochastic game with failure states and proposes two logical systems with the modality "coalition has a strategy to transition to a non-failure state with a given probability while achieving a given goal." The logical properties of this modality depend on whether the modal language allows the empty coalition. The main technical results are a completeness theorem for a logical system with the empty coalition, a strong completeness theorem for the logical system without the empty coalition, and an incompleteness theorem showing that there is no strongly complete logical system in the language with the empty coalition.

1. Introduction

In this article we study coalition power in stochastic games. An example of such a game is the road situation depicted in Figure 1. In this situation, self-driving car a is trying to pass self-driving car b. Unexpectedly, a truck moving in the opposite direction appears on the road. For the sake of simplicity, we assume that cars a and b have only three strategies: slow down (−), maintain the current speed (0), and accelerate (+). We also assume that the truck is too heavy to significantly change its speed before a possible collision. The diagram in Figure 2 describes the probabilities of the different outcomes of all possible combinations of actions of cars a and b. This diagram has five states; state p is the current ("passing") state of the system.
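The modality above can be sketched for one transition step. The transition table below loosely mimics the passing scenario, but all probabilities, state names, and action names are invented for illustration; they are not the paper's Figure 2.

```python
def can_guarantee(coalition_actions, opponent_actions, transition, safe_goal, p):
    """True iff some coalition action yields, against every opponent action,
    probability >= p of landing in a state that is both non-failure and
    satisfies the goal. `transition[(c, o)]` maps states to probabilities."""
    for c in coalition_actions:
        if all(
            sum(q for s, q in transition[(c, o)].items() if s in safe_goal) >= p
            for o in opponent_actions
        ):
            return True
    return False

# States: "clear" (goal achieved, non-failure), "behind" (non-failure,
# goal not achieved), "crash" (failure). Numbers are illustrative.
transition = {
    ("slow", "slow"): {"behind": 1.0},
    ("slow", "fast"): {"behind": 0.9, "crash": 0.1},
    ("fast", "slow"): {"clear": 0.8, "crash": 0.2},
    ("fast", "fast"): {"clear": 0.3, "crash": 0.7},
}

print(can_guarantee(["slow", "fast"], ["slow", "fast"],
                    transition, {"clear"}, 0.3))  # True: "fast" guarantees >= 0.3
print(can_guarantee(["slow", "fast"], ["slow", "fast"],
                    transition, {"clear"}, 0.9))  # False: no action guarantees 0.9
```

The check quantifies existentially over the coalition's actions and universally over the opponents', mirroring the "has a strategy" reading of the modality for a single step.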

Strategic Coalitions With Perfect Recall

AAAI Conferences

The paper proposes a bimodal logic that describes an interplay between distributed knowledge modality and coalition know-how modality. Unlike other similar systems, the one proposed here assumes perfect recall by all agents. Perfect recall is captured in the system by a single axiom. The main technical results are the soundness and the completeness theorems for the proposed logical system.

Blameworthiness in Games with Imperfect Information

Artificial Intelligence

Blameworthiness of an agent or a coalition of agents is often defined in terms of the principle of alternative possibilities: for the coalition to be responsible for an outcome, the outcome must take place and the coalition should have had a strategy to prevent it. In this paper we argue that in the settings with imperfect information, not only should the coalition have had a strategy, but it also should have known that it had a strategy, and it should have known what the strategy was. The main technical result of the paper is a sound and complete bimodal logic that describes the interplay between knowledge and blameworthiness in strategic games with imperfect information.
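The "known strategy" requirement can be sketched as uniformity across indistinguishable states: the coalition must have one joint action that prevents the outcome in every state it considers possible, not merely some action for each such state. The model below is an invented illustration, not the paper's semantics.

```python
from itertools import product

def knowingly_has_strategy(coalition, actual, indist, actions, prevents):
    """`indist(actual)` is the set of states the coalition cannot tell apart
    from the actual one; `prevents(joint, state)` says whether the joint
    action blocks the outcome in that state. The coalition *knowingly* had
    a strategy iff one joint action works in ALL indistinguishable states."""
    for joint in product(*(actions[a] for a in coalition)):
        fixed = dict(zip(coalition, joint))
        if all(prevents(fixed, s) for s in indist(actual)):
            return True
    return False

# Toy model: in state w1 only "left" prevents the outcome, in w2 only
# "right" does, and the agent cannot distinguish w1 from w2.
actions = {"a": ["left", "right"]}
prevents = lambda joint, s: (joint["a"] == "left") == (s == "w1")
indist = lambda s: {"w1", "w2"}

print(knowingly_has_strategy(["a"], "w1", indist, actions, prevents))  # False
```

In each state taken alone some action prevents the outcome, yet no single action works in both, so the coalition did not know what its strategy was; under the paper's reading it would therefore not be blameworthy.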