Blameworthiness in Strategic Games

arXiv.org Artificial Intelligence

There are multiple notions of coalitional responsibility. The focus of this paper is on the blameworthiness defined through the principle of alternative possibilities: a coalition is blamable for a statement if the statement is true, but the coalition had a strategy to prevent it. The main technical result is a sound and complete bimodal logical system that describes properties of blameworthiness in one-shot games.
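The alternative-possibilities definition lends itself to a small executable sketch. The following toy one-shot game, the function names, and the encoding are all hypothetical illustrations, not the paper's formal semantics:

```python
from itertools import product

def is_blamable(coalition, profile, agents, strategies, outcome, phi):
    """Principle of alternative possibilities (sketch): the coalition is
    blamable for phi at the given strategy profile if phi is true there,
    yet some joint deviation by the coalition falsifies phi against
    every behaviour of the remaining agents."""
    if not phi(outcome(profile)):
        return False                              # phi is not even true
    others = [a for a in agents if a not in coalition]
    for joint in product(*(strategies[a] for a in coalition)):
        deviation = dict(zip(coalition, joint))
        if all(not phi(outcome({**dict(zip(others, rest)), **deviation}))
               for rest in product(*(strategies[a] for a in others))):
            return True                           # a preventing strategy exists
    return False

# Toy game: the outcome is a "collision" when both agents pick the same
# side.  At profile (L, L) the grand coalition could have prevented the
# collision, but neither agent could have prevented it alone.
agents = ['a', 'b']
strategies = {'a': ['L', 'R'], 'b': ['L', 'R']}
outcome = lambda p: (p['a'], p['b'])
collision = lambda o: o[0] == o[1]
profile = {'a': 'L', 'b': 'L'}
print(is_blamable(['a', 'b'], profile, agents, strategies, outcome, collision))  # True
print(is_blamable(['a'], profile, agents, strategies, outcome, collision))       # False
```

The grand coalition is blamable because deviating to (L, R) avoids the collision regardless of anyone else; agent a alone is not, since b could still match either choice.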


The Limits of Morality in Strategic Games

arXiv.org Artificial Intelligence

A coalition is blamable for an outcome if the coalition had a strategy to prevent it. It has been previously suggested that the cost of prevention, or the cost of sacrifice, can be used to measure the degree of blameworthiness. The paper adopts this approach and proposes a modal logical system for reasoning about the degree of blameworthiness. The main technical result is a completeness theorem for the proposed system.
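The cost-of-sacrifice idea can be sketched as a minimum over preventing deviations. The toy game, the cost function, and all names below are hypothetical; the paper's formal definition may differ:

```python
from itertools import product

def blame_degree(coalition, profile, agents, strategies, outcome, phi, cost):
    """Degree of blameworthiness as the cheapest sacrifice (sketch):
    the minimum cost among the coalition's joint deviations that
    falsify phi against every behaviour of the other agents;
    None if phi is false or cannot be prevented."""
    if not phi(outcome(profile)):
        return None
    others = [a for a in agents if a not in coalition]
    costs = []
    for joint in product(*(strategies[a] for a in coalition)):
        deviation = dict(zip(coalition, joint))
        if all(not phi(outcome({**dict(zip(others, rest)), **deviation}))
               for rest in product(*(strategies[a] for a in others))):
            costs.append(cost(deviation))
    return min(costs) if costs else None

# Toy game: collision when both agents pick the same side.  The cost of
# a deviation here counts how many coalition members change strategy.
agents = ['a', 'b']
strategies = {'a': ['L', 'R'], 'b': ['L', 'R']}
outcome = lambda p: (p['a'], p['b'])
collision = lambda o: o[0] == o[1]
profile = {'a': 'L', 'b': 'L'}
cost = lambda dev: sum(1 for ag, s in dev.items() if profile[ag] != s)
print(blame_degree(['a', 'b'], profile, agents, strategies, outcome, collision, cost))  # 1
```

At (L, L) the cheapest preventing deviations, (L, R) and (R, L), each require one agent to change strategy, so the degree is 1.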


Duty to Warn in Strategic Games

arXiv.org Artificial Intelligence

The paper investigates the second-order blameworthiness, or "duty to warn", modality: one coalition knew how another coalition could have prevented an outcome. The main technical result is a sound and complete logical system that describes the interplay between the distributed knowledge and the duty to warn modalities.
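One natural reading of "knew how ... could have prevented" is de re: a single preventing strategy that works in every world the knowing coalition considers possible. The encoding of worlds and the two-world toy example below are hypothetical sketches of that reading:

```python
from itertools import product

def knew_how_to_prevent(knower, actor, world, worlds, indist, agents,
                        strategies, bad):
    """De re sketch: the knower coalition knew how the actor coalition
    could have prevented the bad statement if ONE joint strategy of the
    actor falsifies it in every world the knower cannot distinguish
    from the actual one, whatever the other agents do."""
    possible = [w for w in worlds if indist(knower, world, w)]
    others = [a for a in agents if a not in actor]
    for joint in product(*(strategies[a] for a in actor)):
        deviation = dict(zip(actor, joint))
        if all(not bad(w, {**dict(zip(others, rest)), **deviation})
               for w in possible
               for rest in product(*(strategies[a] for a in others))):
            return True
    return False

# Two worlds the knower cannot tell apart.  If the same move by agent b
# prevents the bad statement in both worlds, the knower knew how; if
# the preventing move differs between the worlds, the knower did not.
agents = ['a', 'b']
strategies = {'a': ['L', 'R'], 'b': ['L', 'R']}
worlds = ['w1', 'w2']
indist = lambda knower, u, v: True          # knower considers both possible
same = lambda w, p: p['b'] == 'L'
flips = lambda w, p: p['b'] == ('L' if w == 'w1' else 'R')
print(knew_how_to_prevent(['a'], ['b'], 'w1', worlds, indist,
                          agents, strategies, same))   # True
print(knew_how_to_prevent(['a'], ['b'], 'w1', worlds, indist,
                          agents, strategies, flips))  # False
```

The second call illustrates why the knowledge is second-order: in each world some preventing move exists for b, yet no single move works across both, so a would not know what to warn b to do.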


Comprehension and Knowledge

arXiv.org Artificial Intelligence

The ability of an agent to comprehend a sentence is tightly connected to the agent's prior experiences and background knowledge. The paper suggests interpreting comprehension as a modality and proposes a complete bimodal logical system that describes the interplay between the comprehension and knowledge modalities.


Epistemic Logic of Know-Who

arXiv.org Artificial Intelligence

The paper suggests a definition of "know who" as a modality using Grove-Halpern semantics of names. It also introduces a logical system that describes the interplay between modalities "knows who", "knows", and "for all agents". The main technical result is a completeness theorem for the proposed system.
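The Grove-Halpern treatment of names, in which a name may denote different individuals in different worlds, suggests a simple sketch of the "know who" modality. The encoding and all names below are hypothetical:

```python
def knows_who(agent, name, actual, worlds, indist, ref):
    """Sketch: the agent knows who `name` is when the name denotes the
    same individual in every world the agent cannot distinguish from
    the actual one (a Grove-Halpern style semantics of names)."""
    referents = {ref(w, name) for w in worlds if indist(agent, actual, w)}
    return len(referents) == 1

# 'the winner' denotes alice in world u but bob in world v.  An agent
# who cannot tell u from v does not know who the winner is; an agent
# who can distinguish them does.
worlds = ['u', 'v']
ref = lambda w, name: {'u': 'alice', 'v': 'bob'}[w]
blurry = lambda agent, x, y: True       # cannot distinguish any worlds
sharp = lambda agent, x, y: x == y      # distinguishes every world
print(knows_who('i', 'the winner', 'u', worlds, blurry, ref))  # False
print(knows_who('i', 'the winner', 'u', worlds, sharp, ref))   # True
```

This separates knowing that someone won (true in both cases) from knowing who won, which is exactly the distinction the "knows who" modality captures.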