Deontic logic is the logic for reasoning about ideal and actual behaviour. Besides its traditional role as an underlying logic for law and ethics (for a survey, see [MW]), in computer science deontic logic has been proposed as a logic for the specification of legal expert systems [BMT87], [Sta80], authorization mechanisms [ML85], decision support systems [Kd.,88], [Lee88b], [Lee88a], database security rules [GMP89], fault-tolerant software [KM87], [Coe], and database integrity constraints [WMW89], [WWMD91]. A survey of applications can be found in [WM]. In all these areas, we must be able to reason about the difference between ideal and actual behaviour. In many cases, it is important to distinguish ought-to-do statements (which express imperatives of the form "an actor ought to perform an action") from ought-to-be statements (which express a desired state of affairs without necessarily mentioning the actors and actions related to that state of affairs). There are situations where we would like to relate the two oughts to each other. For example, suppose we want to specify deontic integrity constraints for a bank database.
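To make the bank-database example concrete, the following is a minimal sketch (our own illustration; the data layout, the constraint names, and the function `check_deontic_constraints` are all assumptions, not part of any cited system) of a deontic integrity constraint: a violated norm is recorded rather than rejected, so the database can represent actual, non-ideal states.

```python
# Hypothetical sketch: deontic integrity constraints on a bank database.
# A constraint is a (name, predicate) pair; accounts violating the
# predicate are flagged, not rejected, unlike a hard constraint.

def check_deontic_constraints(accounts, constraints):
    """Return the list of norm violations instead of raising an error."""
    violations = []
    for name, ok in constraints:
        for acct in accounts:
            if not ok(acct):
                violations.append((name, acct["id"]))
    return violations

# Ought-to-be statement: "no account balance ought to be negative".
ought_to_be = ("no-overdraft", lambda a: a["balance"] >= 0)

accounts = [{"id": 1, "balance": 100}, {"id": 2, "balance": -50}]
print(check_deontic_constraints(accounts, [ought_to_be]))
# The overdrawn account is flagged as a violation, yet the database
# state itself remains admissible: a non-ideal but actual state.
```

The point of the sketch is the contrast with a hard constraint, which would make the second account an inconsistency rather than a recorded violation.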
We explore the notions of permission and obligation and their role in knowledge representation, especially as guides to action for planning systems. We first present a simple conditional deontic logic (or, more accurately, a preference logic) of the type common in the literature and demonstrate its equivalence to a number of modal and conditional systems for default reasoning. We show how the techniques of conditional default reasoning can be used to derive factual preferences from conditional preferences. We extend the system to account for the effect of beliefs on an agent's obligations, including beliefs held by default. This leads us to the notion of a conditional goal: a goal toward which an agent should strive according to its belief state. We then extend the system (somewhat naively) to model the ability of an agent to perform actions. Even with this simple account, we are able to show that the deontic slogan "make the best of a bad situation" gives rise to several interpretations or strategies for determining goals (and actions). We show that an agent can improve its decisions and focus its goals by making observations, thereby increasing its knowledge of the world. Finally, we discuss how this model might be extended and used in the planning process, especially to represent planning under uncertainty in a qualitative manner.
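The preference-logic reading of conditional obligation can be sketched in a few lines. In the illustration below (our own encoding, not the paper's; the world representation, ranking, and the classic fence example are assumptions), `O(p | q)` holds when `p` is true in all most-preferred worlds satisfying `q`:

```python
# Sketch of conditional obligation over a preference ordering on worlds:
# O(p | q) holds iff p is true at every minimal-rank world satisfying q.

def obligatory(p, q, worlds, rank):
    """O(p|q): p holds at all most-preferred q-worlds (lower rank = better)."""
    q_worlds = [w for w in worlds if q(w)]
    if not q_worlds:
        return True  # vacuously obligatory when q is unsatisfiable
    best = min(rank(w) for w in q_worlds)
    return all(p(w) for w in q_worlds if rank(w) == best)

# Worlds as (fence, white) truth pairs; the ranking is an assumption.
worlds = [(False, False), (False, True), (True, False), (True, True)]
rank = lambda w: {(False, False): 0, (False, True): 1,
                  (True, True): 2, (True, False): 3}[w]
fence = lambda w: w[0]
white = lambda w: w[1]

# Unconditionally, there ought to be no fence...
print(obligatory(lambda w: not fence(w), lambda w: True, worlds, rank))
# ...but given that there is a fence, it ought to be white.
print(obligatory(white, fence, worlds, rank))
```

Deriving a factual goal from the conditional preference then amounts to restricting the condition `q` to what the agent currently believes, which is the mechanism behind the conditional goals described above.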
Box 1738, 3000 DR Rotterdam, the Netherlands
YTAN@FAC.FBK.EUR.NL

Abstract

Deontic logic, the logic of obligations and permissions, is plagued by several paradoxes that have to be understood before deontic logic can be used as a knowledge representation language. In this paper we extend the temporal analysis of Chisholm's paradox using a deontic logic that combines temporal and preferential notions.

Introduction

Deontic logic is a modal logic in which Op is read as 'p ought to be (done).' Deontic logic has traditionally been used by philosophers to analyze the structure of the normative use of language. In the eighties deontic logic had a revival, when computer scientists discovered that this logic can be used for the formal specification and validation of a wide variety of topics in computer science (for an overview and further references see (Wieringa & Meyer 1993)). The advantage is that norms can be violated without creating an inconsistency in the formal specification, in contrast to violations of hard constraints. Another application is the use of deontic logic to represent legal reasoning in legal expert systems in artificial intelligence. Legal expert systems have to be able to reason about legal rules and documents, such as a trade contract.
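The standard possible-worlds reading of Op, and the sense in which a violated norm creates no inconsistency, can be illustrated with a tiny sketch (our own; the world names and the two-world model are assumptions, and this shows plain SDL, not the temporal-preferential logic of the paper):

```python
# Sketch of standard deontic logic (KD) semantics: Op holds at a world
# iff p holds at every deontically ideal alternative of that world.

def O(p, world, ideal):
    """Op at `world`: p is true in all ideal alternatives of `world`."""
    return all(p(v) for v in ideal[world])

# Two worlds: w0 (the actual world, where the norm is violated) and
# w1 (an ideal world). Every world has an ideal alternative, which
# validates the D axiom (Op -> not O(not p)).
facts = {"w0": {"p": False}, "w1": {"p": True}}
ideal = {"w0": ["w1"], "w1": ["w1"]}

p = lambda w: facts[w]["p"]
print(O(p, "w0", ideal))  # True: p ought to be the case, although...
print(p("w0"))            # False: ...it actually is not. Op and not-p
                          # hold together without contradiction.
```

This is exactly the contrast with hard constraints mentioned above: `Op` and `not p` are jointly satisfiable, so recording a violation does not make the specification inconsistent.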
We suggest that mechanized multi-agent deontic logics might be appropriate vehicles for engineering trustworthy robots. Mechanically checked proofs in such logics can serve to establish the permissibility (or obligatoriness) of agent actions, and such proofs, when translated into English, can also explain the rationale behind those actions. We use the logical framework Athena to encode a natural deduction system for a deontic logic recently proposed by Horty for reasoning about what agents ought to do. We present the syntax and semantics of the logic, discuss its encoding in Athena, and illustrate with an example of a mechanized proof.
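As a rough illustration of the kind of agentive ought involved (a deliberate simplification of our own: statewise dominance over a utility table, with action names and utilities invented for the example, not Horty's actual definitions or the Athena encoding), an action can be called obligatory when it weakly dominates every alternative:

```python
# Simplified sketch of a dominance-style "ought to do": an action is
# obligatory when, in every possible state, it is at least as good as
# every alternative action. Utilities and names are illustrative only.

def dominates(a, b, states, utility):
    """Action a weakly dominates b: at least as good in every state."""
    return all(utility[(a, s)] >= utility[(b, s)] for s in states)

def obligatory(a, actions, states, utility):
    """a is obligatory iff it weakly dominates every alternative."""
    return all(dominates(a, b, states, utility) for b in actions)

states = ["s1", "s2"]
actions = ["keep_promise", "break_promise"]
utility = {("keep_promise", "s1"): 2, ("keep_promise", "s2"): 1,
           ("break_promise", "s1"): 1, ("break_promise", "s2"): 0}

print(obligatory("keep_promise", actions, states, utility))   # True
print(obligatory("break_promise", actions, states, utility))  # False
```

A mechanized proof, as described above, would establish such obligatoriness deductively within the encoded natural deduction system rather than by enumerating a finite utility table as this sketch does.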