In this paper we introduce an abstract theory of normative reasoning, whose central notion is the generation of obligations, permissions and institutional facts from conditional norms. We present various semantics and their proof systems. The theory can be used to classify and compare new candidates for standards of normative reasoning, and to explore more elaborate forms of normative reasoning than those studied thus far.
Aggregative deontic detachment is a new form of deontic detachment that keeps track of previously detached obligations. We argue that it handles the iteration of successive detachments in a more principled manner than the traditional systems do. To study this new form of deontic detachment, we introduce a 'minimal' logic for aggregative deontic detachment, and we discuss various properties of the logic.
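The contrast between standard and aggregative deontic detachment can be sketched in a few lines of code. In the sketch below (our own illustration, not the paper's formalism), norms are pairs (antecedent, consequent), obligations are frozensets of atoms read conjunctively, and the key difference is that the aggregative variant detaches O(A and B) from O(A) and the norm "if A then B", rather than O(B) alone:

```python
# Hypothetical sketch contrasting standard and aggregative deontic detachment.
# Norms are (antecedent, consequent) pairs over atoms; an obligation is a
# frozenset of atoms read as a conjunction. All names are illustrative.

def standard_detach(norms, facts):
    """Factual + deontic detachment: always detach the consequent alone."""
    obligations = set()
    changed = True
    while changed:
        changed = False
        for ant, cons in norms:
            triggered = ant in facts or any(ant in o for o in obligations)
            if triggered and frozenset({cons}) not in obligations:
                obligations.add(frozenset({cons}))
                changed = True
    return obligations

def aggregative_detach(norms, facts):
    """Deontic detachment that carries previously detached content along:
    from O(A) and the norm (A, B), detach O(A and B)."""
    obligations = set()
    changed = True
    while changed:
        changed = False
        new = set()
        for ant, cons in norms:
            if ant in facts:                 # factual detachment
                new.add(frozenset({cons}))
            for o in obligations:            # aggregative deontic detachment
                if ant in o:
                    new.add(o | {cons})
        if not new <= obligations:
            obligations |= new
            changed = True
    return obligations

# "If you helped, you ought to tell; if you told, you ought to warn."
norms = [("helped", "told"), ("told", "warned")]
facts = {"helped"}
# Standard detachment yields the separate obligations O(told) and O(warned);
# the aggregative variant instead yields O(told and warned).
```

The aggregated obligation records the chain of detachments that produced it, which is what lets the logic keep track of where iterated obligations came from.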
In this paper we introduce a formal framework for the construction of normative multiagent systems, based on Searle's notion of the construction of social reality. Within the structure of normative multiagent systems we distinguish between regulative norms that describe obligations, prohibitions and permissions, and constitutive norms that regulate the creation of institutional facts as well as the modification of the normative system itself. Using the metaphor of normative systems as agents, we attribute mental attitudes to the normative system.
Deontic logic is shown to be applicable to modelling human reasoning. To this end, the Wason selection task and the suppression task are discussed in detail. Different approaches to modelling norms with deontic logic are introduced, and in the case of the Wason selection task it is demonstrated how the differences in human performance between the abstract case and the social-contract case can be explained. Furthermore, it is shown that an automated theorem prover can be used as a reasoning tool for deontic logic.
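The performance gap the abstract refers to can be illustrated with a small sketch (our own, not taken from the paper). A conditional "if P then Q" is falsified only by a card showing P with not-Q hidden, or showing not-Q with P hidden, so the logically correct choice is the P card and the not-Q card in both task variants:

```python
# Illustrative sketch of the Wason selection task; card labels and the
# category tags are our own. Each visible face is tagged as P, not-P, Q,
# or not-Q with respect to the rule "if P then Q".

def must_turn(category):
    # A card can expose a violation only if it shows P (the hidden side may
    # be not-Q) or shows not-Q (the hidden side may be P).
    return category in ("P", "not-Q")

# Abstract version: rule "if vowel then even number"; cards E, K, 4, 7.
abstract = {"E": "P", "K": "not-P", "4": "Q", "7": "not-Q"}
correct_abstract = [c for c, cat in abstract.items() if must_turn(cat)]
# Correct choice: E and 7 -- yet most subjects pick E and 4.

# Social-contract version: rule "if drinking beer, one must be over 18".
deontic = {"beer": "P", "coke": "not-P", "25 yrs": "Q", "16 yrs": "not-Q"}
correct_deontic = [c for c, cat in deontic.items() if must_turn(cat)]
# Correct choice: beer and 16 yrs -- here most subjects answer correctly.
```

The selection logic is identical in both variants; only the framing changes, which is why the contrast is taken as evidence that humans treat deontic (violation-checking) conditionals differently from abstract ones.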
In this paper we describe a software assistant agent that can proactively assist human users situated in a time-constrained environment to perform normative reasoning (reasoning about prohibitions and obligations) so that the user can focus on her planning objectives. In order to provide proactive assistance, the agent must be able to 1) recognize the user's planned activities, 2) reason about potential needs of assistance associated with those predicted activities, and 3) plan to provide appropriate assistance suitable for newly identified user needs. To address these specific requirements, we develop an agent architecture that integrates user intention recognition, normative reasoning over a user's intention, and planning, execution and replanning for assistive actions. This paper presents the agent architecture and discusses practical applications of this approach.
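The three requirements above form a pipeline that can be sketched minimally as follows. This is our own hypothetical illustration of the loop, with stub rules and names; the actual architecture integrates full intention recognition and planning components:

```python
# Hypothetical sketch of the three-step assistance loop: recognize the
# user's intention, check it against norms, and plan assistive actions.
# All rule tables, names, and stubs below are our own illustrations.

PROHIBITIONS = {"enter_zone_red"}                         # forbidden activities
OBLIGATIONS = {"report_position": "send_status_update"}   # activity -> required action

def recognize_intention(observed_actions):
    """Step 1 (stub): predict the user's planned activity from observations."""
    return observed_actions[-1] if observed_actions else None

def normative_check(activity):
    """Step 2: identify needs of assistance triggered by the predicted activity."""
    needs = []
    if activity in PROHIBITIONS:
        needs.append(("warn", activity))
    if activity in OBLIGATIONS:
        needs.append(("remind", OBLIGATIONS[activity]))
    return needs

def plan_assistance(needs):
    """Step 3: turn identified needs into assistive actions."""
    return [f"{kind}:{target}" for kind, target in needs]

observed = ["move_north", "report_position"]
activity = recognize_intention(observed)
print(plan_assistance(normative_check(activity)))  # ['remind:send_status_update']
```

Keeping the normative check separate from intention recognition and planning mirrors the modular integration the abstract describes: each component can be replaced or replanned independently as new user needs are identified.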