To make ethics computable, we have adopted an approach that weighs multiple prima facie duties in deciding how one should act in an ethical dilemma. We believe this approach is more likely to capture the complexities of ethical decision making than a single, absolute-duty ethical theory. However, it requires a decision procedure for determining the ethically correct action when the duties give conflicting advice. To solve this problem, we employ inductive logic programming to enable a machine to abstract a decision principle from ethical experts' intuitions about particular ethical dilemmas. We have tested our method in the MedEthEx proof-of-concept system, using a type of ethical dilemma that involves 18 possible combinations of three prima facie duties.
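The idea of a decision principle over prima facie duties can be illustrated with a toy sketch. This is not MedEthEx's actual learned principle: the duty names, the numeric duty scores, and the weighted comparison are all assumptions made for illustration; the paper abstracts its principle from expert-labeled cases via inductive logic programming rather than fixing weights by hand.

```python
# Illustrative sketch only: each candidate action is scored on three prima
# facie duties, where negative values mean the duty is violated and positive
# values mean it is satisfied. A "decision principle" is stood in for by a
# hypothetical weighted comparison (the real principle is learned from
# expert intuitions about training dilemmas, not hand-coded).

def prefer(action_a, action_b, weights):
    """Return True if action_a is ethically preferable to action_b
    under the given duty weights (a stand-in for a learned principle)."""
    score_a = sum(w * d for w, d in zip(weights, action_a))
    score_b = sum(w * d for w, d in zip(weights, action_b))
    return score_a > score_b

# Assumed duty labels: (nonmaleficence, beneficence, respect_for_autonomy)
accept_refusal = (0, -1, 2)   # honor a patient's refusal of treatment
try_again = (0, 1, -1)        # question the refusal once more

# Hypothetical weights, as if abstracted from expert-labeled dilemmas
weights = (3, 1, 2)

print(prefer(accept_refusal, try_again, weights))  # True: refusal scores higher here
```

The point of the sketch is only the shape of the problem: actions are compared through their effects on several duties at once, and the conflict-resolving principle is the learned component.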
The deontic logic DUS is a Deontic Update Semantics for prescriptive obligations based on Veltman's update semantics. In DUS, the logical validity of obligations is defined not by static truth values but by dynamic action transitions. In this paper, prescriptive defeasible obligations are formalized in update semantics, and the diagnostic problem of defeasible deontic logic is discussed. Assume a defeasible obligation 'normally A ought to be (done)' together with the fact 'A is not (done).' Is this an exception to the normality claim, or is it a violation of the obligation? In this paper we formalize the heuristic principle that it is a violation, unless there is a more specific overriding obligation. The underlying motivation from legal reasoning is that criminals should have as few opportunities as possible to excuse themselves by claiming that their behavior was exceptional rather than criminal.
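The violation-versus-exception heuristic can be sketched in a few lines. This toy representation is an assumption, not the DUS update semantics: obligations are modeled as propositions paired with a condition set, and "more specific" is read as a strictly larger condition set.

```python
# Sketch of the heuristic under an assumed toy representation (not the DUS
# formalism): an obligation pairs an obliged proposition with a condition,
# i.e. the set of circumstances under which it applies. 'not A' counts as a
# violation of 'normally A' unless a strictly more specific obligation to
# the contrary applies in the current context.

def classify(context, obligation, all_obligations):
    """Classify the fact 'not A' against obligation(A): 'exception' if a
    strictly more specific, applicable obligation prescribes the opposite,
    otherwise 'violation' (the heuristic principle)."""
    for other in all_obligations:
        applies = other["condition"] <= context                   # condition holds now
        more_specific = other["condition"] > obligation["condition"]  # proper superset
        contrary = other["content"] == "not " + obligation["content"]
        if other is not obligation and applies and more_specific and contrary:
            return "exception"
    return "violation"

# 'Normally A ought to be done', plus an override 'if B, A ought not be done'
normally_a = {"content": "A", "condition": frozenset()}
override = {"content": "not A", "condition": frozenset({"B"})}
rules = [normally_a, override]

print(classify(frozenset(), normally_a, rules))       # violation: no override applies
print(classify(frozenset({"B"}), normally_a, rules))  # exception: override applies
```

The default answer is "violation", matching the legal-reasoning motivation: the burden falls on finding a genuinely more specific overriding norm, not on the normality claim.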
While type causality helps us to understand general relationships such as the etiology of a disease (smoking causing lung cancer), token causality aims to explain causal connections in specific instantiated events, such as the diagnosis of a patient (Ravi's developing lung cancer after a 20-year smoking habit). Understanding why something happened, as in these examples, is central to reasoning in cases as diverse as diagnosing patients, understanding why the US financial market collapsed in 2007, and finding a causal explanation for Obama's victory over Clinton in the US primary. However, despite centuries of work in philosophy and decades of research in computer science, the problem of how to rigorously formalize token causality and how to automate such reasoning has remained unsolved. In this paper, we show how to use type-level causal relationships, represented as temporal logic formulas, together with philosophical principles, to reason about these token-level cases.
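The relationship between type-level rules and token-level cases can be sketched with a minimal timing check. This is an assumption-laden illustration, not the paper's temporal-logic formalism: the rule representation (cause, effect, time window) and the numbers are hypothetical.

```python
# Minimal sketch (assumed representation, not the paper's formalism): a
# type-level relationship "c leads to e within [t_min, t_max] time units"
# is checked against a token timeline of (event, time) observations, as a
# first, purely temporal filter on whether the type-level rule can even
# apply to the token case.

def supports_token_claim(cause, effect, window, timeline):
    """Return True if some occurrence of `cause` is followed by `effect`
    within the type-level time window -- necessary (not sufficient)
    evidence for the token-level causal claim."""
    t_min, t_max = window
    occurrences = {}
    for event, t in timeline:
        occurrences.setdefault(event, []).append(t)
    for t_cause in occurrences.get(cause, []):
        for t_effect in occurrences.get(effect, []):
            if t_min <= t_effect - t_cause <= t_max:
                return True
    return False

# Token case: smoking begins at year 0, lung cancer diagnosed at year 20;
# hypothetical type-level rule: smoking leads to cancer within 10-40 years.
timeline = [("smoking", 0), ("lung_cancer", 20)]
print(supports_token_claim("smoking", "lung_cancer", (10, 40), timeline))  # True
```

In the paper's setting, such temporal constraints are only one ingredient: the type-level relationships carry significance measures, and philosophical principles govern how they transfer to the specific instantiated event.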
The OSCAR project has two parallel goals: the formulation of a general theory of rationality, and the creation of a general-purpose automated reasoner implementing that theory. The theory of rationality takes as its starting point my own philosophical work in epistemology, probability, and philosophical logic. The current status of the OSCAR project is described in detail in my just-completed book Cognitive Carpentry (probably to be published by Bradford/MIT Press). At this point I have produced a general architecture for rational cognition, and an initial implementation of that architecture. The architecture constitutes an interest-driven defeasible reasoner that takes its data from various kinds of perceptual inputs, tries to answer questions posed by practical cognition, and uses those answers to direct its actions.