
Collaborating Authors

Modgil, Sanjay


Towards Dialogues for Joint Human-AI Reasoning and Value Alignment

arXiv.org Artificial Intelligence

We argue that enabling human-AI dialogue, purposed to support joint reasoning (i.e., 'inquiry'), is important for ensuring that AI decision making is aligned with human values and preferences. In particular, we point to logic-based models of argumentation and dialogue, and suggest that the traditional focus on persuasion dialogues be replaced by a focus on inquiry dialogues, and the distinct challenges that joint inquiry raises. Given recent dramatic advances in the performance of large language models (LLMs), and the anticipated increase in their use for decision making, we provide a roadmap for research into inquiry dialogues for supporting joint human-LLM reasoning tasks that are ethically salient, and that thereby require that decisions are value aligned.


Moral Uncertainty and the Problem of Fanaticism

arXiv.org Artificial Intelligence

While there is universal agreement that agents ought to act ethically, there is no agreement as to what constitutes ethical behaviour. To address this problem, recent philosophical approaches to 'moral uncertainty' propose aggregation of multiple ethical theories to guide agent behaviour. However, one of the foundational proposals for aggregation, Maximising Expected Choiceworthiness (MEC), has been criticised as being vulnerable to fanaticism: the problem of an ethical theory dominating agent behaviour despite low credence (confidence) in said theory. Fanaticism thus undermines the 'democratic' motivation for accommodating multiple ethical perspectives. The problem of fanaticism has not yet been mathematically defined. Representing moral uncertainty as an instance of social welfare aggregation, this paper contributes to the field of moral uncertainty by 1) formalising the problem of fanaticism as a property of social welfare functionals and 2) providing non-fanatical alternatives to MEC, i.e. Highest k-trimmed Mean and Highest Median.
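The contrast between MEC and a trimmed-mean aggregator can be sketched with a toy example. All option names, credences, and choiceworthiness values below are invented for illustration, and this simplified code does not reproduce the paper's formalisation in terms of social welfare functionals:

```python
# Toy illustration (not the paper's formalisation): a low-credence but
# extreme ethical theory dominates MEC, while a k-trimmed mean discards
# the extreme score before averaging.

def mec(credences, choiceworthiness):
    """Maximising Expected Choiceworthiness: credence-weighted sum per option."""
    return {
        option: sum(c * cw[option] for c, cw in zip(credences, choiceworthiness))
        for option in choiceworthiness[0]
    }

def highest_k_trimmed_mean(choiceworthiness, k):
    """Score each option by the mean of its per-theory scores after
    discarding the k lowest and k highest values."""
    def tmean(values):
        kept = sorted(values)[k:len(values) - k]
        return sum(kept) / len(kept)
    return {
        option: tmean([cw[option] for cw in choiceworthiness])
        for option in choiceworthiness[0]
    }

# Three ethical theories; the third has credence only 0.05 but assigns an
# extreme choiceworthiness to option 'b' -- a 'fanatical' theory.
credences = [0.50, 0.45, 0.05]
choiceworthiness = [
    {'a': 10, 'b': 0},
    {'a': 8,  'b': 1},
    {'a': 0,  'b': 1000},
]

scores = mec(credences, choiceworthiness)
print(max(scores, key=scores.get))  # b -- the fanatical theory dominates

scores = highest_k_trimmed_mean(choiceworthiness, 1)
print(max(scores, key=scores.get))  # a -- the extreme score is trimmed away
```

With k = 1, the single extreme score contributed by the low-credence theory is discarded before averaging, so the aggregate choice no longer tracks the fanatical theory.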


On the Graded Acceptability of Arguments in Abstract and Instantiated Argumentation

arXiv.org Artificial Intelligence

The paper develops a formal theory of the degree of justification of arguments, which relies solely on the structure of an argumentation framework, and which can be successfully interfaced with approaches to instantiated argumentation. The theory is developed in three steps. First, the paper introduces a graded generalization of the two key notions underpinning Dung's semantics: self-defense and conflict-freeness. This leads to a natural generalization of Dung's semantics, whereby standard extensions are weakened or strengthened depending on the level of self-defense and conflict-freeness they meet. The paper investigates the fixpoint theory of these semantics, establishing existence results for them. Second, the paper shows how graded semantics readily provide an approach to argument rankings, offering a novel contribution to the recently growing research programme on ranking-based semantics. Third, this novel approach to argument ranking is applied and studied in the context of instantiated argumentation frameworks, and in so doing is shown to account for a simple form of accrual of arguments within the Dung paradigm. Finally, the theory is compared in detail with existing approaches.
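As background, the standard (ungraded) notion of acceptability that the paper generalises can be sketched in a few lines. The code below computes Dung's grounded extension as the least fixpoint of the characteristic function; it is only a baseline illustration, not the graded semantics themselves:

```python
# Baseline sketch of standard Dung semantics (the notions the graded
# theory generalises). An argumentation framework is a set of arguments
# plus an attack relation; the grounded extension is the least fixpoint
# of the characteristic function F(S) = {a : S defends a}.

def defends(extension, arg, attacks):
    """Every attacker of `arg` is counter-attacked by some member of `extension`."""
    attackers = {x for (x, y) in attacks if y == arg}
    return all(any((d, att) in attacks for d in extension) for att in attackers)

def grounded_extension(args, attacks):
    """Iterate F from the empty set until a fixpoint is reached."""
    extension = set()
    while True:
        nxt = {a for a in args if defends(extension, a, attacks)}
        if nxt == extension:
            return extension
        extension = nxt

# a attacks b, b attacks c: a is unattacked, and a defends c against b.
print(sorted(grounded_extension({'a', 'b', 'c'}, {('a', 'b'), ('b', 'c')})))  # ['a', 'c']
```

A graded variant would relax `defends` (and the conflict-freeness check) to count numbers of attackers and counter-attackers rather than quantifying over all of them, which is the kind of generalisation the abstract describes.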


A General Account of Argumentation with Preferences

arXiv.org Artificial Intelligence

This paper builds on the recent ASPIC+ formalism, to develop a general framework for argumentation with preferences. We motivate a revised definition of conflict free sets of arguments, adapt ASPIC+ to accommodate a broader range of instantiating logics, and show that under some assumptions, the resulting framework satisfies key properties and rationality postulates. We then show that the generalised framework accommodates Tarskian logic instantiations extended with preferences, and then study instantiations of the framework by classical logic approaches to argumentation. We conclude by arguing that ASPIC+'s modelling of defeasible inference rules further testifies to the generality of the framework, and then examine and counter recent critiques of Dung's framework and its extensions to accommodate preferences.
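The basic interplay of attack and preference can be sketched in a deliberately simplified form. This omits ASPIC+'s argument structure, strict/defeasible rules, and the revised conflict-freeness the paper develops; the `order` ranking and function names are invented for illustration:

```python
# Simplified sketch: an attack from x on y succeeds as a *defeat* unless
# y is strictly preferred to x. This is the core preference mechanism
# only, not ASPIC+'s full definition.

def defeats(attacks, strictly_preferred):
    """Keep only the attacks not blocked by a strict preference."""
    return {(x, y) for (x, y) in attacks if not strictly_preferred(y, x)}

# Two arguments attacking each other; b is strictly preferred to a,
# so only b's attack survives as a defeat.
order = {'a': 1, 'b': 2}  # invented ranking
preferred = lambda y, x: order[y] > order[x]
print(defeats({('a', 'b'), ('b', 'a')}, preferred))  # {('b', 'a')}
```

Dung-style semantics are then evaluated over the defeat relation rather than the raw attack relation, which is why the definition of conflict-free sets needs revisiting once preferences can invalidate attacks.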


Towards an Argumentation System for Supporting Patients in Self-Managing Their Chronic Conditions

AAAI Conferences

CONSULT is a decision-support framework designed to help patients self-manage chronic conditions and adhere to agreed-upon treatment plans, in collaboration with healthcare professionals. The approach taken employs computational argumentation, a logic-based methodology that provides a formal means for reasoning with evidence by substantiating claims for and against particular conclusions. This paper outlines the architecture of CONSULT, illustrating how facts are gathered about the patient and various preferences of the patient and the clinician(s) involved. A logic-based representation of official treatment guidelines by various public health agencies is presented. Logical arguments are constructed from these facts and guidelines; these arguments are analysed to resolve inconsistencies concerning various treatment options and patient/clinician preferences. The claims of the justified arguments are the decisions recommended by CONSULT. A clinical example is presented which illustrates the use of CONSULT within the context of blood pressure management for secondary stroke prevention.


On the Graded Acceptability of Arguments

AAAI Conferences

The paper develops a formal theory of the degree of justification of arguments, which relies solely on the structure of an argumentation framework. The theory is based on a generalisation of Dung’s notion of acceptability, making it sensitive to the numbers of attacks and counter-attacks on arguments. Graded generalisations of argumentation semantics are then obtained and studied. The theory is applied by showing how it can arbitrate between competing preferred extensions and how it captures a specific form of accrual in instantiated argumentation.