
Collaborating Authors

 Oren, Nir


Eliciting Rational Initial Weights in Gradual Argumentation

arXiv.org Artificial Intelligence

After the seminal work of [1], the argumentation community's focus shifted to the relations between arguments, abstracting away from their content. Debates or discussions are thus represented as directed graphs, with arguments as nodes and attacks between arguments as directed edges. Several extension-based semantics have been defined to draw conclusions from argumentation graphs; such semantics identify subsets of arguments (called extensions) representing consistent conclusions [2, 3, 4]. Motivated by the work of [5], researchers then turned to semantics that give a more gradual view of arguments' acceptability by ranking them from the least attacked to the most attacked (not necessarily based on the cardinality or quality of the attackers) [6].
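As a minimal, hedged illustration of such a ranking-based view, the sketch below uses the well-known h-categoriser recursion (one particular gradual semantics, chosen here purely for illustration), in which an argument's score is 1 divided by 1 plus the sum of its attackers' scores, iterated to a fixed point; the graph is a toy example.

```python
# Minimal sketch (see lead-in): an argumentation graph as a dict mapping each
# argument to the set of its attackers, ranked with the h-categoriser recursion
# deg(a) = 1 / (1 + sum of attackers' degrees), iterated to a fixed point.
def h_categoriser(attackers, iterations=100):
    degrees = {a: 1.0 for a in attackers}
    for _ in range(iterations):
        degrees = {a: 1.0 / (1.0 + sum(degrees[b] for b in attackers[a]))
                   for a in attackers}
    return degrees

# Toy graph: b and c attack a, c attacks b, c is unattacked.
graph = {"a": {"b", "c"}, "b": {"c"}, "c": set()}
print(sorted(h_categoriser(graph).items(), key=lambda kv: -kv[1]))
# c is ranked highest (unattacked), a lowest (most attacked).
```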


Responsibility-aware Strategic Reasoning in Probabilistic Multi-Agent Systems

arXiv.org Artificial Intelligence

Responsibility plays a key role in the development and deployment of trustworthy autonomous systems. In this paper, we focus on the problem of strategic reasoning in probabilistic multi-agent systems with responsibility-aware agents. We introduce the logic PATL+R, a variant of Probabilistic Alternating-time Temporal Logic. The novelty of PATL+R lies in its incorporation of modalities for causal responsibility, providing a framework for responsibility-aware multi-agent strategic reasoning. We present an approach to synthesise joint strategies that satisfy an outcome specified in PATL+R, while optimising the share of expected causal responsibility and reward. This provides a notion of balanced distribution of responsibility and reward gain among agents. To this end, we utilise the Nash equilibrium as the solution concept for our strategic reasoning problem and demonstrate how to compute responsibility-aware Nash equilibrium strategies via a reduction to parametric model checking of concurrent stochastic multi-player games.
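A rough, hedged illustration of the quantities being balanced, not of the paper's construction: in a one-shot two-agent probabilistic game with made-up numbers, each agent has a reward under a fixed joint strategy and a simple counterfactual proxy for causal responsibility (how much its chosen action raises the outcome probability relative to its alternative). The proxy is an assumption for illustration, not the PATL+R notion, and no equilibrium computation is shown.

```python
# Illustrative only (see lead-in): a one-shot, two-agent probabilistic game.
# P[outcome | joint action] and rewards are made-up numbers, and the
# "responsibility" proxy is a simple counterfactual difference, not PATL+R.
p_outcome = {("a1", "b1"): 0.9, ("a1", "b2"): 0.4,
             ("a2", "b1"): 0.5, ("a2", "b2"): 0.1}
reward = {("a1", "b1"): (3, 1), ("a1", "b2"): (1, 2),
          ("a2", "b1"): (2, 2), ("a2", "b2"): (0, 0)}
joint = ("a1", "b1")  # a fixed pure joint strategy

def responsibility(agent, joint):
    """How much the agent's chosen action raises P[outcome] over its alternative."""
    alternatives = {0: ["a1", "a2"], 1: ["b1", "b2"]}[agent]
    others = [x for x in alternatives if x != joint[agent]]
    def swap(x):
        j = list(joint)
        j[agent] = x
        return tuple(j)
    return p_outcome[joint] - min(p_outcome[swap(x)] for x in others)

for agent in (0, 1):
    print(f"agent {agent}: reward {reward[joint][agent]}, "
          f"responsibility proxy {responsibility(agent, joint):.2f}")
```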


Measuring Responsibility in Multi-Agent Systems

arXiv.org Artificial Intelligence

We introduce a family of quantitative measures of responsibility in multi-agent planning, building upon the concepts of causal responsibility proposed by Parker et al. [ParkerGL23]. These concepts are formalised within a variant of probabilistic alternating-time temporal logic. Unlike existing approaches, our framework ascribes responsibility to agents for a given outcome by linking probabilities between behaviours and responsibility through three metrics, including an entropy-based measurement of responsibility. This latter measure is the first to capture the causal responsibility properties of outcomes over time, offering an asymptotic measurement that reflects the difficulty of achieving these outcomes. Our approach provides a fresh understanding of responsibility in multi-agent systems, illuminating both the qualitative and quantitative aspects of agents' roles in achieving or preventing outcomes.
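A hedged sketch of the flavour of an entropy-based measure; the distributions and the specific definition below are illustrative assumptions, not the paper's metrics. Given a distribution describing how likely each agent is to be the one responsible for an outcome, its Shannon entropy indicates how diffusely responsibility is spread.

```python
import math

# Hedged illustration (see lead-in): a distribution over agents describing how
# likely each is to be the one responsible for an outcome; its Shannon entropy
# measures how diffusely responsibility is spread. Numbers are made up.
def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

concentrated = {"agent1": 0.9, "agent2": 0.05, "agent3": 0.05}
diffuse = {"agent1": 1 / 3, "agent2": 1 / 3, "agent3": 1 / 3}
print(f"concentrated responsibility: {entropy(concentrated):.2f} bits")
print(f"diffuse responsibility:      {entropy(diffuse):.2f} bits")  # log2(3) ~ 1.58
```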


An Extension-based Approach for Computing and Verifying Preferences in Abstract Argumentation

arXiv.org Artificial Intelligence

We present an extension-based approach for computing and verifying preferences in an abstract argumentation system. Although numerous argumentation semantics have been developed for identifying acceptable sets of arguments from an argumentation framework, there is a lack of justification for their acceptability in terms of implicit argument preferences. Preference-based argumentation frameworks allow one to determine which arguments are justified given a set of preferences. Our research considers the inverse of the standard reasoning problem: given an abstract argumentation framework and a set of justified arguments, we compute the possible preferences over arguments. Furthermore, there is a need to verify (i.e., assess) that the computed preferences would lead to the acceptable sets of arguments. This paper presents a novel approach and algorithm for exhaustively computing and enumerating all possible sets of preferences (restricted to three identified cases) for a conflict-free set of arguments in an abstract argumentation framework. We prove the soundness, completeness and termination of the algorithm. The research establishes that preferences are determined using an extension-based approach after the evaluation phase (acceptability of arguments) rather than stated beforehand. In this work, we focus on grounded, preferred and stable semantics. We show that the complexity of computing sets of preferences is exponential in the number of arguments, and thus describe an approximate approach and algorithm to compute the preferences. Furthermore, we present novel algorithms for verifying (i.e., assessing) the computed preferences. We provide details of the implementation of the algorithms (source code has been made available), the various experiments performed to evaluate the algorithms, and an analysis of the results.
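A hedged, brute-force sketch of the exponential enumeration referred to above. The preference-handling rule used here, namely that an attack fails when the attacker is strictly less preferred than its target, is one common convention and an assumption on our part: enumerate strict total orders over the arguments and keep those under which a target set remains conflict-free in the induced defeat relation.

```python
from itertools import permutations

# Hedged brute-force sketch (see lead-in). attacks: set of (attacker, target)
# pairs; an attack fails when the attacker is strictly less preferred than its
# target. We keep preference orders under which the target set contains no
# surviving internal attack (i.e. stays conflict-free).
def consistent_preferences(arguments, attacks, target_set):
    results = []
    for order in permutations(arguments):            # exponential in |arguments|
        rank = {a: i for i, a in enumerate(order)}   # lower index = more preferred
        defeats = {(x, y) for (x, y) in attacks if rank[x] <= rank[y]}
        if not any((x, y) in defeats for x in target_set for y in target_set):
            results.append(order)
    return results

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(consistent_preferences(args, attacks, {"a", "b"}))
# Orders where b is strictly preferred to a, so a's attack on b fails.
```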


Abstract Weighted Based Gradual Semantics in Argumentation Theory

arXiv.org Artificial Intelligence

Weighted gradual semantics assign each argument an acceptability degree representing its strength, computed from factors including background evidence for the argument and taking into account interactions between this argument and others. We introduce four important problems linking gradual semantics and acceptability degrees. First, we reexamine the inverse problem, seeking to identify the argument weights of the argumentation framework which lead to specific final acceptability degrees. Second, we ask whether the function mapping argument weights to acceptability degrees is injective or a homeomorphism onto its image. Third, we ask whether argument weights can be found when preferences over arguments, rather than acceptability degrees, are considered. Fourth, we consider the topology of the space of valid acceptability degrees, asking whether gaps exist in this space. While different gradual semantics have been proposed in the literature, in this paper we identify a large family of weighted gradual semantics, called abstract weighted based gradual semantics, which generalise many of the existing semantics while maintaining desirable properties such as convergence to a unique fixed point. We also show that a sub-family of these semantics, the abstract weighted (Lp, lambda, mu, A)-based gradual semantics, which includes several well-known semantics, solves all four of the aforementioned problems.
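A minimal sketch of one well-known member of this family, the weighted h-categoriser, in which an argument's acceptability degree is its initial weight divided by 1 plus the sum of its attackers' degrees, iterated to the unique fixed point; the graph and weights below are illustrative.

```python
# Minimal sketch of the weighted h-categoriser (see lead-in): each argument's
# degree is its initial weight divided by 1 plus the sum of its attackers'
# degrees, iterated until the values stabilise at the unique fixed point.
def weighted_h_categoriser(weights, attackers, iterations=200):
    deg = dict(weights)
    for _ in range(iterations):
        deg = {a: weights[a] / (1.0 + sum(deg[b] for b in attackers[a]))
               for a in weights}
    return deg

weights = {"a": 1.0, "b": 0.6, "c": 0.8}          # illustrative initial weights
attackers = {"a": {"b"}, "b": {"c"}, "c": set()}  # c attacks b, b attacks a
print({k: round(v, 3) for k, v in weighted_h_categoriser(weights, attackers).items()})
```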


Revisiting MAB based approaches to recursive delegation

arXiv.org Artificial Intelligence

Open multi-agent systems (MAS) are composed of agents under different organisational control, whose internal goals and mental states cannot be observed. In such systems, agents often have differing capabilities, and must rely on each other when pursuing their goals, making task delegation commonplace. This delegation occurs when one agent (the delegator) requests that another (the delegatee) execute a task. A fundamental problem faced by the delegator involves selecting the most appropriate delegatee to whom the task should be delegated, and a significant body of work centred around trust and reputation systems has examined how such a delegation decision should take place [6, 12, 13]. At their heart, trust and reputation systems associate a rating with each potential delegatee, and select whom to delegate a task to based on this rating. Following task execution, the rating is updated based on how well the task was completed. Different systems compute the ratings differently, for example incorporating indirect information from other agents in the system [8, 14], or utilising social and cognitive concepts as part of the computation process [4]. Trust and reputation systems can also differ in the way they select a delegatee, for example by using the rating to weight the likelihood of selection. While trust and reputation systems seek to satisfy many properties, including resistance to different types of attacks by malicious agents [7], they fundamentally balance the exploration of delegatee behaviour with the exploitation of high-quality delegatees.
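A hedged sketch of this exploration/exploitation view using a standard UCB1-style selection rule over candidate delegatees; the rating update and the delegatees' success probabilities are illustrative assumptions, not the model studied in the paper.

```python
import math, random

# Hedged sketch (see lead-in): the delegator keeps a mean success rating per
# delegatee and picks the one maximising a UCB1-style score, balancing
# exploitation of high ratings with exploration of rarely tried delegatees.
true_success = {"d1": 0.9, "d2": 0.6, "d3": 0.3}   # unknown to the delegator
counts = {d: 0 for d in true_success}
ratings = {d: 0.0 for d in true_success}

def choose(t):
    untried = [d for d in counts if counts[d] == 0]
    if untried:
        return untried[0]
    return max(counts, key=lambda d: ratings[d] + math.sqrt(2 * math.log(t) / counts[d]))

for t in range(1, 501):
    d = choose(t)
    outcome = 1.0 if random.random() < true_success[d] else 0.0  # did the task succeed?
    counts[d] += 1
    ratings[d] += (outcome - ratings[d]) / counts[d]             # incremental mean rating
print(counts)  # most delegations should go to d1, the most reliable delegatee
```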


Evaluation of Human-Understandability of Global Model Explanations using Decision Tree

arXiv.org Artificial Intelligence

In explainable artificial intelligence (XAI) research, the predominant focus has been on interpreting models for experts and practitioners. Model-agnostic and local explanation approaches are deemed interpretable and sufficient in many applications. However, in domains like healthcare, where end users are patients without AI or domain expertise, there is an urgent need for model explanations that are more comprehensible and instil trust in the model's operations. We hypothesise that generating model explanations that are narrative, patient-specific and global (holistic of the model) would improve understandability and support decision-making. We test this using a decision tree model to generate both local and global explanations for patients identified as having a high risk of coronary heart disease. These explanations are presented to non-expert users. We find a strong individual preference for a specific type of explanation: the majority of participants prefer global explanations, while a smaller group prefers local explanations. A task-based evaluation of these participants' mental models provides valuable feedback to enhance narrative global explanations. This, in turn, guides the design of health informatics systems that are both trustworthy and actionable.
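A hedged sketch of the two explanation styles on a scikit-learn decision tree; the feature names and data are illustrative, not the study's dataset. export_text yields a global, whole-model explanation, while decision_path recovers the rules applied to a single patient as a local explanation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hedged sketch (see lead-in): illustrative data, not the study's dataset.
X = np.array([[45, 180], [62, 250], [39, 170], [70, 280], [55, 230], [48, 200]])
y = np.array([0, 1, 0, 1, 1, 0])  # 1 = high risk of coronary heart disease
features = ["age", "cholesterol"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Global explanation: the full set of learned rules over the whole model.
print(export_text(clf, feature_names=features))

# Local explanation: the rules applied to one specific patient.
patient = np.array([[58, 240]])
node_indicator = clf.decision_path(patient)
for node in node_indicator.indices:
    if clf.tree_.children_left[node] != -1:  # skip leaf nodes
        f, thr = clf.tree_.feature[node], clf.tree_.threshold[node]
        op = "<=" if patient[0, f] <= thr else ">"
        print(f"{features[f]} = {patient[0, f]} {op} {thr:.1f}")
print("prediction:", "high risk" if clf.predict(patient)[0] == 1 else "low risk")
```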


Inferring Attack Relations for Gradual Semantics

arXiv.org Artificial Intelligence

A gradual semantics takes a weighted argumentation framework as input and outputs a final acceptability degree for each argument, with different semantics performing the computation in different manners. In this work, we consider the problem of attack inference. That is, given a gradual semantics, a set of arguments with associated initial weights, and the final desirable acceptability degrees associated with each argument, we seek to determine whether there is a set of attacks on those arguments such that we can obtain these acceptability degrees. The main contribution of our work is to demonstrate that the associated decision problem, i.e., whether a set of attacks can exist which allows the final acceptability degrees to occur for given initial weights, is NP-complete for the weighted h-categoriser and cardinality-based semantics, and is polynomial for the weighted max-based semantics, even for the complete version of the problem (where all initial weights and final acceptability degrees are known). We then briefly discuss how this decision problem can be modified to find the attacks themselves and conclude by examining the partial problem where not all initial weights or final acceptability degrees may be known.
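A hedged, brute-force sketch of the decision problem for the weighted h-categoriser; this naive exponential search is purely illustrative and is not the source of the complexity results above. It enumerates candidate attack relations and checks whether any reproduces the target acceptability degrees from the given initial weights.

```python
from itertools import combinations, product

# Hedged brute-force sketch (see lead-in): search over all attack relations for
# one under which the weighted h-categoriser maps the initial weights to the
# target degrees. Exponential in the number of possible edges.
def h_cat(weights, attacks, iterations=200):
    deg = dict(weights)
    for _ in range(iterations):
        deg = {a: weights[a] / (1.0 + sum(deg[b] for (b, t) in attacks if t == a))
               for a in weights}
    return deg

def find_attacks(weights, target, tol=1e-3):
    edges = [(b, a) for b, a in product(weights, repeat=2) if b != a]
    for r in range(len(edges) + 1):
        for attacks in combinations(edges, r):
            deg = h_cat(weights, set(attacks))
            if all(abs(deg[a] - target[a]) < tol for a in weights):
                return set(attacks)
    return None  # no attack relation yields the target degrees

weights = {"a": 1.0, "b": 0.5}
target = {"a": 1.0 / 1.5, "b": 0.5}   # consistent with b attacking a
print(find_attacks(weights, target))  # expected: {("b", "a")}
```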


The Inverse Problem for Argumentation Gradual Semantics

arXiv.org Artificial Intelligence

Gradual semantics in abstract argumentation provide each argument with a score reflecting its acceptability, i.e. how "much" it is attacked by other arguments. Many different gradual semantics have been proposed in the literature, each following different principles and producing different argument rankings. A sub-class of such semantics, the so-called weighted semantics, takes, in addition to the graph structure, an initial set of weights over the arguments as input, with these weights affecting the resultant argument ranking. In this work, we consider the inverse problem over such weighted semantics. That is, given an argumentation framework and a desired argument ranking, we ask whether there exist initial weights such that a particular semantics produces the given ranking. The contributions of this paper are: (1) an algorithm to answer this problem, (2) a characterisation of the properties that a gradual semantics must satisfy for the algorithm to operate, and (3) an empirical evaluation of the proposed algorithm.
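A hedged sketch of the inverse problem for the weighted max-based semantics, solved here by naive random search over initial weights rather than by the paper's algorithm; the graph, the desired ranking and the search strategy are illustrative.

```python
import random

# Hedged sketch (see lead-in): naive random search over initial weights for the
# weighted max-based semantics, deg(a) = w(a) / (1 + max degree of a's attackers),
# keeping the first weights whose degrees induce the desired ranking.
def max_based(weights, attackers, iterations=200):
    deg = dict(weights)
    for _ in range(iterations):
        deg = {a: weights[a] /
                  (1.0 + max((deg[b] for b in attackers[a]), default=0.0))
               for a in weights}
    return deg

def invert_ranking(attackers, desired_ranking, samples=10000, seed=0):
    rng = random.Random(seed)
    for _ in range(samples):
        w = {a: rng.random() for a in attackers}
        deg = max_based(w, attackers)
        if sorted(attackers, key=lambda a: -deg[a]) == desired_ranking:
            return {a: round(v, 3) for a, v in w.items()}
    return None  # no sampled weights produced the desired ranking

attackers = {"a": {"b"}, "b": {"c"}, "c": set()}   # c attacks b, b attacks a
print(invert_ranking(attackers, ["a", "c", "b"]))  # weights ranking a above c above b
```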


Norm Identification through Plan Recognition

arXiv.org Artificial Intelligence

Societal rules, as exemplified by norms, aim to provide a degree of behavioural stability to multi-agent societies. Norms regulate a society using the deontic concepts of permissions, obligations and prohibitions to specify what can, must and must not occur in a society. Many implementations of normative systems make various combinations of the following assumptions: that the set of norms is static and defined at design time; that agents joining a society are instantly informed of the complete set of norms; that the set of agents within a society does not change; and that all agents are aware of the existing norms. When any one of these assumptions is dropped, agents need a mechanism to identify the set of norms currently present within a society, or risk unwittingly violating them. In this paper, we develop a norm identification mechanism that uses a combination of parsing-based plan recognition and Hierarchical Task Network (HTN) planning, and which operates by analysing the actions performed by other agents. While our basic mechanism cannot learn in situations where norm violations take place, we describe an extension which is able to operate in the presence of violations.
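A rough, hedged simplification of the underlying intuition, not the paper's parsing-based mechanism: comparing the action sequences a plan library permits for a goal with the sequences actually observed, actions the library allows but that are never observed become candidate prohibitions, while actions occurring in every observed sequence become candidate obligations.

```python
# Hedged simplification (see lead-in): infer candidate norms by comparing what
# the plan library permits with what agents are actually observed doing.
def identify_norms(permitted_plans, observed_plans):
    permitted_actions = set().union(*map(set, permitted_plans))
    observed_actions = set().union(*map(set, observed_plans))
    candidate_prohibitions = permitted_actions - observed_actions
    candidate_obligations = set.intersection(*map(set, observed_plans))
    return candidate_prohibitions, candidate_obligations

# Illustrative plans for the same goal (sequences of named actions).
permitted = [["pick_up", "cross_lawn", "deliver"], ["pick_up", "take_path", "deliver"]]
observed = [["pick_up", "take_path", "deliver"], ["pick_up", "take_path", "deliver"]]
print(identify_norms(permitted, observed))
# ({'cross_lawn'}, {'pick_up', 'take_path', 'deliver'})
```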