Hunter, Anthony


Argument Harvesting Using Chatbots

arXiv.org Artificial Intelligence

Much research in computational argumentation assumes that arguments and counterarguments can be obtained in some way. Yet, to improve and apply models of argument, we need methods for acquiring them. Current approaches include argument mining from text, hand coding of arguments by researchers, or generating arguments from knowledge bases. In this paper, we propose a new approach, which we call argument harvesting, that uses a chatbot to enter into a dialogue with a participant and obtain arguments and counterarguments from them. Because it is automated, the chatbot can be used repeatedly across many dialogues, thereby generating a large corpus. We describe the architecture of the chatbot, provide methods for managing a corpus of arguments and counterarguments, and present an evaluation of our approach in a case study concerning the attitudes of women to participation in sport.
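
At its simplest, the harvesting dialogue alternates between showing the participant a statement and asking for an argument for or against it, logging each exchange as an argument-counterargument pair. The following is a minimal sketch of such a loop in Python; the prompts, the fixed dialogue depth, and the console interface are illustrative assumptions, not the paper's actual chatbot:

    # Minimal sketch of an argument-harvesting dialogue loop.
    # The stance, the prompts, and the fixed two-turn depth are
    # illustrative assumptions; they are not the paper's system.
    def harvest_dialogue(stance, turns=2):
        corpus = []                  # (parent_text, reply_text) pairs
        current = stance             # statement the participant reacts to
        for _ in range(turns):
            print(f"Consider: {current}")
            reply = input("What is your argument for or against this? ")
            corpus.append((current, reply))
            current = reply          # next turn asks for a counterargument
        return corpus

    if __name__ == "__main__":
        pairs = harvest_dialogue("Taking part in sport is good for everyone.")
        for parent, reply in pairs:
            print(f"{reply!r} replies to {parent!r}")

Each exchange lands in the corpus as a reply to its parent, so the harvested pairs can later be assembled into an argument graph.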


Towards a Unified Framework for Syntactic Inconsistency Measures

AAAI Conferences

A number of proposals have been made to define inconsistency measures, each with its own rationale. But to date, it is not clear how to delineate the space of options for such measures, nor how to classify them systematically. In this paper, we introduce a general framework for comparing syntactic inconsistency measures. It uses the construction of an inconsistency graph for each knowledgebase. We then introduce abstractions of the inconsistency graph and use the hierarchy of the abstractions to classify a range of inconsistency measures.
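
On the usual reading, an inconsistency graph links formulas of the knowledgebase that participate together in a conflict. A minimal Python sketch over propositional literals, where the only conflicts are complementary pairs such as p and ~p (a deliberate simplification that sidesteps full consistency checking):

    # Sketch: build an inconsistency graph for a knowledgebase of literals.
    # A literal is a string and negation is a leading "~"; two formulas
    # conflict exactly when they are complementary literals. This is a
    # simplifying assumption that avoids full SAT-based consistency checks.
    def conflicting(f, g):
        return f == "~" + g or g == "~" + f

    def inconsistency_graph(kb):
        edges = set()
        for i in range(len(kb)):
            for j in range(i + 1, len(kb)):
                if conflicting(kb[i], kb[j]):
                    edges.add(frozenset({kb[i], kb[j]}))
        return edges

    kb = ["p", "~p", "q", "~q", "r"]
    for edge in inconsistency_graph(kb):
        print(sorted(edge))
    # "r" stays isolated: it takes part in no conflict, which is exactly
    # the kind of structural information that graph abstractions exploit.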



Strategic Sequences of Arguments for Persuasion Using Decision Trees

AAAI Conferences

Persuasion is an activity that involves one party (the persuader) trying to induce another party (the persuadee) to believe or do something. For this, it can be advantageous for the persuader to have a model of the persuadee. Recently, some proposals in the field of computational models of argument have been made for probabilistic models of what the persuadee knows or believes. However, these developments have not systematically harnessed established notions in decision theory for maximizing the outcome of a dialogue. To address this, we present a general framework for representing persuasion dialogues as a decision tree, and for using decision rules for selecting moves. Furthermore, we provide some empirical results showing how some well-known decision rules perform, and make observations about their general behaviour in the context of dialogues where there is uncertainty about the accuracy of the user model.
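
To make the setup concrete: the leaves of the tree score dialogue outcomes, chance nodes capture uncertainty about the user's replies, and a decision rule ranks the system's candidate moves. Below is a minimal sketch comparing two textbook rules, maximin and expected utility, over a hand-built tree; the tree shape and the probabilities are invented for illustration:

    # Sketch: choosing a system move in a persuasion dialogue tree.
    # Leaves score dialogue outcomes; chance nodes model uncertainty about
    # the user's reply. The tree shape and the numbers are invented.
    class Leaf:
        def __init__(self, value):
            self.value = value

    class Chance:                        # the user's possible replies
        def __init__(self, branches):    # list of (probability, subtree)
            self.branches = branches

    def maximin(node):                   # pessimistic: assume worst reply
        if isinstance(node, Leaf):
            return node.value
        return min(maximin(sub) for _, sub in node.branches)

    def expected(node):                  # risk-neutral: average over replies
        if isinstance(node, Leaf):
            return node.value
        return sum(p * expected(sub) for p, sub in node.branches)

    moves = {                            # two candidate opening arguments
        "argument A": Chance([(0.8, Leaf(1.0)), (0.2, Leaf(0.0))]),
        "argument B": Chance([(0.5, Leaf(0.7)), (0.5, Leaf(0.6))]),
    }
    for rule in (maximin, expected):
        best = max(moves, key=lambda m: rule(moves[m]))
        print(rule.__name__, "picks", best)

The two rules disagree here: maximin prefers the safe argument B, while expected utility gambles on argument A, illustrating how the choice of decision rule changes the system's play.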


On Partial Information and Contradictions in Probabilistic Abstract Argumentation

AAAI Conferences

We provide new insights into the area of combining abstract argumentation frameworks with probabilistic reasoning. In particular, we consider the scenario where assessments of the probabilities of a subset of the arguments are given and the probabilities of the remaining arguments have to be derived, taking both the topology of the argumentation framework and principles of probabilistic reasoning into account. We generalize this scenario by also considering inconsistent assessments, i.e., assessments that contradict the topology of the argumentation framework. Building on approaches to inconsistency measurement, we present a general framework to measure the amount of conflict in these assessments and provide a method for inconsistency-tolerant reasoning.
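
One widely used coherence condition in probabilistic abstract argumentation requires P(A) <= 1 - P(B) whenever B attacks A. A minimal sketch that checks this condition over a partial assessment and totals the violations as a crude conflict score; this particular measure is an illustrative assumption, not the one defined in the paper:

    # Sketch: measure how much a partial probability assessment contradicts
    # the topology of an argument graph. For each attack (b, a) we check the
    # coherence condition P(a) <= 1 - P(b); the summed violation is one
    # crude conflict measure, used here purely for illustration.
    def conflict(attacks, p):
        total = 0.0
        for attacker, target in attacks:
            if attacker in p and target in p:
                total += max(0.0, p[target] - (1.0 - p[attacker]))
        return total

    attacks = [("b", "a"), ("c", "b")]
    assessment = {"a": 0.9, "b": 0.8}     # only a subset of arguments rated
    print(conflict(attacks, assessment))  # 0.7: a and b cannot both be so likely

A zero score means no attack constraint among the assessed arguments is violated; a positive score quantifies the contradiction.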


Modelling the Persuadee in Asymmetric Argumentation Dialogues for Persuasion

AAAI Conferences

Computational models of argument could play a valuable role in persuasion technologies for behaviour change (e.g., persuading a user to eat a healthier diet, drink less, take more exercise, or study more conscientiously). For this, the system (the persuader) could present arguments to convince the user (the persuadee). In this paper, we consider asymmetric dialogues in which only the system presents arguments, and the system maintains a model of the user to determine the best choice of arguments to present (including counterarguments to key arguments believed to be held by the user). The focus of the paper is on the user model, including how we update it as the dialogue progresses, and how we use it to make optimal choices for dialogue moves.
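
In its simplest form, such a user model maps each argument to the system's estimate of the user's belief in it; presenting a counterargument then revises the estimate for the argument it attacks, and the revised model guides the next move. A minimal sketch with a proportional discounting update and a greedy choice of target, both of which are illustrative assumptions rather than the paper's method:

    # Sketch: updating a persuadee model during an asymmetric dialogue.
    # `belief` maps each argument to the system's estimate of the user's
    # degree of belief in it. The proportional discounting below is an
    # illustrative assumption, not the update method of the paper.
    def present_counterargument(belief, target, strength=0.5):
        belief[target] *= (1.0 - strength)   # attacking weakens the target
        return belief

    belief = {"exercise is boring": 0.8, "no time to exercise": 0.6}
    present_counterargument(belief, "exercise is boring")

    # Greedy move selection: attack the argument the user believes most.
    next_target = max(belief, key=belief.get)
    print(belief, "-> next target:", next_target)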


Inducing Probability Distributions from Knowledge Bases with (In)dependence Relations

AAAI Conferences

When merging belief sets from different agents, the result is normally a consistent belief set in which the inconsistency between the original sources is not represented. As probability theory is widely used to represent uncertainty, an interesting question therefore is whether it is possible to induce a probability distribution when merging belief sets. To this end, we first propose two approaches to inducing a probability distribution on a set of possible worlds, by extending the principle of indifference on possible worlds. We then study how the (in)dependence relations between atoms can influence the probability distribution. We also propose a set of properties to regulate the merging of belief sets when a probability distribution is output. Furthermore, our merging operators satisfy the well-known Konieczny and Pino-Perez postulates if we use the set of possible worlds that have the maximal induced probability values. Our study shows that taking an induced probability distribution as a merging result can better reflect uncertainty and inconsistency among the original knowledge bases.
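
The core idea can be illustrated directly: each agent spreads probability mass uniformly over the possible worlds satisfying its belief set (the principle of indifference), and the per-agent distributions are combined, so disagreement between sources survives as a spread-out distribution rather than being erased. A minimal sketch, in which the equal weighting of agents is an illustrative choice:

    # Sketch: induce a probability distribution on possible worlds from
    # several belief sets by extending the principle of indifference.
    # Each belief set is a predicate on worlds; every agent spreads mass
    # uniformly over its models, and agents get equal weight (an
    # illustrative choice).
    from itertools import product

    def induce(atoms, belief_sets):
        worlds = [dict(zip(atoms, bits))
                  for bits in product([False, True], repeat=len(atoms))]
        dist = [0.0] * len(worlds)
        for believes in belief_sets:
            models = [i for i, w in enumerate(worlds) if believes(w)]
            for i in models:             # indifference within one belief set
                dist[i] += 1.0 / (len(models) * len(belief_sets))
        return worlds, dist

    # Agent 1 believes p, agent 2 believes not p: jointly inconsistent.
    worlds, dist = induce(["p"], [lambda w: w["p"], lambda w: not w["p"]])
    for w, pr in zip(worlds, dist):
        print(w, pr)                     # each world receives 0.5

Instead of forcing a consistent compromise, the conflict between the two agents is preserved as probability split evenly across the two worlds.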


Incorporating Classical Logic Argumentation into Policy-based Inconsistency Management in Relational Databases

AAAI Conferences

Inconsistency management policies allow a relational database user to express customized ways of managing inconsistency according to their needs. For each functional dependency, a user has a library of applicable policies, each with constraints, requirements, and preferences for its application, which can contradict one another. The problem that we address in this work is that of determining a subset of these policies that is suitable for application w.r.t. the set of constraints and user preferences. We propose a classical logic argumentation-based solution, which is a natural approach given that integrity constraints in databases and data instances are, in general, expressed in first-order logic (FOL). An automatic argumentation-based selection process allows us to retain some of the characteristics of the kind of reasoning that a human would perform in this situation.
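
Abstracting away the classical-logic machinery, the selection step amounts to finding an acceptable set of arguments in a framework whose attacks encode contradictions between policies. A minimal sketch that computes the grounded extension of such an attack graph by iterating to a fixpoint; the policies and attacks are invented, and grounded semantics stands in here for the paper's classical-logic argumentation:

    # Sketch: select mutually compatible inconsistency-management policies
    # by computing the grounded extension of an attack graph over them.
    # The policies and the attacks are invented; grounded semantics stands
    # in for the paper's classical-logic argumentation.
    def grounded(args, attacks):
        attackers = {a: {x for x, y in attacks if y == a} for a in args}
        accepted, rejected = set(), set()
        changed = True
        while changed:
            changed = False
            for a in args:
                if a in accepted or a in rejected:
                    continue
                if attackers[a] <= rejected:      # every attacker defeated
                    accepted.add(a)
                    changed = True
                elif attackers[a] & accepted:     # attacked by an accepted arg
                    rejected.add(a)
                    changed = True
        return accepted

    policies = {"keep-newest", "keep-oldest", "ask-user"}
    attacks = [("keep-newest", "keep-oldest"), ("keep-oldest", "keep-newest")]
    print(grounded(policies, attacks))            # {'ask-user'}

The two mutually contradicting repair policies defeat each other, so neither is retained, while the unattacked policy is: a sceptical outcome in the spirit of the cautious reasoning a human would apply.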