Argumentative Debates for Transparent Bias Detection [Technical Report]

Ayoobi, Hamed, Potyka, Nico, Rapberger, Anna, Toni, Francesca

arXiv.org Artificial Intelligence

As the use of AI in society grows, addressing emerging biases is essential to prevent systematic discrimination. Several bias detection methods have been proposed, but, with few exceptions, these tend to ignore transparency. Yet interpretability and explainability are core requirements for algorithmic fairness, even more so than for other algorithmic solutions, given the human-oriented nature of fairness. We present ABIDE (Argumentative BIas detection by DEbate), a novel framework that transparently structures bias detection as a debate, guided by an underlying argument graph as understood in (formal and computational) argumentation. The arguments concern the success chances of groups in local neighbourhoods and the significance of these neighbourhoods. We evaluate ABIDE experimentally and demonstrate its strengths in performance against an argumentative baseline.


DayDreamer at CQs-Gen 2025: Generating Critical Questions through Argument Scheme Completion

Zhou, Wendi, Saadat-Yazdi, Ameer, Kökciyan, Nadin

arXiv.org Artificial Intelligence

Critical questions are essential resources for provoking critical thinking when encountering an argumentative text. We present our system for the Critical Questions Generation (CQs-Gen) Shared Task at ArgMining 2025. Our approach leverages large language models (LLMs) with chain-of-thought prompting to generate critical questions guided by Walton's argumentation schemes. For each input intervention, we conversationally prompt LLMs to instantiate the corresponding argument scheme template, first obtaining structured arguments and then generating relevant critical questions. Following this, we rank the available critical questions by prompting LLMs to select the three most helpful questions given the original intervention text. This combination of structured argumentation theory and step-by-step reasoning enables the generation of contextually relevant and diverse critical questions. Our pipeline achieves competitive performance on the final test set, showing its potential to foster critical thinking over argumentative text and to detect missing or uninformed claims. Code is available at https://git.ecdf.ed.ac.uk/s2236454/DayDreamer-CQs-Gen (DayDreamer).
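The three-step pipeline the abstract describes (instantiate a scheme template, generate candidate questions, rank and keep the top 3) can be sketched as follows. This is a minimal illustration, not the authors' code: the scheme template, prompt wording, and the `llm` callable (a stand-in for any chat-model API) are all assumptions.

```python
# Illustrative Walton-style scheme template (argument from expert opinion).
ARGUMENT_FROM_EXPERT_OPINION = (
    "Source {E} is an expert in domain {D}. "
    "{E} asserts that {A}. Therefore, {A} is plausibly true."
)

def generate_critical_questions(intervention, llm):
    """Sketch of a scheme-guided critical-question pipeline.

    `llm` is any callable mapping a prompt string to a response string.
    """
    # Step 1: ask the model to fill the scheme template from the input text,
    # yielding a structured argument.
    argument = llm(
        "Instantiate this scheme for the text below:\n"
        + ARGUMENT_FROM_EXPERT_OPINION
        + "\n\nText: " + intervention
    )
    # Step 2: generate candidate critical questions for the structured argument.
    candidates = llm("List critical questions challenging: " + argument).split("\n")
    # Step 3: ask the model to rank the candidates; keep the three most helpful.
    ranked = llm(
        "Rank these questions by helpfulness for the original text:\n"
        + "\n".join(candidates)
    ).split("\n")
    return ranked[:3]
```

In practice each step would be a separate turn of a conversational prompt with chain-of-thought instructions; the skeleton above only shows how the scheme template threads the three stages together.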


NLAS-multi: A Multilingual Corpus of Automatically Generated Natural Language Argumentation Schemes

Ruiz-Dolz, Ramon, Taverner, Joaquin, Lawrence, John, Reed, Chris

arXiv.org Artificial Intelligence

Some of the major limitations identified in argument mining, argument generation, and natural language argument analysis relate to the complexity of annotating argumentatively rich data, the limited size of the resulting corpora, and the narrow range of languages and domains in which these data are annotated. To address these limitations, in this paper we present the following contributions: (i) an effective methodology for the automatic generation of natural language arguments across topics and languages, (ii) the largest publicly available corpus of natural language argumentation schemes, and (iii) a set of solid baselines and fine-tuned models for the automatic identification of argumentation schemes.


Consolidating Strategies for Countering Hate Speech Using Persuasive Dialogues

Saha, Sougata, Srihari, Rohini

arXiv.org Artificial Intelligence

Hateful comments are prevalent on social media platforms. Although tools for automatically detecting, flagging, and blocking such false, offensive, and harmful content online have lately matured, such reactive and brute-force methods alone provide short-term and superficial remedies while the perpetrators persist. With the public availability of large language models, which can generate articulate and engaging synthetic content at scale, there are concerns about the rapid growth in the dissemination of such malicious content on the web. There is now a need to focus on deeper, long-term solutions that involve engaging with the human perpetrator behind the source of the content to change their viewpoint, or at least bring down the rhetoric, using persuasive means. To that end, we propose defining and experimenting with controllable strategies for generating counter-arguments to hateful comments in online conversations. We experiment with controlling response generation using features based on (i) argument structure and reasoning-based Walton argument schemes, (ii) counter-argument speech acts, and (iii) human qualities such as Big-5 personality traits and human values. Using automatic and human evaluations, we determine the best combination of features for generating fluent, argumentative, and logically sound arguments for countering hate. We further share the computational models we developed for automatically annotating text with such features, and a silver-standard annotated version of an existing hate speech dialogue corpus.


ArgU: A Controllable Factual Argument Generator

Saha, Sougata, Srihari, Rohini

arXiv.org Artificial Intelligence

Effective argumentation is essential for a purposeful conversation with a satisfactory outcome. For example, persuading someone to reconsider smoking might involve empathetic, well-founded arguments based on facts and expert opinions about its ill effects and the consequences for one's family. However, the automatic generation of high-quality factual arguments can be challenging, and addressing existing controllability issues could make recent advances in computational models for argument generation a viable solution. In this paper, we introduce ArgU: a neural argument generator capable of producing factual arguments from input facts and real-world concepts that can be explicitly controlled for stance and argument structure using control codes based on Walton's argument schemes. Because computational argument generation is a relatively new field that lacks datasets conducive to training, we have compiled and released an annotated corpus of 69,428 arguments spanning six topics and six argument schemes, the largest publicly available corpus for identifying argument schemes; the paper details our annotation and dataset creation framework. We further experiment with a generation strategy that establishes an inference pattern by producing an "argument template" before the actual argument. Our results demonstrate that it is possible to automatically generate diverse arguments exhibiting different inference patterns for the same set of facts by using control codes based on argument schemes and stance.
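The control-code mechanism described above can be sketched as follows: scheme and stance codes are prepended to the serialized input facts, and generation proceeds in two passes, first producing an "argument template" and then realizing it as a concrete argument. This is an illustrative sketch, not the authors' implementation; the code vocabulary, separators, and the `model` callable are assumptions.

```python
# Illustrative control-code vocabularies (the paper uses six Walton schemes).
SCHEMES = {"expert_opinion", "consequences", "analogy"}
STANCES = {"pro", "con"}

def build_input(facts, scheme, stance):
    """Serialize facts behind scheme/stance control codes, as a
    conditioning prefix for a seq2seq generator."""
    assert scheme in SCHEMES and stance in STANCES
    codes = "<scheme:" + scheme + "> <stance:" + stance + ">"
    return codes + " " + " [SEP] ".join(facts)

def generate_argument(facts, scheme, stance, model):
    """Two-pass generation: template first, then the argument itself.

    `model` is any callable mapping an input string to generated text.
    """
    conditioned = build_input(facts, scheme, stance)
    # Pass 1: generate an abstract argument template (the inference pattern).
    template = model("TEMPLATE: " + conditioned)
    # Pass 2: realize the template into a concrete factual argument.
    return model("ARGUMENT: " + template + " [SEP] " + conditioned)
```

Varying only the control codes while keeping `facts` fixed is what lets the same fact set yield arguments with different stances and inference patterns.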


Resolving Open-textured Rules with Templated Interpretive Arguments

Licato, John, Fields, Logan, Marji, Zaid

arXiv.org Artificial Intelligence

Open-textured terms in written rules are typically settled through interpretive argumentation. Ongoing work has attempted to catalogue the schemes used in such interpretive argumentation. But how does the use of these schemes affect the way people actually reason about the proper interpretations of open-textured terms? Using the interpretive argument-eliciting game Aporia as our framework, we carried out an empirical study to answer this question. Unlike previous work, we did not allow participants to argue for interpretations arbitrarily, but only to use arguments that fit a given set of interpretive argument templates. Finally, we analyze the results captured in this new dataset, focusing on practical implications for the development of interpretation-capable artificial reasoners.


Using Argumentation Schemes to Model Legal Reasoning

Bench-Capon, Trevor, Atkinson, Katie

arXiv.org Artificial Intelligence

Reasoning with legal cases, especially as conducted in common law jurisdictions such as the UK and USA, is a form of argumentation much studied in Artificial Intelligence and in computational argumentation. The formal procedure within which it is conducted, and the extensive documentation recording the arguments presented for each side and an assessment of those arguments, make it a fruitful area for study. As described in [35], several types of reasoning may be involved, including the use of rules, the balancing of factors, analogy, and the use of policies to achieve particular purposes. All of these have been modelled in AI and Law, and this work suggests that reasoning with legal cases can be seen as going through a series of stages at which different reasoning styles are appropriate. This view is elaborated in Section 2. One way of modelling a reasoning task [24] is to present it as a set of argumentation schemes [38]. In this paper we use this method to articulate the reasoning required at each of the stages. Although legal reasoning is worthy of study in itself, we believe the insights are also applicable to other, less formal, domains where it is necessary to balance reasons for and against particular options to reach a decision.


Argument Schemes and Dialogue for Explainable Planning

Mahesar, Quratul-ain, Parsons, Simon

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is being increasingly deployed in practical applications. However, there is a major concern about whether AI systems will be trusted by humans. To establish trust in AI systems, users need to understand the reasoning behind their solutions; systems should therefore be able to explain and justify their output. In this paper, we propose an argument scheme-based approach to providing explanations in the domain of AI planning. We present novel argument schemes to create arguments that explain a plan and its key elements, and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Furthermore, we present a novel dialogue system that uses these argument schemes and critical questions to provide interactive dialectical explanations.


Critical Thinking for Language Models

Betz, Gregor

arXiv.org Artificial Intelligence

This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic text corpus of deductively valid arguments and use this artificial argument corpus to train and evaluate GPT-2. Significant transfer learning effects can be observed: training a model on a few simple core schemes allows it to accurately complete the conclusions of different, and more complex, types of arguments too. The language models seem to connect and generalize the core argument schemes correctly. Moreover, we obtain consistent and promising results on the GLUE and SNLI benchmarks. The findings suggest that there might exist a representative sample of paradigmatic instances of good reasoning that suffices to acquire general reasoning skills and that might form the core of a critical thinking curriculum for language models.


Argument Schemes for Explainable Planning

Mahesar, Quratul-ain, Parsons, Simon

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is being increasingly used to develop systems that produce intelligent solutions. However, there is a major concern about whether such systems will be trusted by humans. To establish trust in AI systems, users need to understand the reasoning behind their solutions, and the system should therefore be able to explain and justify its output. In this paper, we use argumentation to provide explanations in the domain of AI planning. We present argument schemes to create arguments that explain a plan and its components, and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Finally, we present some properties of the plan arguments.