Ito, Takayuki
The Hidden Strength of Disagreement: Unraveling the Consensus-Diversity Tradeoff in Adaptive Multi-Agent Systems
Wu, Zengqing, Ito, Takayuki
Consensus formation is pivotal in multi-agent systems (MAS), balancing collective coherence with individual diversity. Conventional LLM-based MAS primarily rely on explicit coordination, e.g., prompts or voting, risking premature homogenization. We argue that implicit consensus, where agents exchange information yet independently form decisions via in-context learning, can be more effective in dynamic environments that require long-horizon adaptability. By retaining partial diversity, systems can better explore novel strategies and cope with external shocks. We formalize a consensus-diversity tradeoff, showing conditions under which implicit methods outperform explicit ones. Experiments on three scenarios -- Dynamic Disaster Response, Information Spread and Manipulation, and Dynamic Public-Goods Provision -- confirm that partial deviation from group norms boosts exploration, robustness, and performance. We highlight emergent coordination via in-context learning, underscoring the value of preserving diversity for resilient decision-making.
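A minimal Python sketch of the intuition, assuming a toy payoff and shock that are not the paper's actual scenarios: explicit consensus homogenizes the population and cannot recover once the optimum shifts, while implicit consensus retains enough diversity to rediscover it.

    import random

    random.seed(0)

    def payoff(strategy, env_best):
        # Reward matching the environment's currently optimal strategy.
        return 1.0 if strategy == env_best else 0.0

    def simulate(explicit, n_agents=20, n_rounds=40, shock_round=20):
        strategies = [random.randint(0, 4) for _ in range(n_agents)]
        env_best = 0
        total = 0.0
        for t in range(n_rounds):
            if t == shock_round:      # external shock: the optimum moves
                env_best = 3
            rewards = [payoff(s, env_best) for s in strategies]
            total += sum(rewards)
            best = strategies[rewards.index(max(rewards))]
            if explicit:
                # Explicit consensus: all agents adopt one strategy at once.
                strategies = [best] * n_agents
            else:
                # Implicit consensus: agents mostly imitate the best performer
                # but sometimes deviate, so partial diversity survives.
                strategies = [best if random.random() < 0.7
                              else random.randint(0, 4)
                              for _ in range(n_agents)]
        return total / (n_agents * n_rounds)

    print("explicit:", simulate(True))   # locks onto the pre-shock optimum
    print("implicit:", simulate(False))  # deviants rediscover the new optimum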
Theme Aspect Argumentation Model for Handling Fallacies
Arisaka, Ryuta, Nakai, Ryoma, Kawamoto, Yusuke, Ito, Takayuki
From daily discussions to marketing ads to political statements, information manipulation is rife. It is increasingly important that we have the right set of tools to defend ourselves from manipulative rhetoric, or fallacies. Suitable techniques to automatically identify fallacies are being investigated in natural language processing research. However, a fallacy in one context may not be a fallacy in another, so there is also a need to explain how and why something has come to be judged a fallacy. For explainable fallacy identification, we present a novel approach to characterising fallacies through formal constraints, as a viable alternative to more traditional fallacy classifications by informal criteria. To achieve this objective, we introduce a novel context-aware argumentation model, the theme aspect argumentation model, which supports both the modelling of a given argumentation as it is expressed (rhetorical modelling) and a deeper semantic analysis of the rhetorical argumentation model. By identifying fallacies with formal constraints, it becomes possible to tell with formal rigour whether a fallacy lurks in the modelled rhetoric. We present core formal constraints for the theme aspect argumentation model and then further constraints that improve its fallacy identification capability. We show and prove the consequences of these formal constraints, and we analyse the computational complexity of deciding the satisfiability of the constraints.
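The paper defines its constraints formally; purely as an invented illustration of the constraint-based style (the themes, the attack relation, and the flagged fallacy below are all assumptions of this sketch, not the paper's definitions), one might annotate arguments with themes and check a relevance-style constraint on attacks:

    # Hypothetical illustration: arguments annotated with themes; an attack
    # whose source addresses themes disjoint from its target's is flagged as
    # a potential red herring. This constraint is invented for the sketch.
    themes = {
        "a1": {"tax policy"},
        "a2": {"tax policy", "budget"},
        "a3": {"personal character"},
    }
    attacks = [("a2", "a1"), ("a3", "a1")]

    for src, tgt in attacks:
        if themes[src].isdisjoint(themes[tgt]):
            print(f"{src} -> {tgt}: possible fallacy (theme mismatch)")
        else:
            print(f"{src} -> {tgt}: thematically relevant attack")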
Self-Agreement: A Framework for Fine-tuning Language Models to Find Agreement among Diverse Opinions
Ding, Shiyao, Ito, Takayuki
Finding an agreement among diverse opinions is a challenging topic in multi-agent systems. Recently, large language models (LLMs) have shown great potential in addressing this challenge due to their remarkable capabilities in comprehending human opinions and generating human-like text. However, they typically rely on extensive human-annotated data. In this paper, we propose Self-Agreement, a novel framework for fine-tuning LLMs to autonomously find agreement using data generated by the LLM itself. Specifically, our approach employs the generative pre-trained transformer-3 (GPT-3) to generate multiple opinions for each question in a question dataset and to create several agreement candidates among these opinions. A bidirectional encoder representations from transformers (BERT)-based model then evaluates each agreement candidate and selects the one with the highest agreement score. This process yields a dataset of question-opinion-agreement triples, which we use to fine-tune a pre-trained LLM for discovering agreements among diverse opinions. Remarkably, a pre-trained LLM fine-tuned with our Self-Agreement framework achieves performance comparable to GPT-3 with only 1/25 of its parameters, showcasing its ability to identify agreement among various opinions without the need for human-annotated data.
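A sketch of the data-generation loop as described above; generate_with_gpt3 and bert_agreement_score are hypothetical placeholders for the actual model calls, and the prompts are illustrative:

    def generate_with_gpt3(prompt, n):
        """Placeholder: return n sampled completions for the prompt."""
        raise NotImplementedError

    def bert_agreement_score(opinions, candidate):
        """Placeholder: BERT-based score of how well the candidate
        agrees with the given opinions."""
        raise NotImplementedError

    def build_self_agreement_dataset(questions, n_opinions=5, n_candidates=3):
        dataset = []
        for q in questions:
            opinions = generate_with_gpt3(f"Give an opinion on: {q}", n_opinions)
            candidates = generate_with_gpt3(
                f"Find an agreement among: {opinions}", n_candidates)
            # Keep the candidate the scorer judges most agreeable.
            best = max(candidates,
                       key=lambda c: bert_agreement_score(opinions, c))
            dataset.append({"question": q, "opinions": opinions,
                            "agreement": best})
        return dataset  # then used to fine-tune a smaller pre-trained LLM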
Best-Answer Prediction in Q&A Sites Using User Information
Hadfi, Rafik, Moustafa, Ahmed, Yoshino, Kai, Ito, Takayuki
Community Question Answering (CQA) sites have spread and multiplied significantly in recent years. Sites like Reddit, Quora, and Stack Exchange have become popular amongst people looking for answers to diverse questions. One practical way of finding such answers is to automatically predict the best candidate given the existing answers and comments. Many studies have been conducted on answer prediction in CQA, but with limited focus on using the background information of the questioners. We address this limitation with a novel method for predicting the best answers using the questioner's background information alongside other features, such as the textual content or the relationships with other participants. Our answer classification model was trained on the Stack Exchange dataset and validated using the Area Under the Curve (AUC) metric. The experimental results show that the proposed method complements previous methods by highlighting the importance of the relationships between users, particularly through their level of involvement in different communities on Stack Exchange. Furthermore, we find that there is little overlap between user-relation information and the information represented by the shallow text features and the meta-features, such as time differences.
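A minimal sketch of the evaluation setup with synthetic stand-ins for the three feature families; the feature groupings, model choice, and data are assumptions, not the paper's pipeline:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    X = np.hstack([
        rng.normal(size=(n, 5)),  # cols 0-4: shallow text features
        rng.normal(size=(n, 3)),  # cols 5-7: meta-features (e.g., time diffs)
        rng.normal(size=(n, 4)),  # cols 8-11: user-relation features
    ])
    # Synthetic labels driven by one text and one user-relation feature.
    y = (X[:, 0] + 0.8 * X[:, 9] + rng.normal(size=n) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))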
Relational Argumentation Semantics
Arisaka, Ryuta, Ito, Takayuki
In this paper, we propose a fresh perspective on argumentation semantics: viewing them as relational databases. This perspective encapsulates the underlying argumentation graph and allows us to understand argumentation semantics under a single, relational lens, leading to the concept of relational argumentation semantics. This is a direction towards understanding argumentation semantics through a common formal language. We show that many existing semantics proposed for specific purposes, such as explanation semantics, multi-agent semantics, and more typical semantics, can be understood from the relational perspective.
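As a small illustration of the relational reading (using standard Dung-style admissibility, not the paper's more general formalism), an attack relation can be stored as rows of a table and a semantics computed as a query over it:

    from itertools import combinations

    # An abstract argumentation framework as two "relations" (tables):
    args = {"a", "b", "c"}
    attacks = {("a", "b"), ("b", "c")}  # rows of an Attack(source, target) table

    def conflict_free(S):
        return all((x, y) not in attacks for x in S for y in S)

    def defends(S, a):
        # S defends a if every attacker of a is attacked by some member of S.
        return all(any((s, b) in attacks for s in S)
                   for (b, t) in attacks if t == a)

    def admissible(S):
        return conflict_free(S) and all(defends(S, a) for a in S)

    # Enumerate admissible sets -- the "query result" of the semantics.
    for r in range(len(args) + 1):
        for S in combinations(sorted(args), r):
            if admissible(set(S)):
                print(set(S) if S else "{} (empty set)")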
A Robust Model for Trust Evaluation during Interactions between Agents in a Sociable Environment
Liang, Qin, Zhang, Minjie, Ren, Fenghui, Ito, Takayuki
Trust evaluation is an important topic in both research and applications in sociable environments. This paper presents a model for trust evaluation between agents that combines direct trust, indirect trust through neighbouring links, and an agent's reputation in the environment (i.e., the social network) to provide a robust evaluation. Our approach is independent of the social network topology and operates in a decentralised manner without a central controller, so it can be applied across broad domains.
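A minimal sketch of combining the three trust sources; the linear form and the weights are illustrative assumptions, not the paper's actual combination rule:

    def combined_trust(direct, indirect, reputation, w=(0.5, 0.3, 0.2)):
        # Weighted combination of direct trust, indirect trust through
        # neighbouring links, and network reputation (all in [0, 1]).
        wd, wi, wr = w
        return wd * direct + wi * indirect + wr * reputation

    # Example: strong direct experience, weaker neighbour endorsement,
    # average reputation in the network.
    print(combined_trust(direct=0.9, indirect=0.6, reputation=0.5))  # 0.73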
Privacy Information Classification: A Hybrid Approach
Wu, Jiaqi, Li, Weihua, Bai, Quan, Ito, Takayuki, Moustafa, Ahmed
A large amount of information is published to online social networks (OSNs) every day, and end-users may also unconsciously disclose privacy-related information. Identifying privacy-related data and protecting OSN users from privacy leakage are therefore significant tasks. With this motivation, this study proposes and develops a hybrid privacy classification approach to detect and classify privacy information in OSNs. The proposed hybrid approach employs both deep learning models and ontology-based models for privacy-related information extraction. Extensive experiments are conducted to validate the proposed hybrid approach, and the empirical results demonstrate its superiority in protecting online social network users against privacy leakage.
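A toy sketch of a hybrid decision rule in the spirit described above; the keyword list, the threshold, and the OR-combination are assumptions for illustration only:

    # Hybrid rule: a learned classifier's probability is combined with an
    # ontology-style keyword match; flag the post if either component fires.
    PRIVACY_TERMS = {"passport", "ssn", "home address", "phone number"}

    def ontology_match(text):
        t = text.lower()
        return any(term in t for term in PRIVACY_TERMS)

    def classify(text, model_prob):
        # model_prob would come from a trained deep model.
        return model_prob > 0.5 or ontology_match(text)

    print(classify("My passport number is ...", model_prob=0.2))  # True
    print(classify("Nice weather today", model_prob=0.1))         # False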
Abstract Interpretation in Formal Argumentation: with a Galois Connection for Abstract Dialectical Frameworks and May-Must Argumentation (First Report)
Arisaka, Ryuta, Ito, Takayuki
Labelling-based formal argumentation relies on labelling functions that typically assign one of three labels to each argument, indicating acceptance, rejection, or that it is undecided between the two. While a classical labelling-based approach applies globally uniform conditions for how an argument is to be labelled, such conditions can also be determined more locally, per argument. Abstract dialectical frameworks (ADF) form a well-known argumentation formalism in this category, offering greater labelling flexibility. As an argumentation grows in the number of arguments and argument-to-argument relations, however, it becomes increasingly costly to check whether a labelling function satisfies these local conditions, or even whether the conditions are as their specifiers intended. Some compromise is thus required for reasoning about larger argumentations. In this context, the more recently proposed formalism of may-must argumentation (MMA) enforces local but more abstract labelling conditions. In this work, we identify how the two formalisms link to each other. We prove that there is a Galois connection between them, in which ADF is a concretisation of MMA and MMA is an abstraction of ADF. We explore the consequence of abstract interpretation at play in formal argumentation, demonstrating sound reasoning about judgements of acceptability/rejectability in ADF from within MMA. As far as we are aware, little work in the literature incorporates abstract interpretation into formal argumentation, and, in the stated context, this work is the first to demonstrate its use and relevance.
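For reference, the standard definition that the result instantiates (the concrete abstraction and concretisation maps between ADF and MMA are constructed in the paper), written here in LaTeX:

    % Monotone Galois connection between a concrete poset (C, <=_C)
    % and an abstract poset (A, <=_A): monotone maps
    %   alpha : C -> A   (abstraction),   gamma : A -> C   (concretisation)
    % such that, for all c in C and a in A:
    \alpha(c) \le_A a \iff c \le_C \gamma(a)
    % Immediate consequences:
    c \le_C \gamma(\alpha(c)) \qquad \alpha(\gamma(a)) \le_A a

The first consequence is what licenses sound over-approximating reasoning in the abstract domain (here, MMA) about the concrete one (here, ADF).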
Formulating Manipulable Argumentation with Intra-/Inter-Agent Preferences
Arisaka, Ryuta, Hagiwara, Makoto, Ito, Takayuki
From marketing to politics, the exploitation of incomplete information through selective communication of arguments is ubiquitous. In this work, we focus on the development of an argumentation-theoretic model for manipulable multi-agent argumentation, where each agent may transmit deceptive information to others for tactical motives. In particular, we study the characterisation of epistemic states and their roles in deception/honesty detection and (mis)trust-building. To this end, we propose the use of intra-agent preferences to handle deception/honesty detection and inter-agent preferences to determine which agent(s) to believe in more. We show how deception/honesty in an agent's argumentation, if detected, would alter the agent's perceived trustworthiness, and how that may affect judgements as to which arguments should be acceptable.
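A toy sketch of the intended dynamics (the update rule, penalty, and threshold are invented for illustration): detected deception lowers an inter-agent preference, which then gates whose arguments an agent accepts:

    trust = {"agent_b": 0.8, "agent_c": 0.8}  # inter-agent preferences

    def report_deception(agent, penalty=0.4):
        # An intra-agent preference check flagged a deceptive argumentation.
        trust[agent] = max(0.0, trust[agent] - penalty)

    def accepts_arguments_from(agent, threshold=0.5):
        return trust[agent] >= threshold

    report_deception("agent_b")
    print(accepts_arguments_from("agent_b"))  # False: arguments discounted
    print(accepts_arguments_from("agent_c"))  # True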
Automated Negotiating Agents Competition (ANAC)
Jonker, Catholijn M. (TU Delft) | Aydogan, Reyhan (Ozyegin University) | Baarslag, Tim (Centrum voor Wiskunde en Informatica) | Fujita, Katsuhide (Tokyo University of Agriculture and Technology) | Ito, Takayuki (Nagoya Institute of Technology) | Hindriks, Koen (TU Delft)
The annual International Automated Negotiating Agents Competition (ANAC) is used by the automated negotiation research community to benchmark and evaluate its work and to challenge itself. The benchmark problems, the evaluation results, and the protocols and strategies developed are available to the wider research community.