AMELIA: A Family of Multi-task End-to-end Language Models for Argumentation
arXiv.org Artificial Intelligence
Argument mining is a subfield of argumentation that aims to automatically extract argumentative structures and their relations from natural language texts. This paper investigates how a single large language model can be leveraged to perform one or several argument mining tasks. Our contributions are twofold. First, we construct a multi-task dataset by surveying 19 well-known argument mining datasets from the literature and converting them into a unified format. Second, we explore various training strategies using Meta AI's Llama-3.1-8B-Instruct model: (1) fine-tuning on individual tasks, (2) fine-tuning jointly on multiple tasks, and (3) merging models fine-tuned separately on individual tasks. Our experiments show that task-specific fine-tuning significantly improves performance on each task. Moreover, multi-task fine-tuning matches single-task performance without degradation, suggesting effective transfer learning across related tasks. Finally, we demonstrate that model merging offers a viable compromise: it yields competitive performance while mitigating the computational costs of full multi-task fine-tuning.
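The third strategy, model merging, is commonly realized by averaging the parameters of several models fine-tuned from the same base checkpoint. The abstract does not specify the merging method used, so the sketch below illustrates plain uniform weight averaging on toy state dicts (represented as name-to-list maps rather than real tensors); the function name `merge_state_dicts` and the toy models are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: merging task-specific fine-tuned models by uniform
# parameter averaging, one common merging strategy. The paper's exact
# method may differ (e.g. weighted averaging or task arithmetic).

def merge_state_dicts(state_dicts):
    """Element-wise average of parameters across models.

    Each state dict maps a parameter name to a flat list of floats;
    all dicts are assumed to share the same keys and shapes (true for
    models fine-tuned from the same base checkpoint).
    """
    if not state_dicts:
        raise ValueError("need at least one state dict")
    merged = {}
    for name in state_dicts[0]:
        columns = zip(*(sd[name] for sd in state_dicts))
        merged[name] = [sum(col) / len(state_dicts) for col in columns]
    return merged

# Toy weights for two single-task models (hypothetical values)
claim_model = {"w": [1.0, 2.0], "b": [0.0]}
stance_model = {"w": [3.0, 4.0], "b": [2.0]}
merged = merge_state_dicts([claim_model, stance_model])
print(merged)  # {'w': [2.0, 3.0], 'b': [1.0]}
```

Uniform averaging keeps merging cost to a single pass over the parameters, which is why it mitigates the compute cost of full multi-task fine-tuning: no additional training steps are needed.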
Aug-26-2025