Are LLMs Good Zero-Shot Fallacy Classifiers?
Fengjun Pan, Xiaobao Wu, Zongrui Li, Anh Tuan Luu
Fallacies are defective arguments with faulty reasoning. Detecting and classifying them is a crucial NLP task for preventing misinformation, manipulative claims, and biased decisions. However, existing fallacy classifiers are limited by their need for sufficient labeled training data, which hinders their out-of-distribution (OOD) generalization. In this paper, we focus on leveraging Large Language Models (LLMs) for zero-shot fallacy classification. To elicit fallacy-related knowledge and reasoning abilities from LLMs, we propose diverse single-round and multi-round prompting schemes that apply different task-specific instructions such as extraction, summarization, and Chain-of-Thought reasoning. Through comprehensive experiments on benchmark datasets, we find that LLMs are promising zero-shot fallacy classifiers. In general, LLMs under single-round prompting schemes achieve acceptable zero-shot performance compared to the best full-shot baselines and can outperform them in all OOD inference scenarios and some open-domain tasks. Our novel multi-round prompting schemes bring further improvements, especially for small LLMs. Our analysis further highlights directions for future research on zero-shot fallacy classification. Code and data are available at: https://github.com/panFJCharlotte98/Fallacy_Detection.
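The abstract describes single-round zero-shot prompting for fallacy classification. Below is a minimal sketch of what such a prompt might look like; the label set, prompt wording, model name, and the OpenAI chat-completions backend are illustrative assumptions, not the paper's actual prompting schemes or taxonomy.

```python
# Minimal sketch of a single-round zero-shot fallacy-classification prompt.
# Label set, wording, and backend are assumptions for illustration only.
from openai import OpenAI

# Hypothetical subset of fallacy labels; benchmark datasets define their own taxonomies.
FALLACY_LABELS = [
    "ad hominem", "appeal to authority", "appeal to emotion",
    "false dilemma", "hasty generalization", "slippery slope", "no fallacy",
]

def classify_fallacy_zero_shot(argument: str, model: str = "gpt-4o-mini") -> str:
    """Ask an LLM to pick one fallacy label for the argument, with no training examples."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = (
        "You are an expert in argumentation and logical fallacies.\n"
        "Classify the following argument into exactly one of these categories: "
        f"{', '.join(FALLACY_LABELS)}.\n"
        "Think briefly about the reasoning flaw, then answer with only the label.\n\n"
        f"Argument: {argument}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for classification
    )
    return response.choices[0].message.content.strip().lower()

# Example usage:
# print(classify_fallacy_zero_shot("Everyone I know loves this policy, so it must be right."))
```

The paper's multi-round schemes would extend this with intermediate turns (e.g., extraction or summarization of the argument) before the final classification step.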
arXiv.org Artificial Intelligence
Oct-19-2024