Optimizing Social Media Annotation of HPV Vaccine Skepticism and Misinformation Using Large Language Models: An Experimental Evaluation of In-Context Learning and Fine-Tuning Stance Detection Across Multiple Models
Sun, Luhang, Pendyala, Varsha, Chuang, Yun-Shiuan, Yang, Shanglin, Feldman, Jonathan, Zhao, Andrew, De Choudhury, Munmun, Yang, Sijia, Shah, Dhavan
arXiv.org Artificial Intelligence
This paper leverages large language models (LLMs) to experimentally determine optimal strategies for scaling up social media content annotation for stance detection on HPV vaccine-related tweets. We examine both conventional fine-tuning and emergent in-context learning methods, systematically varying prompt-engineering strategies across widely used LLMs and their variants (e.g., GPT-4, Mistral, and Llama 3). Specifically, we varied prompt template design, shot sampling methods, and shot quantity to detect stance on HPV vaccination. Our findings reveal that 1) in general, in-context learning outperforms fine-tuning in stance detection for HPV vaccine social media content; 2) increasing shot quantity does not necessarily enhance performance across models; and 3) different LLMs and their variants show differing sensitivity to in-context learning conditions. We found that the optimal in-context learning configuration for stance detection on HPV vaccine tweets involves six stratified shots paired with detailed contextual prompts. This study highlights the potential of LLMs and provides a practical approach for applying them to research on social media stance and skepticism detection.
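The best-performing configuration the abstract reports, six stratified shots with a detailed contextual prompt, can be illustrated with a minimal sketch. The example tweets, stance labels ("support"/"oppose"/"neutral"), and function names below are illustrative assumptions, not the authors' exact schema or code:

```python
import random

# Hypothetical labeled pool; the stance labels here are assumed, not taken
# from the paper's codebook.
EXAMPLES = [
    {"tweet": "So glad my daughter got the HPV vaccine.", "stance": "support"},
    {"tweet": "Big pharma pushes HPV shots for profit.", "stance": "oppose"},
    {"tweet": "CDC updated its HPV vaccine schedule today.", "stance": "neutral"},
    {"tweet": "Get your kids vaccinated against HPV!", "stance": "support"},
    {"tweet": "I don't trust the HPV vaccine trials.", "stance": "oppose"},
    {"tweet": "What age is the HPV vaccine recommended?", "stance": "neutral"},
]

def stratified_shots(examples, n_per_label=2, seed=0):
    """Sample an equal number of in-context shots from each stance label."""
    rng = random.Random(seed)
    by_label = {}
    for ex in examples:
        by_label.setdefault(ex["stance"], []).append(ex)
    shots = []
    for _, pool in sorted(by_label.items()):
        shots.extend(rng.sample(pool, min(n_per_label, len(pool))))
    return shots

def build_prompt(shots, target_tweet):
    """Assemble a detailed contextual prompt followed by the demonstrations."""
    header = (
        "You are annotating tweets about the HPV vaccine. Classify each "
        "tweet's stance toward HPV vaccination as support, oppose, or "
        "neutral.\n\n"
    )
    demos = "".join(
        f"Tweet: {s['tweet']}\nStance: {s['stance']}\n\n" for s in shots
    )
    return header + demos + f"Tweet: {target_tweet}\nStance:"

prompt = build_prompt(stratified_shots(EXAMPLES),
                      "Not sure the HPV vaccine is worth the risk.")
```

The resulting `prompt` string would then be sent to the chosen LLM's completion endpoint; stratifying by label keeps the six shots balanced across stance classes rather than drawn at random.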
Nov-21-2024