The Impact of Role Design in In-Context Learning for Large Language Models
Hamidreza Rouzegar, Masoud Makrehchi
arXiv.org Artificial Intelligence
In-context learning (ICL) enables Large Language Models (LLMs) to generate predictions directly from prompts, without additional fine-tuning. While prompt engineering has been widely studied, the impact of role design within prompts remains underexplored. This study examines the influence of role configurations in zero-shot and few-shot learning scenarios using GPT-3.5 and GPT-4o from OpenAI, and Llama2-7b and Llama2-13b from Meta. We evaluate the models' performance across several datasets, focusing on tasks such as sentiment analysis, text classification, question answering, and math reasoning. Our findings suggest the potential of role-based prompt structuring to enhance LLM performance.
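The role configurations the abstract refers to can be illustrated with a minimal sketch. The role text, example task, and labels below are illustrative assumptions, not taken from the paper; the message format mirrors the common chat-style LLM API convention of `{"role": ..., "content": ...}` dicts, where a system message carries the role design and prior user/assistant turns carry few-shot demonstrations.

```python
# Sketch of role-based prompt structuring for in-context learning (ICL).
# All role text and examples here are hypothetical placeholders.

def build_prompt(task_input, role=None, examples=None):
    """Assemble chat messages for zero-shot (no examples) or few-shot ICL."""
    messages = []
    if role:
        # Role design: steer the model with a system-level persona.
        messages.append({"role": "system", "content": role})
    for ex_in, ex_out in (examples or []):
        # Few-shot demonstrations appear as prior user/assistant turns.
        messages.append({"role": "user", "content": ex_in})
        messages.append({"role": "assistant", "content": ex_out})
    # The actual query always comes last.
    messages.append({"role": "user", "content": task_input})
    return messages

# Zero-shot with a role:
zero_shot = build_prompt(
    "Classify the sentiment: 'The film was a delight.'",
    role="You are an expert sentiment analyst.",
)

# Few-shot with the same role and one demonstration:
few_shot = build_prompt(
    "Classify the sentiment: 'The film was a delight.'",
    role="You are an expert sentiment analyst.",
    examples=[("Classify the sentiment: 'Terrible pacing.'", "negative")],
)
```

The same `build_prompt` call with `role=None` yields the baseline (no-role) condition, so the role's contribution can be isolated across zero-shot and few-shot settings.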
Sep-30-2025