Implicature in Interaction: Understanding Implicature Improves Alignment in Human-LLM Interaction
Asutosh Hota, Jussi P. P. Jokinen
arXiv.org Artificial Intelligence
The rapid advancement of Large Language Models (LLMs) is positioning language at the core of human-computer interaction (HCI). We argue that advancing HCI requires attention to the linguistic foundations of interaction, particularly implicature (meaning conveyed beyond explicit statements through shared context), which is essential for human-AI (HAI) alignment. This study examines LLMs' ability to infer user intent embedded in context-driven prompts and asks whether understanding implicature improves response generation. Results show that larger models approximate human interpretations more closely, while smaller models struggle with implicature inference. Furthermore, implicature-based prompts significantly enhance the perceived relevance and quality of responses across models, with notable gains in smaller models. Overall, 67.6% of participants preferred responses generated from implicature-embedded prompts over literal ones, indicating a clear preference for contextually nuanced communication. Our work contributes to understanding how linguistic theory can be used to address the alignment problem by making HAI interaction more natural and contextually grounded.
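The contrast the abstract draws, between a prompt whose intent is stated literally and one whose intent must be inferred from shared context, can be sketched as follows. This is a minimal illustration in Python; the helper name, the example context, and the utterances are our own assumptions, not materials from the study.

```python
def build_prompt_pair(context: str, literal_request: str,
                      implicature_utterance: str) -> tuple[str, str]:
    """Return (literal, implicature) prompts that share the same context.

    The literal prompt states the user's intent explicitly; the implicature
    prompt conveys the same intent only indirectly, so the model must infer
    it from the shared context. (Hypothetical helper, for illustration only.)
    """
    literal = f"{context}\nUser: {literal_request}"
    implicature = f"{context}\nUser: {implicature_utterance}"
    return literal, implicature


# Example: the same underlying intent ("wake me early"), stated vs. implied.
context = "It is 11 p.m. and the user has an 8 a.m. flight tomorrow."
literal, implicature = build_prompt_pair(
    context,
    "Set an alarm for 6 a.m.",                 # intent stated explicitly
    "I really can't miss that flight.",        # intent left implicit
)
```

In the study's setup, both prompt variants would be sent to the same model and the responses compared for perceived relevance and quality.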
October 30, 2025