Implicit In-Context Learning: Evidence from Artificial Language Experiments
–arXiv.org Artificial Intelligence
Humans acquire language through implicit learning, absorbing complex patterns without explicit awareness. While LLMs demonstrate impressive linguistic capabilities, it remains unclear whether they exhibit human-like pattern recognition during in-context learning at the inference level. We adapted three classic artificial language learning experiments spanning morphology, morphosyntax, and syntax to systematically evaluate implicit in-context learning at the inference level in two state-of-the-art OpenAI models: gpt-4o and o3-mini. Our results reveal linguistic domain-specific alignment between models and human behavior: o3-mini aligns more closely with humans in morphology, while both models align in syntax.
Mar-31-2025