A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem
Luciano Floridi, Yiyang Jia, Fernando Tohmé
arXiv.org Artificial Intelligence
This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluated propositions about a state space of possible worlds W, arguing that LLMs do not solve the symbol grounding problem but circumvent it.
Dec-11-2025
- Country:
  - Asia > Japan
    - Honshū > Kantō
      - Kanagawa Prefecture > Yokohama (0.04)
      - Tokyo Metropolis Prefecture > Tokyo (0.04)
  - Europe
    - France (0.05)
    - Italy > Emilia-Romagna
      - Metropolitan City of Bologna > Bologna (0.04)
    - United Kingdom > England
      - Cambridgeshire > Cambridge (0.04)
      - Oxfordshire > Oxford (0.04)
  - North America
    - Canada (0.14)
    - United States
      - Connecticut > New Haven County
        - New Haven (0.04)
      - Illinois (0.04)
      - Minnesota (0.04)
  - Oceania > Australia
    - Australian Capital Territory > Canberra (0.04)
- Genre:
  - Research Report > New Finding (0.67)