A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem
Luciano Floridi, Yiyang Jia, Fernando Tohmé
arXiv.org Artificial Intelligence
This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluated propositions about a state space of possible worlds W, arguing that LLMs do not solve but rather circumvent the symbol grounding problem.
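As a rough illustration (this sketch is not the paper's own formalism, and the toy worlds and atomic facts below are invented for the example): in a possible-worlds setting, a truth-evaluated proposition can be modelled as a map from the state space W to truth values, equivalently as the subset of W where it holds.

```python
# Hedged sketch, not the paper's categorical framework: a minimal
# possible-worlds model in which a proposition is truth-evaluated
# at each world in a state space W.

# W: a toy state space of possible worlds, each a dict of atomic facts.
W = [
    {"raining": True,  "cold": True},
    {"raining": True,  "cold": False},
    {"raining": False, "cold": False},
]

# A proposition is a map from worlds to truth values.
def prop_raining(world):
    return world["raining"]

# Its extension: the subset of W at which the proposition is true.
extension = [w for w in W if prop_raining(w)]
print(len(extension))  # → 2
```

On this standard reading, "transforming content into a truth-evaluated proposition" amounts to producing such a world-indexed truth function; the paper's point of contention is whether an LLM's route to that function is grounded or merely circumvents grounding.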
Dec-11-2025