What Do Large Language Models Know? Tacit Knowledge as a Potential Causal-Explanatory Structure
arXiv.org Artificial Intelligence
It is sometimes assumed that Large Language Models (LLMs) know language, or for example that they know that Paris is the capital of France. But what -- if anything -- do LLMs actually know? In this paper, I argue that LLMs can acquire tacit knowledge as defined by Martin Davies (1990). Whereas Davies himself denies that neural networks can acquire tacit knowledge, I demonstrate that certain architectural features of LLMs satisfy the constraints of semantic description, syntactic structure, and causal systematicity. Thus, tacit knowledge may serve as a conceptual framework for describing, explaining, and intervening on LLMs and their behavior.
Apr-17-2025
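The abstract's key claim is that LLMs can satisfy Davies's causal-systematicity constraint: behaviors grouped under one semantic description (e.g. everything the model does with "Paris") must share a common causal factor inside the system. A toy sketch of that idea (my own construction, not the paper's argument) is a linear associator in which a single shared embedding vector mediates several distinct input-output rules, so one intervention on that vector shifts all the related behaviors at once. All names and rules below are illustrative assumptions.

```python
import numpy as np

words = ["Paris", "Berlin", "France", "Germany", "French", "German"]
# One-hot (orthonormal) embeddings keep the demo exact and deterministic.
embed = {w: np.eye(len(words))[i] for i, w in enumerate(words)}

def rule(pairs):
    """Linear associator: maps each input word's vector to its output word's vector."""
    return sum(np.outer(embed[out], embed[inp]) for inp, out in pairs)

# Two distinct "rules" that both consume the one shared Paris vector.
capital_of = rule([("Paris", "France"), ("Berlin", "Germany")])
language_of = rule([("Paris", "French"), ("Berlin", "German")])

def query(relation, city, candidates):
    """Apply a relation matrix to a city vector and read off the best candidate."""
    v = relation @ embed[city]
    return max(candidates, key=lambda w: float(v @ embed[w]))

before = (query(capital_of, "Paris", ["France", "Germany"]),
          query(language_of, "Paris", ["French", "German"]))

# Intervention on the single shared causal factor: every Paris-involving
# behavior shifts together, which is what causal systematicity requires.
embed["Paris"] = embed["Berlin"].copy()

after = (query(capital_of, "Paris", ["France", "Germany"]),
         query(language_of, "Paris", ["French", "German"]))

print(before)  # ('France', 'French')
print(after)   # ('Germany', 'German')
```

The design choice doing the work is that both rules route through the same stored vector rather than through separate lookup tables; a system with one table per sentence would reproduce the behavior but lack the common causal factor, which is roughly the distinction Davies's constraint is meant to capture.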
- Country:
- Asia > Middle East
- UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- Europe
- France (0.25)
- Germany
- Bavaria > Middle Franconia
- Nuremberg (0.04)
- Berlin (0.04)
- Netherlands
- North Brabant > Eindhoven (0.04)
- South Holland > Dordrecht (0.04)
- United Kingdom > England
- Oxfordshire > Oxford (0.14)
- North America > United States
- California > San Diego County
- San Diego (0.04)
- Massachusetts > Middlesex County
- Cambridge (0.14)
- New York (0.04)
- Genre:
- Research Report (1.00)
- Industry:
- Health & Medicine (0.46)