Does Thought Require Sensory Grounding? From Pure Thinkers to Large Language Models
Presidential Address delivered under the title "Can a Large Language Model Think?" at the one hundred nineteenth Eastern Division meeting of the American Philosophical Association on January 6, 2023.

Does the capacity to think require the capacity to sense? A lively debate on this question runs throughout the history of philosophy and now animates discussions of artificial intelligence. In favor of a positive answer, Aristotle says, "The soul never thinks without an image." Aquinas says, "There's nothing in the intellect that wasn't previously in the senses." Hume says, "All our simple ideas in their first appearance are derived from simple impressions." With some minimal assumptions, all three of these statements suggest that thinking requires the capacity to sense, or at least requires having had that capacity at some point. Contrasting with these empiricist theses, rationalist philosophers have often denied that thinking requires sensing. Plato holds that we can think about the forms before we have senses and a body. Descartes holds that the pure intellect thinks independently of the senses. Navigating between empiricism and rationalism, Kant discusses the issue extensively ("Thoughts without content are empty"); unsurprisingly, his final views on the matter are complicated. In recent decades, this philosophical debate has become central to artificial intelligence and cognitive science under the heading of the symbol grounding problem. Stevan Harnad and others held that for symbols to have meaning, they must be causally grounded in sensory connections to the environment: to be meaningful, the symbol "RED" must be grounded in seeing red, and the symbol "WATER" must be grounded in a sensory connection to water. If we assume that thinking and meaning go together in AI systems, this amounts to another version of the thesis that thinking requires sensing. In the last few years, discussion of symbol grounding has become especially widespread in the debate over large language models (LLMs) such as the GPT systems. Can large language models think, mean, or understand?
arXiv.org Artificial Intelligence
Aug-18-2024