How Large Language Models Need Symbolism
arXiv.org Artificial Intelligence
Advances in artificial intelligence (AI), particularly large language models (LLMs) [1], have achieved remarkable success. This progress stems from "scaling laws": performance improves with greater computation, data, and model size [2]. LLMs now excel at mathematical, medical, legal, and coding exams and competitions. Yet this paradigm has a crucial vulnerability: scaling laws hold only when data is abundant. Human reasoning, which relies on logical operations and abstraction rather than brute-force pattern matching over vast data, proves critical in complex frontier domains, where usable data is often inherently scarce.
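The "scaling laws" cited as [2] are usually stated as empirical power laws; a minimal sketch of the commonly used form (the exact functional form and constants are an assumption here, not given in the abstract):

```latex
% Power-law scaling of cross-entropy loss L with model size N,
% dataset size D, and compute C. N_c, D_c, C_c and the exponents
% \alpha_N, \alpha_D, \alpha_C are empirical fits to training runs.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

The data-term exponent is the crux of the vulnerability the abstract describes: when usable data D stops growing, the corresponding loss term stops improving no matter how much compute is applied.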
Sep-29-2025