How Large Language Models Need Symbolism

Xiaotie Deng, Hanyu Li

arXiv.org Artificial Intelligence 

Advances in artificial intelligence (AI), particularly large language models (LLMs) [1], have achieved remarkable success. This progress stems from "scaling laws": performance improves with greater computation, data, and model size [2]. LLMs now excel at exams and competitions in mathematics, medicine, law, and coding. Yet this paradigm has a crucial vulnerability: scaling laws are effective only when data is abundant. Human reasoning, which relies on logical operations and abstraction rather than brute-force pattern matching over vast data, proves critical in tackling complex frontier domains, where usable data is often inherently scarce.
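The scaling laws cited in [2] are typically stated as power laws in model and data size. One common illustrative form (a sketch of the general shape, not the exact parameterization of any particular paper) is:

```latex
% L: expected loss; N: parameter count; D: dataset size.
% E is an irreducible loss floor; A, B, alpha, beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Under such a form, loss decreases predictably as N and D grow, but the data term B/D^beta cannot be driven down when D is capped, which is exactly the vulnerability the passage identifies for data-scarce frontier domains.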
