Detailed balance in large language model-driven agents
Zhuo-Yang Song, Qing-Hong Cao, Ming-xing Luo, Hua Xing Zhu
arXiv.org Artificial Intelligence
Large language model (LLM)-driven agents are emerging as a powerful new paradigm for solving complex problems. Despite the empirical success of these practices, a theoretical framework to understand and unify their macroscopic dynamics remains lacking. This Letter proposes a method based on the least action principle to estimate the underlying generative directionality of LLMs embedded within agents. By experimentally measuring the transition probabilities between LLM-generated states, we statistically discover a detailed balance in LLM-generated transitions, indicating that LLM generation may not be achieved by generally learning rule sets and strategies, but rather by implicitly learning a class of underlying potential functions that may transcend different LLM architectures and prompt templates. To our knowledge, this is the first discovery of a macroscopic physical law in LLM generative dynamics that does not depend on specific model details. This work is an attempt to establish a macroscopic dynamics theory of complex AI systems, aiming to elevate the study of AI agents from a collection of engineering practices to a science built on effective measurements that are predictable and quantifiable.
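The detailed-balance test described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the authors' actual procedure: it builds a toy Metropolis-style transition matrix from a hypothetical potential `V` (any such chain is reversible by construction), estimates the stationary distribution, and checks that the probability flux `pi_i * P[i, j]` is symmetric, which is the detailed-balance condition. In the paper's setting, `P` would instead be an empirically measured transition matrix between LLM-generated states.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of a stochastic matrix P:
    the left eigenvector with eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    i = np.argmin(np.abs(vals - 1.0))
    pi = np.abs(np.real(vecs[:, i]))
    return pi / pi.sum()

def detailed_balance_residual(P):
    """Max |pi_i P_ij - pi_j P_ji| over all state pairs (i, j).
    Zero (up to noise) iff the chain satisfies detailed balance."""
    pi = stationary_distribution(P)
    flux = pi[:, None] * P          # flux[i, j] = pi_i * P_ij
    return float(np.max(np.abs(flux - flux.T)))

# Toy reversible chain: uniform proposals plus a Metropolis acceptance
# step min(1, exp(V_i - V_j)), so pi_i ∝ exp(-V_i). V is illustrative.
rng = np.random.default_rng(0)
V = rng.normal(size=5)              # hypothetical potential over 5 states
n = len(V)
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P[i, j] = min(1.0, np.exp(V[i] - V[j])) / (n - 1)
    P[i, i] = 1.0 - P[i].sum()      # remaining mass stays on state i

print(detailed_balance_residual(P))  # ≈ 0 for a reversible chain
```

Applied to measured transition frequencies, a residual consistent with sampling noise would support the reversibility claim, and the log-ratios `log(P[i, j] / P[j, i])` would then estimate potential differences `V_i - V_j`.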
Dec-12-2025