Large Language Models as symbolic DNA of cultural dynamics

Parham Pourdavood, Michael Jacob, Terrence Deacon

arXiv.org Artificial Intelligence 

Although the recent wave of AI models known as Large Language Models (LLMs) is effectively surpassing the Turing Test, this milestone has been overshadowed by their rapid commercialization and the profound ways they are already reshaping society. The pursuit of Artificial General Intelligence (AGI)--commonly defined as human-level intelligence--is touted as the next major milestone. Yet whether continued progress within the current framework could ever lead to agency and meaning at the scale of AI itself remains an open and contested question. Critics argue that current LLMs operate through algorithmic mimicry, that is, simulating intelligent behavior without embodying the principles behind it (Jaeger, 2024; Jaeger et al., 2024). Artificial Neural Networks--the main framework behind LLMs--operate on behaviorist assumptions: a framework that focuses exclusively on observable input-output patterns while treating internal states as part of a "black box" to be optimized (Brooks, 1991; Sutton & Barto, 2015). This is not to deny the sophisticated engineering behind LLMs, but their architecture is designed to optimize internal states through input-output feedback loops. Even though the logic behind behaviorism is likely one of the key principles supporting an intelligent system, it is probably not sufficient for intelligence and is not what enables agency and intelligence in the first place (Dreyfus, 1992; Searle, 1980). Furthermore, it would be naive to equate the outward behavior of intelligence with acquired intelligence or sentience, since a good simulation can be powerful and convincing. To address such issues, alternative approaches grounded in organismal intelligence are emerging that instead explain the principles behind intelligence through intrinsic, goal-directed models of the body and its relationship to the environment (Deacon, 2012; Jacob, 2023; Jaeger et al., 2024; Levin, 2019; Roli et al., 2022; Varela et al., 1993; Watson, 2024).