Undecidability
Computational Irreducibility as the Foundation of Agency: A Formal Model Connecting Undecidability to Autonomous Behavior in Complex Systems
The concepts of agency and autonomy, understood as a system's ability to function effectively, pursue objectives, and self-regulate, are central inquiries across biology, cognitive science, artificial intelligence, and philosophy [1, 2]. However, previous formalization attempts, largely focused on logical or probabilistic frameworks, have frequently overlooked the limitations that computational constraints impose on a system's capacity to process information and forecast its environment [6, 7]. This article posits that a deeper understanding of agency can be achieved by examining the fundamental constraints of computation and logic within complex systems. Building upon insights from Gödel's incompleteness theorems [8], Turing's work on decidability and computability [9], and concepts from thermodynamics and information theory, we formulate a novel explanation. Our core thesis is that genuine autonomy necessarily implies unpredictability from an external perspective: for any truly autonomous system, there exist questions about its future behavior that are fundamentally undecidable. This provides a principled distinction between autonomous and non-autonomous systems without appealing to non-physical properties. We contend that agency specifically emerges in systems operating at the threshold of decidability. Here, Gödel-like constraints manifest as the system's inability to internally represent complete …
Systemic Constraints of Undecidability
This paper presents a theory of systemic undecidability, reframing incomputability as a structural property of systems rather than a localized feature of specific functions or problems. We define a notion of causal embedding and prove a closure principle: any subsystem that participates functionally in the computation of an undecidable system inherits its undecidability. This result positions undecidability as a pervasive constraint on prediction, modeling, and epistemic access in both natural and artificial systems. Our framework disarms oracle mimicry and challenges the view that computational limits can be circumvented through architectural innovation. By generalizing classical results into a dynamical systems context, this work extends the logical trajectory of Gödel, Turing, and Chaitin, offering a new perspective on the topology of computability and its relation to the boundaries of scientific knowledge.
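The closure principle described above generalizes the classical diagonal argument. As a reminder of that core mechanism, here is a minimal Python sketch; `would_halt` stands in for a hypothetical total halting decider (an assumption the construction exists to refute), and everything else is concrete:

```python
# A minimal sketch of the classic diagonalization behind Turing's result.
# `would_halt` is a hypothetical oracle -- no such total function can
# exist; the point is that feeding `diagonal` to itself contradicts the
# decider's answer either way.

def make_diagonal(would_halt):
    """Given any claimed halting decider, build its counterexample."""
    def diagonal(program):
        # Do the opposite of what the decider predicts for (program, program).
        if would_halt(program, program):
            while True:        # decider said "halts" -> loop forever
                pass
        return None            # decider said "loops" -> halt immediately
    return diagonal

# Any concrete `would_halt` is necessarily wrong on some input. For a toy
# decider that always answers False ("loops"), the counterexample halts:
toy_decider = lambda prog, inp: False
diagonal = make_diagonal(toy_decider)
diagonal(diagonal)             # halts, refuting the decider's own answer
```

The closure principle then transports exactly this contradiction into any subsystem that functionally participates in the undecidable computation.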
LLMs Will Always Hallucinate, and We Need to Live With This
Sourav Banerjee, Ayushi Agarwal, Saloni Singla
As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically. This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems. We demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs. It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms. Our analysis draws on computational theory and Gödel's First Incompleteness Theorem, together with the undecidability of problems such as the Halting, Emptiness, and Acceptance Problems. We demonstrate that every stage of the LLM process, from training data compilation to fact retrieval, intent classification, and text generation, will have a non-zero probability of producing hallucinations. This work introduces the concept of "Structural Hallucinations" as an intrinsic feature of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.
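The stage-wise claim composes multiplicatively: if each stage fails with some non-zero probability, the chance of a fully hallucination-free pipeline shrinks with every stage. A back-of-the-envelope sketch, with made-up per-stage probabilities and an independence assumption that is ours, not the paper's:

```python
# Toy calculation: probability that at least one pipeline stage
# hallucinates, assuming independent stages (our simplifying assumption)
# and illustrative, made-up per-stage error rates.
from math import prod

stage_error = {
    "retrieval": 0.02,
    "intent_classification": 0.01,
    "generation": 0.05,
}

p_clean = prod(1 - p for p in stage_error.values())   # all stages succeed
p_hallucination = 1 - p_clean                          # at least one fails
print(round(p_hallucination, 4))
```

Since each factor `1 - p` is strictly below 1, adding stages can only drive `p_clean` down, which is the multiplicative shape of the paper's "non-zero probability at every stage" argument.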
From Undecidability of Non-Triviality and Finiteness to Undecidability of Learnability
Machine learning researchers and practitioners steadily enlarge the multitude of successful learning models. They achieve this through in-depth theoretical analyses and empirical heuristics. However, there is no known general-purpose procedure for rigorously evaluating whether newly proposed models indeed successfully learn from data. We show that such a procedure cannot exist. For PAC binary classification, uniform and universal online learning, and exact learning through teacher-learner interactions, learnability is in general undecidable, both in the sense of independence from the axioms of a formal system and in the sense of uncomputability. Our proofs proceed via computable constructions that encode the consistency problem for formal systems and the halting problem for Turing machines into whether certain function classes are trivial/finite or highly complex, which we then relate to whether these classes are learnable via established characterizations of learnability through complexity measures. Our work shows that undecidability appears in the theoretical foundations of artificial intelligence: there is no one-size-fits-all algorithm for deciding whether a machine learning model can be successful. We cannot, in general, automate the process of assessing new learning models.
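The reduction style the abstract describes can be made concrete in miniature: from a step-bounded halting simulator one builds a function class that is trivial iff the machine never halts. In the sketch below, `halts_within(M, s)` is an assumed computable predicate ("does M halt on empty input within s steps?"), stubbed out for the demo; the construction itself is concrete:

```python
# Toy rendering of a halting-to-triviality encoding (not the paper's
# exact construction). `halts_within` is a stand-in for a step-bounded
# Turing machine simulator, which is computable.

def make_class_member(halts_within, machine, T):
    """f_T(x) = 1 iff machine halts within T steps and x <= T."""
    def f_T(x):
        return 1 if halts_within(machine, T) and x <= T else 0
    return f_T

# If `machine` never halts, every f_T is identically zero, so the class
# {f_T : T >= 0} contains a single function (trivial / finite).
# If `machine` halts at some step t, then for every T >= t, f_T is the
# threshold function [x <= T], so the class is infinite.
# Hence a decider for "is this class finite?" would decide halting.

# Demo with two stand-in machines: one halting at step 3, one looping forever.
halts_at_3 = lambda M, s: (M == "halter" and s >= 3)

f2 = make_class_member(halts_at_3, "halter", 2)   # below halting time: all zero
f5 = make_class_member(halts_at_3, "halter", 5)   # threshold at 5
assert [f2(x) for x in range(4)] == [0, 0, 0, 0]
assert [f5(x) for x in range(7)] == [1, 1, 1, 1, 1, 1, 0]

f9 = make_class_member(halts_at_3, "looper", 9)   # never halts: identically zero
assert all(f9(x) == 0 for x in range(20))
```

Combined with characterizations of learnability through complexity measures (e.g., triviality or finiteness of the class), deciding learnability would decide halting.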
On characterizations of learnability with computable learners
We study computable PAC (CPAC) learning as introduced by Agarwal et al. (2020). First, we consider the main open question of finding characterizations of proper and improper CPAC learning. We give a characterization of a closely related notion of strong CPAC learning, and provide a negative answer to the COLT open problem posed by Agarwal et al. (2021) of whether all decidably representable VC classes are improperly CPAC learnable. Second, we consider undecidability of (computable) PAC learnability. We give a simple general argument to exhibit such undecidability, and initiate a study of the arithmetical complexity of learnability. We briefly discuss the relation to the undecidability result of Ben-David et al. (2019) that motivated the work of Agarwal et al.
Rudolph
Recently, the field of knowledge representation has been drawing a lot of inspiration from database theory. In particular, in the area of description logics and ontology languages, interest has shifted from satisfiability checking to query answering, with various query notions adopted from databases, like (unions of) conjunctive queries or different kinds of path queries. Likewise, the finite model semantics is being established as a viable and interesting alternative to the traditional semantics based on unrestricted models. In this paper, we investigate diverse database-inspired reasoning problems for very expressive description logics (all featuring the worrisome trias of inverses, counting, and nominals) which have in common that role paths of unbounded length can be described (in the knowledge base or in the query), leading to a certain non-locality of the reasoning problem. We show that for all the cases considered, undecidability can be established by very similar means. Most notably, we show undecidability of finite entailment of unions of conjunctive queries for a fragment of SHOIQ (the logic underlying the OWL DL ontology language), and undecidability of finite entailment of conjunctive queries for a fragment of SROIQ (the logical basis of the more recent and popular OWL 2 DL standard).
Undecidability of Underfitting in Learning Algorithms
Sonia Sehra, David Flores, George D. Montanez
Using recent machine learning results that present an information-theoretic perspective on underfitting and overfitting, we prove that deciding whether an encodable learning algorithm will always underfit a dataset, even if given unlimited training time, is undecidable. We discuss the importance of this result and potential topics for further research, including information-theoretic and probabilistic strategies for bounding learning algorithm fit.
Decidability of Sample Complexity of PAC Learning in finite setting
In this short note we observe that the sample complexity of PAC learning of various concepts, including learning the maximum (EMX), can be exactly determined when the support of the probability measures considered as models satisfies an a priori bound. This result contrasts with the recently discovered undecidability of EMX within ZFC for finitely supported probabilities (with no a priori bound). Unfortunately, the decision procedure is, at present, at least doubly exponential in the number of points times the uniform bound on the support size.
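The decidability claim rests on the fact that, over a bounded support, the relevant quantities become computable by exhaustive enumeration, at exponential cost. A toy illustration (not the note's actual procedure) for one such quantity, the VC dimension, which governs PAC sample complexity:

```python
# Brute-force VC dimension of a finite hypothesis class over a finite
# domain: exponential in |domain|, echoing why such decision procedures
# are expensive even when they exist.
from itertools import combinations

def vc_dimension(domain, hypotheses):
    """hypotheses: list of functions mapping domain -> {0, 1}."""
    best = 0
    for d in range(1, len(domain) + 1):
        shattered_found = False
        for subset in combinations(domain, d):
            # subset is shattered iff every one of the 2^d labelings occurs
            realized = {tuple(h(x) for x in subset) for h in hypotheses}
            if len(realized) == 2 ** d:
                shattered_found = True
                break
        if not shattered_found:
            return best   # no subset of size d is shattered; stop
        best = d
    return best

# Threshold functions on {0, 1, 2, 3} have VC dimension 1.
thresholds = [lambda x, t=t: 1 if x <= t else 0 for t in range(-1, 4)]
print(vc_dimension([0, 1, 2, 3], thresholds))   # -> 1
```

Once the domain (here, the bounded support) is fixed in advance, everything is a finite search; it is the absence of such a bound that reopens the door to undecidability.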
The politics of artificial intelligence: an interview with Louise Amoore
Krystian Woznicki (KW): 'Rethinking political agency in an AI-driven world' is the topic of the AMBIENT REVOLTS conference in Berlin on 8–10 November. I would therefore like to begin by asking you about the deployment of algorithms at state borders. You have noted that 'in order to learn, to change daily and evolve [they] require precisely the circulations and mobilities that pass through'. This observation is part of your larger argument about how governmentality is less concerned with prohibiting movement than with facilitating it in productive ways. The role of self-learning algorithms would seem to be very significant in this context, since, like capitalism, they also hinge upon movement. When it comes to their thirst for traffic, how do you think about the relationship between self-learning algorithms and capitalism?

Louise Amoore (LA): Yes, I agree that the role of 'self-learning' or semi-supervised algorithms is of the utmost relevance in understanding how movement and circulation matter.
PAC-learning is Undecidable
Sairaam Venkatraman, S. Balasubramanian, R. Raghunatha Sarma
The problem of attempting to learn the mapping between data and labels is the crux of any machine learning task. It is, therefore, of interest to the machine learning community on practical as well as theoretical counts to consider the existence of a test or criterion for deciding the feasibility of attempting to learn. We investigate the existence of such a criterion in the setting of PAC-learning, basing the feasibility solely on whether the mapping to be learnt lends itself to approximation by a given class of hypothesis functions. We show that no such criterion exists, exposing a fundamental limitation in the decidability of learning. In other words, we prove that testing for PAC-learnability is undecidable in the Turing sense. We also briefly discuss some of the probable implications of this result to the current practice of machine learning.