falsifiability
The Theory of the Unique Latent Pattern: A Formal Epistemic Framework for Structural Singularity in Complex Systems
This paper introduces the Theory of the Unique Latent Pattern (ULP), a formal epistemic framework that redefines the origin of apparent complexity in dynamic systems. Rather than attributing unpredictability to intrinsic randomness or emergent nonlinearity, ULP asserts that every analyzable system is governed by a structurally unique, deterministic generative mechanism, one that remains hidden not because of ontological indeterminacy but because of epistemic constraints. The theory is formalized using a non-universal generative mapping \( \mathcal{F}_S(P_S, t) \), where each system \( S \) possesses its own latent structure \( P_S \), irreducible and non-replicable across systems. Observed irregularities are modeled as projections of this generative map through observer-limited interfaces, introducing epistemic noise \( \varepsilon_S(t) \) as a measure of incomplete access. By shifting the locus of uncertainty from the system to the observer, ULP reframes chaos as a context-relative failure of representation. We contrast this position with foundational paradigms in chaos theory, complexity science, and statistical learning: whereas those paradigms assume or model shared randomness or collective emergence, ULP maintains that every instance harbors a singular structural identity. Although conceptual, the theory satisfies the criterion of falsifiability in the Popperian sense: it invites empirical challenge by asserting that no two systems governed by distinct latent mechanisms will remain indistinguishable under sufficient resolution. This opens avenues for structurally individuated models in AI, behavioral inference, and epistemic diagnostics.
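The ULP abstract's falsifiable claim, that systems with distinct latent mechanisms become distinguishable under sufficient resolution, can be sketched numerically. The sketch below is illustrative only: the latent structure (three coefficients), the generative map (a superposition of sinusoids), and the Gaussian form of the epistemic noise \( \varepsilon_S(t) \) are all our assumptions, not constructions from the paper.

```python
import math
import random

def make_system(seed):
    """Hypothetical latent structure P_S: a few fixed coefficients
    unique to this system (an illustrative stand-in, not the paper's)."""
    rng = random.Random(seed)
    return [rng.uniform(0.5, 2.0) for _ in range(3)]

def generative_map(P_S, t):
    """Deterministic F_S(P_S, t): here, a simple superposition of sinusoids."""
    a, b, c = P_S
    return a * math.sin(b * t) + c * math.cos(a * t)

def observe(P_S, t, noise, rng):
    """Observer-limited projection: F_S plus epistemic noise eps_S(t),
    modeled here as Gaussian with scale set by the observer's resolution."""
    return generative_map(P_S, t) + rng.gauss(0.0, noise)

def mean_gap(P1, P2, noise, n=200, seed=0):
    """Mean absolute gap between two observed trajectories."""
    rng = random.Random(seed)
    ts = [i * 0.1 for i in range(n)]
    return sum(abs(observe(P1, t, noise, rng) - observe(P2, t, noise, rng))
               for t in ts) / n

P_A, P_B = make_system(1), make_system(2)
# As epistemic noise shrinks (i.e., resolution grows), distinct latent
# structures stop being indistinguishable -- the falsifiable claim.
for noise in (2.0, 0.5, 0.0):
    print(noise, round(mean_gap(P_A, P_B, noise), 3))
```

At zero epistemic noise the gap between distinct systems is strictly positive while a system's gap with itself vanishes, which is the distinguishability criterion the abstract stakes its falsifiability on.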
Degrees of riskiness, falsifiability, and truthlikeness. A neo-Popperian account applicable to probabilistic theories
Vignero, Leander, Wenmackers, Sylvia
In this paper, we take a fresh look at three Popperian concepts: riskiness, falsifiability, and truthlikeness (or verisimilitude) of scientific hypotheses or theories. First, we make explicit the dimensions that underlie the notion of riskiness. Secondly, we examine if and how degrees of falsifiability can be defined, and how they are related to various dimensions of the concept of riskiness as well as the experimental context. Thirdly, we consider the relation of riskiness to (expected degrees of) truthlikeness. Throughout, we pay special attention to probabilistic theories and we offer a tentative, quantitative account of verisimilitude for probabilistic theories. "Modern logic, as I hope is now evident, has the effect of enlarging our abstract imagination, and providing an infinite number of possible hypotheses to be applied in the analysis of any complex fact." A theory is falsifiable if it allows us to deduce predictions that can be compared to evidence, which according ...
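The paper above offers a quantitative account of verisimilitude for probabilistic theories. As intuition for what such a quantification can look like, here is one conventional choice, scoring a theory's forecast distribution by its negative KL divergence from the true distribution; this is our illustrative stand-in, not the authors' measure. It also shows the link to riskiness: a bold, concentrated forecast outscores a noncommittal one when it is near the truth.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions over the same outcomes."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def verisimilitude(theory, truth):
    """Illustrative score: negative KL from truth to the theory's forecast.
    (One possible quantification; not the paper's exact account.)"""
    return -kl_divergence(truth, theory)

truth = [0.7, 0.2, 0.1]
bold  = [0.8, 0.15, 0.05]   # risky: concentrated, and near the truth
vague = [1/3, 1/3, 1/3]     # noncommittal: hard to falsify, low content
print(verisimilitude(bold, truth) > verisimilitude(vague, truth))  # True
```

The vague theory is penalized not for being wrong about any single outcome but for asserting so little, which mirrors the Popperian link between a theory's riskiness and its possible verisimilitude.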
How Karl Popper can make you as good a data scientist as George Soros
The Tao of Data Science column explores how centuries of philosophers have been tackling the key problems of machine learning and data science. Karl Popper is best known for the view that science proceeds by "falsifiability" -- the idea that one cannot prove a hypothesis is true, or even have evidence of truth by induction (yikes!), but one can refute a hypothesis if it is false. Suppose Popper was a modern data scientist and needed to implement a machine learning solution to predict some phenomenon of interest. Given his philosophy of science, how would he have proceeded to implement his model? Popper would implement a causal model.
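The column's conclusion, that a Popperian data scientist would implement a causal model, can be sketched as a refutation exercise: state a hypothesis, derive an interventional prediction from it, and let the data refute it. The toy structural causal model below is entirely our own construction for illustration, not one taken from the column.

```python
import random

def world(do_x, rng):
    """A toy structural causal model (illustrative): Z -> X, Z -> Y, X -> Y.
    do_x overrides X, i.e., an intervention in Pearl's do-calculus sense."""
    z = rng.gauss(0, 1)
    x = do_x  # intervention severs Z's influence on X
    y = 2 * x + z + rng.gauss(0, 0.1)
    return x, y

def mean_y(do_x, n=5000, seed=0):
    """Average outcome Y under the intervention do(X = do_x)."""
    rng = random.Random(seed)
    return sum(world(do_x, rng)[1] for _ in range(n)) / n

# Hypothesis under test: "X has no causal effect on Y."
# It predicts mean_y(0.0) == mean_y(1.0); the simulated data refute it.
effect = mean_y(1.0) - mean_y(0.0)
print(round(effect, 1))  # close to 2, not 0 -> hypothesis falsified
```

The point of the sketch is methodological: the causal model licenses a deductive, refutable prediction about what an intervention would do, which is exactly the kind of claim Popper's account says a scientific model must expose.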
The Social-Emotional Turing Challenge
Jarrold, William (Nuance Communications) | Yeh, Peter Z. (Nuance Communications)
Social-emotional intelligence is an essential part of being a competent human and is thus required for human-level AI. When considering alternatives to the Turing Test it is therefore a capacity that is important to test. We characterize this capacity as affective theory of mind and describe some unique challenges associated with its interpretive or generative nature. Mindful of these challenges we describe a five-step method along with preliminary investigations into its application. We also describe certain characteristics of the approach such as its incremental nature, and countermeasures that make it difficult to game or cheat.
Falsifiable implies Learnable
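The abstract's main result, that falsifying more hypotheses brings predictive performance closer to post hoc explanatory performance, has a familiar analogue in the standard finite-class generalization bound, which we sketch here as intuition; the paper's own quantification differs, so treat the bound below as an assumption-laden stand-in.

```python
import math

def generalization_bound(num_hypotheses, n, delta=0.05):
    """Finite-class uniform-convergence bound (Hoeffding + union bound):
    with probability at least 1 - delta, every hypothesis in the class has
    |test error - train error| at most this quantity."""
    return math.sqrt(math.log(2 * num_hypotheses / delta) / (2 * n))

n = 1000  # number of observations
# A theory that falsifies more hypotheses leaves a smaller surviving
# class, so its best explanatory fit transfers more tightly to prediction.
permissive = generalization_bound(10**6, n)  # falsifies little
strict     = generalization_bound(10, n)     # falsifies almost everything
print(round(permissive, 3), round(strict, 3))
```

The bound shrinks as the surviving hypothesis class shrinks and as data accumulates, which is the learning-theoretic shape of the paper's claim: falsifiability, suitably quantified, buys predictive reliability.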
To what extent are theory-based predictions justified by prior observations? The question is known as the problem of induction and is fundamental to scientific inference. We address the problem of induction from the perspective of learning theory. That is, we consider which theories, and under what assumptions, can be applied to make optimal predictions. Our main result is that the more hypotheses a theory falsifies, suitably quantified, the closer the predictive performance of the best strategy (based on the theory) will be to the theory's post hoc explanatory performance on observed data.