category theory


From Euler to Today: Universal Mathematical Fallibility A Large-Scale Computational Analysis of Errors in ArXiv Papers

Rivin, Igor

arXiv.org Artificial Intelligence

We present the results of a large-scale computational analysis of mathematical papers from the ArXiv repository, demonstrating a comprehensive system that not only detects mathematical errors but also produces complete referee reports with journal tier recommendations. Our automated analysis system processed over 37,000 papers across multiple mathematical categories, revealing significant error rates and quality distributions. Remarkably, the system identified errors in papers spanning three centuries of mathematics, including seven works by Leonhard Euler (1707-1783) among just 403 papers analyzed from the History category, as well as errors by Peter Gustav Lejeune Dirichlet (1805-1859) and by contemporary Fields medalists. In Dynamical Systems (math.DS), we observed the highest error rate of 11.4% (2,347 errors in 20,666 papers), while Numerical Analysis (math.NA) showed 9.6% (2,271 errors in 23,761 papers). History and Overview (math.HO) exhibited 13.6% errors in preliminary analysis, including the seven papers by Euler. In contrast, Geometric Topology (math.GT) showed 3.6%, and Category Theory (math.CT) exhibited 6.1% (228 errors in 3,720 papers). Beyond error detection, the system evaluated papers for journal suitability, recommending 0.4% for top generalist journals and 15.5% for top field-specific journals, and categorizing the remainder across specialist venues. These findings demonstrate both the universality of mathematical error across all eras and the feasibility of automated, comprehensive mathematical peer review at scale. The methodology, while applied here to mathematics, is discipline-agnostic and could readily be extended to physics, computer science, and other fields represented in the ArXiv repository.
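The per-category rates quoted above can be checked directly from the error and paper counts given in the abstract. A minimal sanity check (variable names and layout are ours, not the authors'):

```python
# Recompute the error rates from the (errors, papers) counts quoted in the
# abstract; the third value is the percentage reported there.
reported = {
    "math.DS": (2347, 20666, 11.4),
    "math.NA": (2271, 23761, 9.6),
    "math.CT": (228, 3720, 6.1),
}

for category, (errors, papers, quoted_pct) in reported.items():
    rate = 100 * errors / papers
    print(f"{category}: {rate:.1f}% (quoted {quoted_pct}%)")  # all three agree
```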



Macroeconomic Foundation of Monetary Accounting by Diagrams of Categorical Universals

Menéndez, Renée, Winschel, Viktor

arXiv.org Artificial Intelligence

We present a category-theoretical formulation of the Monetary Macroeconomic Accounting Theory (MoMaT) of Menéndez and Winschel [2025]. We take macroeconomic (national) accounting systems to be composed from microeconomic double-entry systems with real and monetary units of account. Category theory is the compositional grammar and module system of mathematics, which we use to lift micro accounting consistency to the macro level. The main function of money in MoMaT is the repayment of loans, not the exchange of goods: money bridges the desynchronisation of producers' input and output payments. Accordingly, temporal accounting consistency is established at the macroeconomic level. We show that the accounting for macroeconomies organised by a division of labor can be consistent and stable, a prerequisite for the risk and GDP sharing of societies. We exemplify the theory with five sectoral agents: Labor and Resource owners, a Company as the productive sector, a Capitalist for profits, and a Bank as the financial sector providing loans to synchronise the micro and macro levels of an economy. The dynamics are described by eight sectoral macroeconomic bookings in each period, and numerical simulations demonstrate stable convergence of the MoMaT. The categorical program implements a consistent evolution of hierarchical loan repayment contracts by an endofunctor. The universal construction of a limit verifies all constraints as the sectoral investment and learning function at the macroeconomic level. The dual colimit computes the aggregated information at the macro level, as is usual in the mathematics of transitions from local to global structures. We use visual diagrams to make complex economic relationships intuitive. This paper is meant to map economic concepts to categorical ones, to enable interdisciplinary collaboration on digital twins of monetary accounting systems.
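The abstract gives no code, but the idea that micro double-entry consistency lifts to macro consistency can be illustrated with a toy ledger. The sector names follow the abstract; the data structure, the bookings, and the amounts are illustrative assumptions of ours, not the MoMaT implementation:

```python
# Toy sketch (not the authors' categorical program): every booking is a
# double entry that debits one sectoral account and credits another, so each
# micro transaction balances; summing all balances shows the macro ledger
# balances as well -- micro consistency lifts to the macro level.
from collections import defaultdict

def book(ledger, debit, credit, amount):
    ledger[debit] -= amount
    ledger[credit] += amount

ledger = defaultdict(float)
book(ledger, "Bank", "Company", 100.0)     # loan issued to the productive sector
book(ledger, "Company", "Labor", 60.0)     # wages
book(ledger, "Company", "Resource", 30.0)  # resource payments
book(ledger, "Labor", "Company", 50.0)     # purchases of goods
book(ledger, "Company", "Bank", 40.0)      # partial loan repayment

assert abs(sum(ledger.values())) < 1e-9    # macro consistency by construction
print(dict(ledger))
```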


Towards a unified framework for programming paradigms: A systematic review of classification formalisms and methodological foundations

Vandeloise, Mikel

arXiv.org Artificial Intelligence

The rise of multi-paradigm languages challenges traditional classification methods, leading to practical software engineering issues such as interoperability defects. This systematic literature review (SLR) maps the formal foundations of programming paradigms. Our objective is twofold: (1) to assess the state of the art of classification formalisms and their limitations, and (2) to identify the conceptual primitives and mathematical frameworks for a more powerful, reconstructive approach. Based on a synthesis of 74 primary studies, we find that existing taxonomies lack conceptual granularity and a unified formal basis, and struggle with hybrid languages. In response, our analysis reveals a strong convergence toward a compositional reconstruction of paradigms. This approach identifies a minimal set of orthogonal, atomic primitives and leverages mathematical frameworks, predominantly type theory, category theory, and Unifying Theories of Programming (UTP), to formally guarantee their compositional properties. We conclude that the literature reflects a significant intellectual shift away from classification and towards these promising formal, reconstructive frameworks. This review provides a map of this evolution and proposes a research agenda for their unification.


Alpay Algebra V: Multi-Layered Semantic Games and Transfinite Fixed-Point Simulation

Kilictas, Bugra, Alpay, Faruk

arXiv.org Artificial Intelligence

This paper extends the self-referential framework of Alpay Algebra into a multi-layered semantic game architecture where transfinite fixed-point convergence encompasses hierarchical sub-games at each iteration level. Building upon Alpay Algebra IV's empathetic embedding concept, we introduce a nested game-theoretic structure where the alignment process between AI systems and documents becomes a meta-game containing embedded decision problems. We formalize this through a composite operator $\phi(\cdot, \gamma(\cdot))$ where $\phi$ drives the main semantic convergence while $\gamma$ resolves local sub-games. The resulting framework demonstrates that game-theoretic reasoning emerges naturally from fixed-point iteration rather than being imposed externally. We prove a Game Theorem establishing existence and uniqueness of semantic equilibria under realistic cognitive simulation assumptions. Our verification suite includes adaptations of Banach's fixed-point theorem to transfinite contexts, a novel $\phi$-topology based on the Kozlov-Maz'ya-Rossmann formula for handling semantic singularities, and categorical consistency tests via the Yoneda lemma. The paper itself functions as a semantic artifact designed to propagate its fixed-point patterns in AI embedding spaces -- a deliberate instantiation of the "semantic virus" concept it theorizes. All results are grounded in category theory, information theory, and realistic AI cognition models, ensuring practical applicability beyond pure mathematical abstraction.
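A minimal numerical sketch of the composite iteration $x \mapsto \phi(x, \gamma(x))$: the $\phi$ and $\gamma$ below are stand-in contractions chosen for illustration, not the operators defined in the paper, and the iteration is ordinary rather than transfinite:

```python
# Toy fixed-point iteration x_{n+1} = phi(x_n, gamma(x_n)); phi and gamma are
# illustrative contractions, so Banach's fixed-point theorem guarantees
# convergence to a unique fixed point.

def gamma(x):
    """Stand-in 'sub-game' resolver: a local contraction."""
    return 0.5 * x + 0.5

def phi(x, g):
    """Stand-in main semantic operator, contracting in x for fixed g."""
    return 0.4 * x + 0.3 * g

x, tol = 0.0, 1e-12
for n in range(1000):
    x_next = phi(x, gamma(x))
    if abs(x_next - x) < tol:
        break
    x = x_next

print(n, x)  # the unique fixed point here is x* = 1/3
```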


The Gauss-Markov Adjunction: Categorical Semantics of Residuals in Supervised Learning

Kamiura, Moto

arXiv.org Machine Learning

Enhancing the intelligibility and interpretability of machine learning is a crucial task in responding to the demand for Explicability as an AI principle, and in promoting the better social implementation of AI. The aim of our research is to contribute to this improvement by reformulating machine learning models through the lens of category theory, thereby developing a semantic framework for structuring and understanding AI systems. Our categorical modeling in this paper clarifies and formalizes the structural interplay between residuals and parameters in supervised learning. The present paper focuses on the multiple linear regression model, which represents the most basic form of supervised learning. By defining two concrete categories corresponding to parameters and data, along with an adjoint pair of functors between them, we introduce our categorical formulation of supervised learning. We show that the essential structure of this framework is captured by what we call the Gauss-Markov Adjunction. Within this setting, the dual flow of information can be explicitly described as a correspondence between variations in parameters and residuals. The ordinary least squares estimator for the parameters and the minimum residual are related via the preservation of limits by the right adjoint functor. Furthermore, we position this formulation as an instance of extended denotational semantics for supervised learning, and propose applying a semantic perspective developed in theoretical computer science as a formal foundation for Explicability in AI.
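The objects the abstract relates, the ordinary least squares parameter estimate and the corresponding minimum residual, are standard; a minimal numerical sketch with numpy (the categorical structure itself is not modelled here):

```python
import numpy as np

# Ordinary least squares on a toy multiple linear regression problem,
# showing the parameter estimate and the minimum residual it corresponds to.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))               # data (design matrix)
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimator
residual = y - X @ beta_hat                       # minimum residual

# The residual is orthogonal to the column space of X (the normal equations),
# the fact underlying the Gauss-Markov setting.
print(beta_hat)
print(np.allclose(X.T @ residual, 0.0, atol=1e-8))
```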


Types of Relations: Defining Analogies with Category Theory

Ott, Claire, Jäkel, Frank

arXiv.org Artificial Intelligence

In order to behave intelligently, both humans and machines have to represent their knowledge in a way that is adequate for how it is used. Humans often use analogies to transfer their knowledge to new domains, or to help others with this transfer via explanations. Hence, an important question is: What representation can be used to construct, find, and evaluate analogies? In this paper, we study the features of a domain that are important for constructing analogies. We do so by formalizing knowledge domains as categories. We use the well-known example of the analogy between the solar system and the hydrogen atom to demonstrate how to construct domain categories. We also show how functors, pullbacks, and pushouts can be used to define an analogy, describe its core, and build a corresponding blend of the underlying domains.
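The solar-system/hydrogen-atom example can be sketched as a structure-preserving map between two small relational structures. The encoding below is a toy illustration of ours, not the domain categories constructed in the paper:

```python
# Each domain as a set of objects plus typed relations (arrows), and the
# analogy as a mapping of objects that preserves every arrow -- a toy stand-in
# for the object part of a functor between domain categories.
solar = {
    "objects": {"sun", "planet"},
    "arrows": {("attracts", "sun", "planet"), ("revolves_around", "planet", "sun")},
}
atom = {
    "objects": {"nucleus", "electron"},
    "arrows": {("attracts", "nucleus", "electron"), ("revolves_around", "electron", "nucleus")},
}

analogy = {"sun": "nucleus", "planet": "electron"}

def preserves_arrows(src, tgt, mapping):
    return all((name, mapping[a], mapping[b]) in tgt["arrows"]
               for (name, a, b) in src["arrows"])

print(preserves_arrows(solar, atom, analogy))  # True: the mapping respects structure
```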


Accelerating Machine Learning Systems via Category Theory: Applications to Spherical Attention for Gene Regulatory Networks

Abbott, Vincent, Kamiya, Kotaro, Glowacki, Gerard, Atsumi, Yu, Zardini, Gioele, Maruyama, Yoshihiro

arXiv.org Artificial Intelligence

How do we enable artificial intelligence models to improve themselves? This is central to exponentially improving generalized artificial intelligence models, which can improve their own architecture to handle new problem domains in an efficient manner that leverages the latest hardware. However, current automated compilation methods are poor, and efficient algorithms require years of human development. In this paper, we use neural circuit diagrams, based in category theory, to prove a general theorem related to deep learning algorithms, guide the development of a novel attention algorithm tailored to the domain of gene regulatory networks, and produce a corresponding efficient kernel. The algorithm we propose, spherical attention, shows that neural circuit diagrams enable a principled and systematic method for reasoning about deep learning architectures and providing high-performance code. By replacing SoftMax with an $L^2$ norm as suggested by diagrams, it overcomes the special function unit bottleneck of standard attention while retaining the streaming property essential to high performance. Our diagrammatically derived FlashSign kernel achieves performance comparable to the state-of-the-art, fine-tuned FlashAttention algorithm on an A100, and $3.6\times$ the performance of PyTorch. Overall, this investigation shows neural circuit diagrams' suitability as a high-level framework for the automated development of efficient, novel artificial intelligence architectures.
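The abstract says that SoftMax is replaced by an $L^2$ norm; the exact FlashSign formulation is not given here, so the sketch below simply contrasts standard softmax attention with a row-wise $L^2$ normalisation of the scores. It is our reading of that sentence, not the authors' kernel:

```python
import numpy as np

def softmax_attention(Q, K, V):
    # standard scaled dot-product attention
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def l2_attention(Q, K, V, eps=1e-8):
    # illustrative variant: normalise each score row by its L2 norm instead
    # of applying softmax (our reading of "replacing SoftMax with an L2 norm")
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = scores / (np.linalg.norm(scores, axis=-1, keepdims=True) + eps)
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(softmax_attention(Q, K, V).shape, l2_attention(Q, K, V).shape)  # (4, 8) (4, 8)
```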


Developing a Foundation of Vector Symbolic Architectures Using Category Theory

Shaw, Nolan P, Furlong, P Michael, Anderson, Britt, Orchard, Jeff

arXiv.org Artificial Intelligence

At the risk of overstating the case, connectionist approaches to machine learning, i.e., neural networks, are enjoying a small vogue right now. However, these methods require large volumes of data and produce models that are uninterpretable to humans. An alternative framework that is compatible with neural networks and gradient-based learning, but explicitly models compositionality, is Vector Symbolic Architectures (VSAs). VSAs are a family of algebras on high-dimensional vector representations. They arose in cognitive science from the need to unify neural processing and the kind of symbolic reasoning that humans perform. While machine learning methods have benefited from category theoretical analyses, VSAs have not yet received similar treatment. In this paper, we present a first attempt at applying category theory to VSAs. Specifically, we conduct a brief literature survey demonstrating the lack of intersection between these two topics, provide a list of desiderata for VSAs, and propose that VSAs may be understood as a (division) rig in a category enriched over a monoid in Met (the category of Lawvere metric spaces). This final contribution suggests that VSAs may be generalised beyond current implementations. It is our hope that grounding VSAs in category theory will lead to more rigorous connections with other research, both within and beyond learning and cognition.
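VSAs come in several flavours; the sketch below uses Holographic Reduced Representations (binding by circular convolution, bundling by addition) purely to illustrate the kind of vector algebra the abstract refers to, not the enriched-categorical construction proposed there:

```python
import numpy as np

# Toy HRR-style VSA: bind role and filler by circular convolution, then
# approximately recover the filler by correlating with the role's involution.
def bind(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    a_inv = np.concatenate(([a[0]], a[:0:-1]))  # approximate inverse of a
    return bind(c, a_inv)

d = 1024
rng = np.random.default_rng(0)
role, filler, distractor = rng.normal(0, 1 / np.sqrt(d), size=(3, d))

bound = bind(role, filler)
recovered = unbind(bound, role)

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# The decoded vector is far more similar to the original filler than to an
# unrelated vector, the basic cleanup property VSAs rely on.
print(round(cos(recovered, filler), 2), round(cos(recovered, distractor), 2))
```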


Neural Network Symmetrisation in Concrete Settings

Cornish, Rob

arXiv.org Artificial Intelligence

Cornish (2024) recently gave a general theory of neural network symmetrisation in the abstract context of Markov categories. We give a high-level overview of these results, and their concrete implications for the symmetrisation of deterministic functions and of Markov kernels.