Programs as Singularities

Murfet, Daniel, Troiani, Will

arXiv.org Artificial Intelligence

We develop a correspondence between the structure of Turing machines and the structure of singularities of real analytic functions, based on connecting the Ehrhard-Regnier derivative from linear logic with the role of geometry in Watanabe's singular learning theory. The correspondence works by embedding ordinary (discrete) Turing machine codes into a family of noisy codes which form a smooth parameter space. On this parameter space we consider a potential function which has Turing machines as critical points. By connecting the Taylor series expansion of this potential at such a critical point to the combinatorics of error syndromes, we relate the local geometry to the internal structure of the Turing machine. The potential in question is the negative log-likelihood for a statistical model, so that the structure of the Turing machine and its associated singularity is further related to Bayesian inference. Two algorithms that produce the same predictive function can nonetheless correspond to singularities with different geometries, which implies that the Bayesian posterior can discriminate between distinct algorithmic implementations, contrary to a purely functional view of inference. In the context of singular learning theory our results point to a more nuanced understanding of Occam's razor and the meaning of simplicity in inductive inference.
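To make the embedding of discrete codes into a smooth parameter space concrete, here is a minimal sketch, not the paper's actual construction: each discrete code symbol is replaced by a probability distribution over the alphabet (a point on a simplex), so a "noisy code" lives in a smooth family, and a negative log-likelihood potential is smallest when the smoothed code still predicts its own discrete symbols. The functions `one_hot`, `noisy_code`, and `nll` are illustrative names introduced here, not identifiers from the paper.

```python
import numpy as np

def one_hot(i, n):
    """Embed a discrete symbol i as a vertex of the (n-1)-simplex."""
    v = np.zeros(n)
    v[i] = 1.0
    return v

def noisy_code(code, n, eps):
    """Smooth a discrete code by mixing each one-hot symbol with uniform
    noise; eps = 0 recovers the original discrete code, eps > 0 moves it
    into the interior of the smooth parameter space."""
    return [(1 - eps) * one_hot(s, n) + eps * np.ones(n) / n for s in code]

def nll(weights, targets):
    """Negative log-likelihood of the target symbols under the per-position
    distributions -- a toy stand-in for the kind of potential whose critical
    points single out the underlying discrete machine."""
    return -sum(np.log(w[t]) for w, t in zip(weights, targets))

code = [0, 2, 1]                 # a discrete "program" over a 3-symbol alphabet
print(nll(noisy_code(code, 3, 0.0), code))   # exactly 0 at the discrete code
print(nll(noisy_code(code, 3, 0.1), code))   # grows as noise is added
print(nll(noisy_code(code, 3, 0.5), code))
```

The point of the sketch is only that the discrete codes sit inside a continuum on which a smooth potential can be differentiated; the paper's analysis concerns the local geometry of that potential at such points, which this toy example does not attempt to reproduce.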


Geometry of Program Synthesis

Clift, James, Murfet, Daniel, Wallbridge, James

arXiv.org Artificial Intelligence

We re-evaluate universal computation based on the synthesis of Turing machines. This leads to a view of programs as singularities of analytic varieties or, equivalently, as phases of the Bayesian posterior of a synthesis problem. This new point of view reveals unexplored directions of research in program synthesis, of which neural networks are a subset, for example in relation to phase transitions, complexity and generalisation. We also lay the empirical foundations for these new directions.

When we say the code on the description tape of the physical UTM "is" w, what we actually mean, adopting the thermodynamic language, is that the system is in a phase (a local minimum of the free energy) including the microstate c we associate to w. However, when the system is in this phase its microstate is not equal to c but rather undergoes rapid spontaneous transitions between many microstates "near" c. So in any possible physical realisation of a UTM, a program is realised by a phase of the physical system. Does this have any computational significance?