Ray Solomonoff, a physicist who was one of the founders of the field of artificial intelligence, died on Dec. 7 in Boston. He was 83 and had homes in New Ipswich, N.H., and Cambridge, Mass. The cause was a ruptured brain aneurysm, said his wife, Grace. As a child Mr. Solomonoff developed what would become a lifelong passion for mathematical theorems, and as a teenager he became captivated by the idea of creating machines that could learn and ultimately think. In 1952 he met Marvin Minsky, a cognitive scientist who was also exploring the idea of machine learning, and John McCarthy, a young mathematician.
Algorithmic probability has shown some promise in dealing with the probability problem in the Everett interpretation, since it provides an objective, single-case probability measure. Many find the Everettian cosmology to be overly extravagant, however, and algorithmic probability has also provided improved models of subjective probability and Bayesian reasoning. I attempt here to generalize algorithmic Everettianism to more Bayesian and subjectivist interpretations. I present a general framework for applying generative probability, of which algorithmic probability can be considered a special case. I apply this framework to two commonly vexing thought experiments that have immediate application to quantum probability: the Sleeping Beauty and Replicator experiments.
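The Sleeping Beauty experiment mentioned above can be made concrete with a small simulation (my own illustrative sketch, not code from the paper): a fair coin is flipped; heads yields one awakening, tails two, and we ask what fraction of awakenings coincide with heads.

```python
import random

def sleeping_beauty(trials=100_000, seed=0):
    """Simulate Sleeping Beauty: heads -> one awakening, tails -> two.
    Returns the fraction of awakenings at which the coin shows heads."""
    rng = random.Random(seed)
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(sleeping_beauty())  # close to 1/3, the 'thirder' answer
```

Counting per-awakening frequencies gives roughly 1/3, while counting per-experiment gives 1/2; the choice of reference class is exactly where the interpretive disagreement over quantum and subjective probability enters.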
Solomonoff induction and its use of Kolmogorov complexity have fascinated me; to me it is a way of overcoming the overfitting that happens when we apply ever more complex models to explain the data. I wanted to use it as a way to objectively determine when to switch from a simple model to a more complicated one. An actual implementation of Solomonoff induction is computationally prohibitive, so I wanted to try out a reduced version of the induction to see how well it would work. I am not claiming to have actually implemented Solomonoff's theory; this is an attempt at a practical approximation of it.
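One concrete toy version of such a reduced induction (my own sketch, not the implementation described here) restricts the "programs" to repeating bit patterns of bounded length, assigns each pattern the prior 2^(-length), discards patterns contradicted by the data, and lets the survivors vote on the next bit:

```python
from fractions import Fraction
from itertools import product

def predict_next(observed, max_len=8):
    """Crude Solomonoff-style predictor over a tiny program class:
    each 'program' is a bit pattern of length k <= max_len, generating
    the infinite sequence pattern-repeated-forever, with prior 2^(-k).
    Programs inconsistent with `observed` are discarded; the rest vote
    on the next bit, weighted by their prior.  Returns P(next bit = 1)."""
    weight = {0: Fraction(0), 1: Fraction(0)}
    n = len(observed)
    for k in range(1, max_len + 1):
        prior = Fraction(1, 2 ** k)
        for pattern in product("01", repeat=k):
            gen = [int(pattern[i % k]) for i in range(n + 1)]
            if gen[:n] == observed:          # consistent with the data so far
                weight[gen[n]] += prior
    total = weight[0] + weight[1]
    return float(weight[1] / total) if total else 0.5

# an alternating sequence: the shortest surviving patterns predict 0 next
print(predict_next([0, 1, 0, 1, 0, 1]))
```

The 2^(-k) prior is the "simplicity" weighting at the heart of Solomonoff's scheme: short consistent patterns dominate the vote, which is precisely what suppresses overfitting to complex explanations.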
We reminisce about and discuss applications of algorithmic probability to a wide range of problems in artificial intelligence, philosophy, and technological society. We propose that Solomonoff effectively axiomatized the field of artificial intelligence, thereby establishing it as a rigorous scientific discipline. We also relate this to our own work on incremental machine learning and the philosophy of complexity.
The paper demonstrates that falsifiability is fundamental to learning. We prove the following theorem for statistical learning and sequential prediction: if a theory is falsifiable, then it is learnable -- i.e., it admits a strategy that predicts optimally. An analogous result is shown for universal induction.
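The link between falsifiability and learnability can be illustrated with a toy elimination learner (my own sketch, not the paper's construction): because each hypothesis makes falsifiable predictions, every observation discards the refuted ones, and if the truth lies in the pool the learner makes only finitely many mistakes.

```python
def eliminate_and_predict(hypotheses, sequence):
    """Toy 'falsifiable implies learnable' learner.  `hypotheses` is a
    list of functions mapping a time step t to a predicted bit, ordered
    by prior plausibility (simplest first).  Predict with the best
    surviving hypothesis, then drop any hypothesis the observation
    refutes.  Returns the total number of prediction mistakes."""
    pool = list(hypotheses)
    mistakes = 0
    for t, bit in enumerate(sequence):
        guess = pool[0](t) if pool else 0        # fall back if all falsified
        mistakes += guess != bit
        pool = [h for h in pool if h(t) == bit]  # falsification step
    return mistakes

# hypothetical class: all-zeros, all-ones, and alternating bits
hyps = [lambda t: 0, lambda t: 1, lambda t: t % 2]
print(eliminate_and_predict(hyps, [0, 1, 0, 1, 0, 1]))  # 1 mistake, then locked on
```

The mistake bound is the point: each error falsifies at least the current front-runner, so a learner over a countable falsifiable class errs at most once per hypothesis ranked above the true one.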