This question concerns a theorem I have proved and its relation (or lack thereof) to an existing result. Essentially, I have shown that PAC-learning is undecidable in the Turing sense. The arXiv link to the paper is https://arxiv.org/abs/1808.06324. I have been told that this is provable as a corollary of existing results; specifically, it was hinted to me that the fundamental theorem of statistical machine learning, which relates the VC dimension to PAC-learnability, could be used to prove the undecidability of PAC-learning.
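To make the VC-dimension connection concrete, here is a minimal sketch (hand-written for illustration, not taken from the paper) of the definitions the fundamental theorem rests on: a hypothesis class shatters a point set if it realizes every possible labeling of it, and the VC dimension is the size of the largest shattered set. The brute-force `vc_dimension` helper below is an assumption of this sketch and is only feasible for tiny finite classes.

```python
from itertools import combinations

def shatters(hypotheses, points):
    """True iff the class realizes every labeling of `points`.
    Each hypothesis is modeled as the set of points it labels positive."""
    labelings = {frozenset(h & set(points)) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

def vc_dimension(hypotheses, domain):
    """Largest size of a subset of `domain` shattered by `hypotheses`.
    Brute force over all subsets -- only for tiny finite classes."""
    best = 0
    pts = list(domain)
    for r in range(len(pts) + 1):
        for subset in combinations(pts, r):
            if shatters(hypotheses, subset):
                best = max(best, r)
    return best

# Threshold classifiers on {0,1,2,3}: positive iff x >= t.
domain = {0, 1, 2, 3}
thresholds = [frozenset(x for x in domain if x >= t) for t in range(5)]
print(vc_dimension(thresholds, domain))  # prints 1: any single point is shattered, no pair is
```

The fundamental theorem then says the class is PAC-learnable iff this quantity is finite, which is why a proof of undecidability can, in principle, be routed through the (un)computability of the VC dimension itself.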
Work in the AI field is moving forward very quickly. Today Papers with Code announced a partnership with arXiv: code links are now shown on arXiv articles, and authors can submit code through arXiv, a great addition for avid researchers and practitioners. NeurIPS also announced a cool challenge, the 2020 ML Reproducibility Challenge sponsored by Papers with Code, encouraging everyone who works with ML to participate (enthusiasts included!). If you'd like to learn more, check out their announcement; it sounds pretty neat.
De Toni, Giovanni, Erculiani, Luca, Passerini, Andrea
One of the most challenging goals in designing intelligent systems is empowering them with the ability to synthesize programs from data. Given specific requirements in the form of input/output pairs, the goal is to train a machine learning model to discover a program that satisfies them. A recent class of methods combines combinatorial search procedures with deep learning to learn compositional programs. However, these methods usually generate only toy programs in a domain-specific language that lacks high-level features such as function arguments, which limits their applicability in real-world settings. We extend a state-of-the-art model, AlphaNPI, by learning to generate functions that accept arguments, moving a step closer to real computer programs. Moreover, we investigate employing an approximate version of Monte Carlo Tree Search (A-MCTS) to speed up convergence. We showcase the potential of our approach by learning the Quicksort algorithm, showing that the ability to deal with arguments is crucial for learning and generalization.
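To illustrate why arguments matter for a target like Quicksort, here is a hand-written reference version (an illustration of the target program structure, not output of the learned model): the algorithm is naturally a composition of subroutines that must be invoked with arguments, since each recursive call operates on a different sub-range `(lo, hi)` of the array.

```python
def partition(a, lo, hi):
    """Lomuto partition: place a[hi] at its sorted position in a[lo..hi]."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo=0, hi=None):
    """In-place Quicksort expressed as argument-taking subroutines:
    PARTITION(lo, hi) plus two recursive QUICKSORT calls on sub-ranges."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)   # subroutine call parameterized by (lo, hi)
        quicksort(a, lo, p - 1)    # recursion reaches sub-ranges only via arguments
        quicksort(a, p + 1, hi)
    return a

print(quicksort([3, 6, 1, 8, 2, 9, 4]))  # prints [1, 2, 3, 4, 6, 8, 9]
```

Without argument-passing, a DSL can only express calls to fixed, zero-argument operations, so the per-sub-range recursion above has no direct encoding; this is the expressiveness gap the abstract points at.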