Machine Learning authors/titles Oct 2021 – cs.LG – arXiv


Journal-ref: International Cross-Domain Conference for Machine Learning and Knowledge Extraction 2021 Aug 17 (pp. 293-308). Springer, Cham.

Various Convolutions for model compression and acceleration


We are living in the era of deep learning. Deep learning now tackles a huge range of problems in Computer Vision, Natural Language Processing (NLP), Audio Processing, Speech Recognition, Information Retrieval, and more. To solve such diverse and complex tasks accurately…
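A widely used convolution variant in this compression-and-acceleration family is the depthwise separable convolution (popularized by MobileNet). A minimal parameter-count sketch, with illustrative layer sizes of our own choosing rather than anything from the article:

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Function names and sizes are illustrative, not from the article.

def standard_conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias terms ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 128, 256, 3
    std = standard_conv_params(c_in, c_out, k)
    sep = depthwise_separable_params(c_in, c_out, k)
    print(std, sep, round(std / sep, 1))  # 294912 33920 8.7
```

For these sizes the separable factorization uses roughly 8.7x fewer weights than the standard convolution, which is where much of the compression and speedup comes from.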

r/MachineLearning - [Research] Help relating to a theorem in machine learning


This is about a theorem that I have proved and its relation (or not) to an existing result. Essentially, I have shown that PAC-learning is undecidable in the Turing sense. The arXiv link to the paper is […]. I am told that this is provable as a corollary of existing results: it was hinted that the fundamental theorem of statistical machine learning, which relates the VC dimension and PAC-learnability, could be used to prove the undecidability of PAC-learning.
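For context, the fundamental theorem alluded to can be stated (in one common form) as follows:

```latex
% Fundamental theorem of statistical learning (one common statement):
% a hypothesis class is PAC-learnable iff its VC dimension is finite.
\[
\mathcal{H}\ \text{is PAC-learnable}
\iff
d := \mathrm{VCdim}(\mathcal{H}) < \infty ,
\]
% with sample complexity, in the realizable case,
\[
m_{\mathcal{H}}(\varepsilon,\delta)
= \Theta\!\left(\frac{d + \log(1/\delta)}{\varepsilon}\right).
\]
```

The suggested route, then, is to reduce deciding PAC-learnability to deciding finiteness of the VC dimension.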

Everyone Can Understand Machine Learning… and More!


If you have trouble reading this email, see it in a web browser. Work in the AI field is moving forward very quickly. Today Papers with Code announced their partnership with arXiv: code links are now shown on arXiv articles, and authors can submit code through arXiv, making it a great addition for avid researchers and practitioners. NeurIPS also announced a cool challenge, the 2020 ML Reproducibility Challenge sponsored by Papers with Code, encouraging people who work with ML to participate (enthusiasts included!). If you'd like to learn more, check out their announcement; it sounds pretty neat.

Learning compositional programs with arguments and sampling Artificial Intelligence

One of the most challenging goals in designing intelligent systems is empowering them with the ability to synthesize programs from data. Namely, given specific requirements in the form of input/output pairs, the goal is to train a machine learning model to discover a program that satisfies those requirements. A recent class of methods exploits combinatorial search procedures and deep learning to learn compositional programs. However, these methods usually generate only toy programs, using a domain-specific language that lacks high-level features such as function arguments, which reduces their applicability in real-world settings. We extend a state-of-the-art model, AlphaNPI, by learning to generate functions that can accept arguments. This improvement brings us closer to real computer programs. Moreover, we investigate an approximate version of Monte Carlo Tree Search (A-MCTS) to speed up convergence. We showcase the potential of our approach by learning the Quicksort algorithm, showing how the ability to deal with arguments is crucial for learning and generalization.
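As a reference point for what "dealing with arguments" means here, the target behavior is a quicksort whose recursive calls receive explicit sub-range arguments rather than operating on implicit global state. A sketch of ours in plain Python (not the paper's domain-specific language):

```python
# In-place quicksort whose recursive calls take explicit (lo, hi) arguments --
# the argument-passing structure the abstract says learned programs need.
# Illustrative sketch, not the paper's DSL.

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    # Lomuto partition around the last element of the range.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    # Each recursive call names its sub-range explicitly via arguments.
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
    return a

if __name__ == "__main__":
    print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Without argument support, a synthesized program has no way to express the two recursive calls on different sub-ranges, which is why arguments matter for learning this algorithm.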