Neural Program Synthesis By Self-Learning
Xu, Yifan, Dai, Lu, Singh, Udaikaran, Zhang, Kening, Tu, Zhuowen
Abstract: Neural inductive program synthesis is the task of generating instructions that produce desired outputs from given inputs. In this paper, we focus on generating a chunk of assembly code that can be executed to match a state change inside the CPU and RAM. We develop a neural program synthesis algorithm, AutoAssemblet, trained via self-learning reinforcement learning to explore the large code space efficiently. Policy and value networks are learned to reduce the breadth and depth of the Monte Carlo Tree Search, resulting in better synthesis performance. We also propose an effective multi-entropy policy sampling technique to alleviate online update correlations. We apply AutoAssemblet to basic programming tasks and show significantly higher success rates compared to several competing baselines.

Much progress has been made in the field with the development of methods along the vein of neural program synthesis (Parisotto et al., 2016; Balog et al., 2017; Bunel et al., 2018; Hayati et al., 2018; Desai et al., 2016; Yin & Neubig, 2017; Kant, 2018). Neural program synthesis models build on top of neural network architectures to synthesize human-readable programs that match desired executions.
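To illustrate how a learned policy can reduce search breadth while a learned value estimate bounds search depth, the following is a minimal Python sketch of PUCT-style node selection in Monte Carlo Tree Search, in the spirit of AlphaZero-like self-play. It is not the AutoAssemblet implementation; the names (Node, select_child, c_puct) and the specific scoring rule are assumptions made only for illustration.

import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float                     # policy-network probability assigned to this action
    value_sum: float = 0.0           # accumulated value-network estimates from simulations
    visit_count: int = 0
    children: dict = field(default_factory=dict)  # action -> child Node

    def q_value(self) -> float:
        # Mean value estimate; 0 for unvisited nodes.
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 1.5):
    """Pick the child maximizing Q + U, where U favors high-prior, rarely visited actions."""
    total_visits = sum(child.visit_count for child in node.children.values())
    best_action, best_child, best_score = None, None, -float("inf")
    for action, child in node.children.items():
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visit_count)
        score = child.q_value() + u
        if score > best_score:
            best_action, best_child, best_score = action, child, score
    return best_action, best_child

In such a scheme, the policy prior concentrates exploration on a small set of plausible next instructions (reducing breadth), while backing up value-network estimates in place of full rollouts shortens each simulation (reducing depth).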
arXiv.org Artificial Intelligence
Oct-13-2019