Program Space


Gradient-Based Program Repair: Fixing Bugs in Continuous Program Spaces

Silva, André, Thorén, Gustav, Monperrus, Martin

arXiv.org Artificial Intelligence

Automatic program repair seeks to generate correct code from buggy programs, with most approaches searching for the correct program in a discrete, symbolic space of source code tokens. This symbolic search is fundamentally limited by its inability to directly reason about program behavior. We introduce Gradient-Based Program Repair (GBPR), a new paradigm that reframes program repair as continuous optimization in a differentiable numerical program space. Our core insight is to compile symbolic programs into differentiable numerical representations, enabling search in the numerical program space guided directly by program behavior. To evaluate GBPR, we present RaspBugs, a new benchmark of 1,466 buggy symbolic RASP programs and their respective numerical representations. Our experiments demonstrate that GBPR can effectively repair buggy symbolic programs by gradient-based optimization in the numerical program space, with convincing repair trajectories. To our knowledge, we are the first to cast program repair as continuous optimization in a numerical program space. Our work establishes a new direction for program repair research, bridging two rich worlds: continuous optimization and program behavior.
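The core idea can be illustrated with a deliberately tiny sketch (ours, not the paper's RASP compilation): treat a program's parameters as continuous values and repair it by gradient descent against its observed input-output behavior. The "program" `f(x) = w*x + b` and its test suite below are hypothetical.

```python
# Toy illustration of repair as continuous optimization (not the paper's
# method): a "program" f(x) = w*x + b with buggy parameters is repaired by
# gradient descent on its behavioral error over a test suite.

def run_program(w, b, x):
    return w * x + b

# Specification: the correct program is f(x) = 2x + 1.
tests = [(x, 2 * x + 1) for x in range(-5, 6)]

# Buggy starting point in the continuous program space.
w, b = -1.0, 3.0
lr = 0.01

for _ in range(2000):
    # Analytic gradients of the mean squared behavioral error.
    gw = sum(2 * (run_program(w, b, x) - y) * x for x, y in tests) / len(tests)
    gb = sum(2 * (run_program(w, b, x) - y) for x, y in tests) / len(tests)
    w -= lr * gw
    b -= lr * gb

# The parameters have converged near the correct behavior (w ≈ 2, b ≈ 1).
```

The repair trajectory here is exactly the gradient-descent path from the buggy parameters to behaviorally correct ones; GBPR's contribution is making real symbolic programs amenable to this kind of search.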




Modelling Program Spaces in Program Synthesis with Constraints

Hinnerichs, Tilman, Swinkels, Bart, de Jong, Jaap, Reid, Reuben Gardos, Magirescu, Tudor, Yorke-Smith, Neil, Dumancic, Sebastijan

arXiv.org Artificial Intelligence

A core challenge in program synthesis is taming the large space of possible programs. Since program synthesis is essentially a combinatorial search, the community has sought to leverage powerful combinatorial constraint solvers. There, constraints are used to express program semantics, but not as a potent tool for removing unwanted programs. Recent inductive logic programming approaches introduce constraints on the syntax of the program to be synthesized. These syntactic constraints can be checked and propagated without executing the program, and thus work for arbitrary operators. In this work, we leverage syntactic constraints to model program spaces, defining not just which solutions are feasible, but also which are likely useful. To demonstrate this idea, we introduce BART, a solver that efficiently propagates and solves these constraints. We evaluate BART on program space enumeration tasks, finding that the constraints eliminate up to 99 percent of the program space, and that modeling program spaces significantly reduces enumeration time.
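A minimal sketch of the underlying idea (ours, not the BART solver): a purely syntactic constraint can prune an enumerated program space without executing any program. The grammar and constraint below are invented for illustration.

```python
from itertools import product

# Toy illustration (not BART): syntactic constraints prune an enumerated
# program space before any program is executed.

VARS = ["x", "y", "z"]
OPS = ["+", "-", "*"]

def enumerate_programs():
    # All depth-1 binary expressions over three variables.
    for a, op, b in product(VARS, OPS, VARS):
        yield f"({a} {op} {b})"

def violates_constraint(prog):
    # Hypothetical syntactic constraint: forbid trivially redundant
    # programs such as (x - x), which always evaluates to 0 -- detectable
    # from syntax alone, with no execution.
    a, op, b = prog[1:-1].split()
    return a == b and op == "-"

space = list(enumerate_programs())
pruned = [p for p in space if not violates_constraint(p)]
# 27 candidate programs, 3 removed purely by syntax.
```

Real systems propagate such constraints during search rather than filtering after enumeration, which is where the reported speedups come from.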



Composition Machines: Programming Self-Organising Software Models for the Emergence of Sequential Program Spaces

Arellanes, Damian

arXiv.org Artificial Intelligence

We are entering a new era in which software systems are becoming increasingly large and complex, to the point where composing them by manual means is infeasible. To address this challenge, self-organising software models represent a promising direction, since they allow the (bottom-up) emergence of complex computational structures from simple rules. In this paper, we propose an abstract machine, called the composition machine, which allows the definition and execution of such models. Unlike typical abstract machines, our proposal does not compute individual programs but instead enables the emergence of multiple programs at once. We present the machine's semantics and provide examples to demonstrate its operation with well-known rules from the realm of Boolean logic and elementary cellular automata.
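As background for the kind of rules the abstract mentions, here is a standard elementary cellular automaton update (this is the well-known ECA definition, not the composition machine itself): the rule number encodes an 8-entry lookup table over each cell's three-cell neighborhood.

```python
def eca_step(cells, rule=110):
    # One synchronous update of an elementary cellular automaton with
    # periodic boundaries. Bit i of `rule` gives the next state for the
    # neighborhood whose bits (left, center, right) encode the value i.
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]
    return [
        table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

# A single live cell evolving under Rule 110.
state = [0, 0, 0, 1, 0, 0, 0]
state = eca_step(state)  # [0, 0, 1, 1, 0, 0, 0]
```

Simple local rules like this are exactly the kind from which the paper's machine lets global computational structure emerge.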


Learning to Infer Graphics Programs from Hand-Drawn Images

Ellis, Kevin, Ritchie, Daniel, Solar-Lezama, Armando, Tenenbaum, Josh

Neural Information Processing Systems

We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of LaTeX. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitives are a specification (spec) of what the graphics program needs to draw. We learn a model that uses program synthesis techniques to recover a graphics program from that spec. These programs have constructs like variable bindings, iterative loops, or simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network and extrapolate drawings.
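The spec-to-program step can be caricatured in a few lines (ours, far simpler than the paper's synthesizer): given primitive coordinates proposed by the perception stage, recover an iterative-loop program that regenerates them. The representation `(start, step, count)` is an assumption for illustration.

```python
# Toy sketch (not the paper's synthesizer): recover a loop program from a
# "spec" of drawing-primitive positions by fitting an arithmetic progression.

def infer_loop(xs):
    # If the primitive x-coordinates form an arithmetic progression, return
    # (start, step, count), i.e. the program
    #   for i in range(count): draw(start + i * step)
    # Otherwise report failure.
    if len(xs) < 2:
        return None
    step = xs[1] - xs[0]
    if all(xs[i + 1] - xs[i] == step for i in range(len(xs) - 1)):
        return (xs[0], step, len(xs))
    return None

program = infer_loop([10, 30, 50, 70])  # (10, 20, 4)
```

A recovered loop like this is what lets the system extrapolate a drawing (run the loop longer) or correct a primitive the network mis-detected (snap it back onto the progression).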



Software 2.0 – Andrej Karpathy – Medium

#artificialintelligence

I sometimes see people refer to neural networks as just "another tool in your machine learning toolbox". They have some pros and cons, they work here or there, and sometimes you can use them to win Kaggle competitions. Unfortunately, this interpretation completely misses the forest for the trees. Neural networks are not just another classifier; they represent the beginning of a fundamental shift in how we write software. The "classical stack" of Software 1.0 is what we're all familiar with -- it is written in languages such as Python, C, etc. It consists of explicit instructions to the computer written by a programmer.


Sampling for Bayesian Program Learning

Ellis, Kevin, Solar-Lezama, Armando, Tenenbaum, Josh

Neural Information Processing Systems

Towards learning programs from data, we introduce the problem of sampling programs from posterior distributions conditioned on that data. Within this setting, we propose an algorithm that uses a symbolic solver to efficiently sample programs. The proposal combines constraint-based program synthesis with sampling via random parity constraints. We give theoretical guarantees on how well the samples approximate the true posterior, and present empirical results showing the algorithm is efficient in practice, evaluating our approach on 22 program learning problems in the domains of text editing and computer-aided programming.
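The parity-constraint idea can be sketched in miniature (ours, not the paper's solver pipeline; the 10-bit program encoding is invented): each random XOR constraint halves the hypothesis space in expectation, so a handful of them carve it into a small cell from which a near-uniform sample can be drawn.

```python
import random

# Toy sketch of sampling via random parity (XOR) constraints.

def satisfies_parity(bits, subset, parity):
    # A parity constraint fixes the XOR of a random subset of bits.
    return sum(bits[i] for i in subset) % 2 == parity

random.seed(0)
n = 10  # hypothetical encoding: each candidate program is a 10-bit string
space = [tuple((v >> i) & 1 for i in range(n)) for v in range(2 ** n)]

# Six random parity constraints shrink ~1024 candidates to ~16 in
# expectation; retry in the unlikely event every candidate is ruled out.
cell = []
while not cell:
    cell = space
    for _ in range(6):
        subset = [i for i in range(n) if random.random() < 0.5] or [0]
        parity = random.randrange(2)
        cell = [b for b in cell if satisfies_parity(b, subset, parity)]

sample = random.choice(cell)  # approximately uniform over the full space
```

In the paper's setting the filtering step is replaced by a symbolic solver that finds members of the constrained cell directly, so the full space is never enumerated.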