Collaborating Authors

 Rajamani, Sriram


Frustrated with Code Quality Issues? LLMs can Help!

arXiv.org Artificial Intelligence

As software projects progress, code quality assumes paramount importance because it affects the reliability, maintainability and security of software. For this reason, static analysis tools are used in developer workflows to flag code quality issues. However, developers need to spend extra effort revising their code to resolve the tool findings. In this work, we investigate the use of (instruction-following) large language models (LLMs) to assist developers in revising code to resolve code quality issues. We present a tool, CORE (short for COde REvisions), architected as a pair of LLMs: a proposer and a ranker. Providers of static analysis tools recommend ways to mitigate the tool warnings, and developers follow them to revise their code. The proposer LLM of CORE takes the same set of recommendations and applies them to generate candidate code revisions. The candidates that pass the static quality checks are retained. However, the LLM may introduce subtle, unintended functionality changes that may go undetected by the static analysis. The ranker LLM evaluates the changes made by the proposer using a rubric that closely follows the acceptance criteria that a developer would enforce. CORE uses the scores assigned by the ranker LLM to rank the candidate revisions before presenting them to the developer. CORE could revise 59.2% of Python files (across 52 quality checks) so that they pass scrutiny by both a tool and a human reviewer, and the ranker LLM reduced false positives by 25.8% in these cases. CORE produced revisions that passed the static analysis tool in 76.8% of Java files (across 10 quality checks), comparable to the 78.3% achieved by a specialized program repair tool, with significantly less engineering effort.
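
To make the proposer/ranker split concrete, here is a minimal sketch of such a two-stage pipeline. Everything in it is an assumption for illustration: `llm_complete` and `run_static_analysis` are hypothetical wrappers (not CORE's actual APIs), and the prompts and rubric are placeholders for the recommendation-driven prompts and reviewer-style rubric described above.

```python
# Hypothetical sketch of a proposer/ranker pipeline in the spirit of CORE.
# `llm_complete` and `run_static_analysis` are assumed wrappers, not real APIs.

from dataclasses import dataclass

@dataclass
class Candidate:
    code: str
    score: float = 0.0

def propose_revisions(source: str, warning: str, recommendation: str,
                      llm_complete, n_candidates: int = 5) -> list[Candidate]:
    """Proposer LLM: apply the tool vendor's recommendation to draft fixes."""
    prompt = (
        f"Static analysis warning: {warning}\n"
        f"Recommended mitigation: {recommendation}\n"
        f"Revise the following code accordingly, preserving behavior:\n{source}"
    )
    return [Candidate(code=llm_complete(prompt)) for _ in range(n_candidates)]

def rank_revisions(source: str, candidates: list[Candidate],
                   llm_complete, run_static_analysis) -> list[Candidate]:
    """Keep candidates that pass the static check, then score them with a
    ranker LLM using a reviewer-style rubric (does the diff preserve intent?)."""
    passing = [c for c in candidates if run_static_analysis(c.code) == "clean"]
    for c in passing:
        rubric = (
            "Score 0-10 how likely this revision preserves the original "
            f"functionality while fixing the warning.\nORIGINAL:\n{source}\n"
            f"REVISED:\n{c.code}\nAnswer with a single number."
        )
        c.score = float(llm_complete(rubric))
    return sorted(passing, key=lambda c: c.score, reverse=True)
```

The design point to note is that the ranker only sees candidates that already satisfy the static checker, so its job reduces to judging whether the revision preserves the original intent.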


Programming by Rewards

arXiv.org Artificial Intelligence

We formalize and study "programming by rewards" (PBR), a new approach for specifying and synthesizing subroutines that optimize a quantitative metric such as performance, resource utilization, or correctness over a benchmark. A PBR specification consists of (1) input features $x$, and (2) a reward function $r$, modeled as a black-box component (which we can only run), that assigns a reward to each execution. The goal of the synthesizer is to synthesize a "decision function" $f$ which transforms the features into a decision value for the black-box component so as to maximize the expected reward $E[r \circ f(x)]$ of executing decisions $f(x)$ over various values of $x$. We consider a space of decision functions in a DSL of loop-free if-then-else programs, which branch on linear functions of the input features in a tree structure and compute a linear function of the inputs in the leaves of the tree. We find that this DSL captures decision functions that are manually written in practice by programmers. Our technical contribution is the use of continuous-optimization techniques to synthesize such decision functions as if-then-else programs. We also show that the framework is theoretically founded: in cases where the rewards satisfy nice properties, the synthesized code is optimal in a precise sense. We have leveraged PBR to synthesize non-trivial decision functions related to search and ranking heuristics in the PROSE codebase (an industrial-strength program synthesis framework) and achieve results competitive with manually written procedures that received multiple man-years of tuning. We present an empirical evaluation against other baseline techniques over real-world case studies (including PROSE) as well as on simple synthetic benchmarks.
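
The setup can be made concrete with a small sketch: a one-level if-then-else decision function with a linear branch condition and linear leaves, tuned against a black-box reward by estimating $E[r \circ f(x)]$ on a benchmark of inputs. The optimizer below is a naive perturbation search used only as a stand-in for the continuous-optimization techniques the paper actually develops; all names are illustrative.

```python
# Illustrative-only sketch of the PBR setup: a tiny if-then-else decision
# function with a linear branch condition and linear leaves, tuned against a
# black-box reward.  The optimizer is a naive random perturbation search, a
# stand-in for the continuous-optimization techniques used in the paper.

import numpy as np

def decision_function(params: np.ndarray, x: np.ndarray) -> float:
    """One-level tree: branch on a linear predicate, compute a linear leaf."""
    d = x.shape[0]
    w, b = params[:d], params[d]                  # branch condition  w.x + b > 0
    a1, c1 = params[d+1:2*d+1], params[2*d+1]     # 'then' leaf  a1.x + c1
    a2, c2 = params[2*d+2:3*d+2], params[3*d+2]   # 'else' leaf  a2.x + c2
    return float(a1 @ x + c1) if w @ x + b > 0 else float(a2 @ x + c2)

def expected_reward(params, reward_fn, inputs):
    """Estimate E[r(f(x))] over a benchmark of input feature vectors."""
    return np.mean([reward_fn(x, decision_function(params, x)) for x in inputs])

def synthesize(reward_fn, inputs, dim, iters=2000, seed=0):
    """Hill-climb over the 3*dim + 3 real parameters of the decision function."""
    rng = np.random.default_rng(seed)
    best = rng.normal(size=3 * dim + 3)
    best_val = expected_reward(best, reward_fn, inputs)
    for _ in range(iters):
        cand = best + 0.1 * rng.normal(size=best.shape)
        val = expected_reward(cand, reward_fn, inputs)
        if val > best_val:
            best, best_val = cand, val
    return best, best_val
```

For instance, `reward_fn(x, decision)` could run a ranking heuristic configured with the synthesized decision value and return the negative observed latency, keeping the reward strictly black-box.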


Debugging Machine Learning Tasks

arXiv.org Machine Learning

Unlike traditional programs (such as operating systems or word processors), which have large amounts of code, machine learning tasks use programs with relatively small amounts of code (written in machine learning libraries) but voluminous amounts of data. Just as developers of traditional programs debug errors in their code, developers of machine learning tasks debug and fix errors in their data. However, algorithms and tools for debugging and fixing errors in data are less common than their counterparts for detecting and fixing errors in code. In this paper, we consider classification tasks where errors in training data lead to misclassifications of test points, and propose an automated method to find the root causes of such misclassifications. Our root cause analysis is based on Pearl's theory of causation and uses Pearl's PS (Probability of Sufficiency) as a scoring metric. Our implementation, Psi, encodes the computation of PS as a probabilistic program, and uses recent work on probabilistic programs and transformations on probabilistic programs (along with gray-box models of machine learning algorithms) to efficiently compute PS. Psi is able to identify root causes of data errors in interesting data sets.
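
As a way to make the PS metric concrete, the sketch below estimates, by brute-force Monte Carlo with retraining, how often correcting one suspected training label is sufficient to flip a misclassified test point to its true class. This is only an illustration of the quantity being scored: Psi itself encodes PS as a probabilistic program and relies on program transformations and gray-box models to avoid exactly this kind of expensive retraining. The binary-label setting, the `LogisticRegression` learner, and the label-noise model for sampling worlds are all assumptions of the sketch.

```python
# Brute-force illustration of Probability of Sufficiency (PS) for a suspected
# label error: among sampled worlds where the suspect label stays wrong and the
# test point is misclassified, how often does fixing that one label flip the
# prediction to the correct class?  Assumes binary (0/1) labels.

import numpy as np
from sklearn.linear_model import LogisticRegression

def ps_of_label_fix(X_train, y_train, suspect_idx, corrected_label,
                    x_test, y_test_true, n_worlds=200, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = trials = 0
    for _ in range(n_worlds):
        # Sample a "world": small random flips of the other labels model
        # uncertainty about the rest of the training data.
        y_world = y_train.copy()
        flips = rng.random(len(y_world)) < noise
        flips[suspect_idx] = False
        y_world[flips] = 1 - y_world[flips]

        base = LogisticRegression().fit(X_train, y_world)
        if base.predict([x_test])[0] == y_test_true:
            continue  # PS conditions on the bad outcome actually occurring
        trials += 1

        # Intervention: correct only the suspect label, retrain, re-predict.
        y_fixed = y_world.copy()
        y_fixed[suspect_idx] = corrected_label
        fixed = LogisticRegression().fit(X_train, y_fixed)
        hits += int(fixed.predict([x_test])[0] == y_test_true)
    return hits / trials if trials else float("nan")
```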


R2: An Efficient MCMC Sampler for Probabilistic Programs

AAAI Conferences

We present a new Markov Chain Monte Carlo (MCMC) sampling algorithm for probabilistic programs. Our approach and tool, called R2, has the unique feature of employing program analysis to improve the efficiency of MCMC sampling. Given an input program P, R2 propagates observations in P backwards to obtain a semantically equivalent program P' in which every probabilistic assignment is immediately followed by an observe statement. Inference is performed by a suitably modified version of the Metropolis-Hastings algorithm that exploits the structure of the program P'. This has the overall effect of preventing rejections due to program executions that fail to satisfy observations in P. We formalize the semantics of probabilistic programs and rigorously prove the correctness of R2. We also empirically demonstrate the effectiveness of R2; in particular, we show that R2 is able to produce results of similar quality to the CHURCH and STAN probabilistic programming tools with much shorter execution time.
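
A toy example helps illustrate the transformation. In the original program below, the observation is checked only at the end, so many executions are rejected; in the transformed program the observation has been propagated back to the assignment it constrains, so no execution is rejected. The estimator attached to the transformed program is plain likelihood weighting, used only to keep the sketch short and correct; R2 instead runs a suitably modified Metropolis-Hastings sampler over the transformed program.

```python
# Toy illustration of the R2 transformation: hoist the observation next to the
# probabilistic assignment it constrains, so rejected executions disappear.
# The weighting below is ordinary likelihood weighting, not R2's MH sampler.

import random
from statistics import NormalDist

STD_NORMAL = NormalDist()

def original_program():
    """observe(y > 1) only at the end: executions are rejected late."""
    x = random.gauss(0, 1)
    y = random.gauss(x, 1)
    if not y > 1:                     # observe(y > 1)
        return None                   # rejected execution
    return x, 1.0

def transformed_program():
    """Observation propagated back to the assignment of y: y is drawn from
    N(x, 1) truncated to (1, inf), and the execution carries the weight
    P(y > 1 | x) so the estimate stays unbiased.  No rejections occur."""
    x = random.gauss(0, 1)
    p_obs = 1.0 - STD_NORMAL.cdf(1 - x)      # P(y > 1 | x)
    u = random.uniform(1.0 - p_obs, 1.0)     # inverse-CDF sample from the tail
    y = x + STD_NORMAL.inv_cdf(u)
    return x, p_obs

def weighted_mean_x(program, n=100_000):
    total = weight = 0.0
    for _ in range(n):
        draw = program()
        if draw is None:
            continue
        x, w = draw
        total += x * w
        weight += w
    return total / weight

# Both estimators target E[x | y > 1]; the transformed program reaches it
# without discarding any executions.
print(weighted_mean_x(original_program))
print(weighted_mean_x(transformed_program))
```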


Automatically Generating Algebra Problems

AAAI Conferences

We propose computer-assisted techniques for helping with pedagogy in Algebra. In particular, given a proof problem p (of the form “Left-hand-side-term = Right-hand-side-term”), we show how to automatically generate problems that are similar to p. We believe that such a tool can be used by teachers in setting examinations, where they need to test students on problems similar to what they taught in class, and by students in generating practice problems tailored to their specific needs. Our first insight is that we can generalize p syntactically to a query Q that implicitly represents a set of problems [[Q]] (which includes p). Our second insight is that we can explore the space of problems [[Q]] automatically, use classical results from polynomial identity testing to generate only those problems in [[Q]] that are correct, and then use pruning techniques to generate only unique and interesting problems. Our third insight is that with a small amount of manual tuning of the query Q, the user can interactively guide the computer to generate problems of interest to her. We present the technical details of the above-mentioned steps, and also describe a tool where these steps have been implemented. We also present an empirical evaluation on a wide variety of problems from various sub-fields of algebra including polynomials, trigonometry, calculus, determinants, etc. Our tool is able to generate a rich corpus of similar problems from each given problem; while some of these similar problems were already present in the textbook, several were new!
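
The correctness-filtering step can be illustrated with a short sketch: starting from the identity (a - b)^2 = a^2 - 2ab + b^2, we enumerate nearby candidate problems by varying integer coefficients (a crude stand-in for exploring the space [[Q]]) and keep only those that pass randomized polynomial identity testing, the classical technique referenced above. The query generalization, uniqueness pruning, and interestingness pruning are omitted; all names and templates are illustrative.

```python
# Sketch of the identity-checking step: vary integer coefficients around the
# seed identity (a - b)**2 == a**2 - 2*a*b + b**2 and keep only the candidates
# that pass randomized polynomial identity testing (Schwartz-Zippel style).

import random
from itertools import product

def is_identity(lhs, rhs, n_vars=2, trials=20, prime=2_147_483_647):
    """An incorrect polynomial identity is very unlikely to agree with a
    correct one at many random points of a large prime field."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(n_vars)]
        if lhs(*point) % prime != rhs(*point) % prime:
            return False
    return True

def generate_similar_problems():
    problems = []
    # LHS templates (a + s*b)**2; RHS templates a**2 + k1*a*b + k2*b**2.
    for s, k1, k2 in product([-2, -1, 1, 2], range(-4, 5), range(-4, 5)):
        lhs = lambda a, b, s=s: (a + s * b) ** 2
        rhs = lambda a, b, k1=k1, k2=k2: a * a + k1 * a * b + k2 * b * b
        if is_identity(lhs, rhs):
            problems.append(f"(a + ({s})*b)^2 = a^2 + ({k1})*a*b + ({k2})*b^2")
    return problems

for p in generate_similar_problems():
    print(p)
```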