
Collaborating Authors

 Forrest, Stephanie


Automated Program Repair: Emerging trends pose and expose problems for benchmarks

arXiv.org Artificial Intelligence

A variety of techniques have been developed, e.g., evolutionary computation [60, 133], methods incorporating templated mutation operators [71], semantic inference techniques [79] targeting single-cause defects, and methods designed to handle multi-hunk bugs [100]. Increasingly, researchers have applied ML-based methods to APR tasks (Section 3), but data leakage is a concern (Section 4). Each new technique, or modification of an existing technique, tends to be developed by an independent research team, without reference to a common, formal definition of APR. Benchmarks on their own are not enough to standardize evaluation (Section 5). As motivating examples, consider the following inconsistencies in the published literature: Correctness. VFix [123] identifies correct patches that pass all test cases and are semantically or syntactically equivalent to the original bug-fix, while VRepair [26] reports repair accuracy in terms of semantic equivalence to the original bug-fix, and SynFix [10] defines correctness simply as passing the test cases. Each of these is a reasonable definition, but collectively, their differences make it difficult to compare results.
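To make the inconsistency concrete, here is a minimal Python sketch (not drawn from VFix, VRepair, or SynFix) contrasting the three notions of correctness; the helpers run_tests and semantically_equivalent are hypothetical stand-ins for a test runner and an equivalence oracle.

```python
# Illustrative sketch only: three notions of patch "correctness" that the
# APR literature uses inconsistently. `run_tests` and
# `semantically_equivalent` are hypothetical helpers.

def passes_tests(patch, test_suite, run_tests):
    """Test-based criterion (SynFix-style): the patched program passes all tests."""
    return all(run_tests(patch, t) for t in test_suite)

def semantic_correct(patch, developer_fix, semantically_equivalent):
    """Semantic criterion (VRepair-style): equivalence to the original bug-fix."""
    return semantically_equivalent(patch, developer_fix)

def strict_correct(patch, developer_fix, test_suite,
                   run_tests, semantically_equivalent):
    """Combined criterion (VFix-style): passes all tests AND is syntactically
    or semantically equivalent to the original bug-fix."""
    return (passes_tests(patch, test_suite, run_tests)
            and (patch == developer_fix
                 or semantically_equivalent(patch, developer_fix)))
```

A patch counted as correct under the test-based criterion alone may be rejected under the stricter combined criterion, which is exactly why reported repair rates are hard to compare across papers.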


GEVO-ML: Optimizing Machine Learning Code with Evolutionary Computation

arXiv.org Artificial Intelligence

Parallel accelerators, such as GPUs, are key enablers for large-scale Machine Learning (ML) applications. However, ML model developers often lack detailed knowledge of the underlying system architectures, while system programmers usually do not have a high-level understanding of the ML model that runs on the specific system. To mitigate this gap between two relevant aspects of domain knowledge, this paper proposes GEVO-ML, a tool for automatically discovering optimization opportunities and tuning the performance of ML kernels, where the model and training/prediction processes are uniformly represented in a single intermediate language, the Multiple-Layer Intermediate Representation (MLIR). GEVO-ML uses multi-objective evolutionary search to find edits (mutations) to MLIR code that ultimately runs on GPUs, improving performance on desired criteria while retaining required functionality. We demonstrate GEVO-ML on two different ML workloads for both model training and prediction. GEVO-ML finds significant Pareto improvements for these models, achieving a 90.43% performance improvement when model accuracy is relaxed by 2%, from 91.2% to 89.3%. For the training workloads, GEVO-ML finds a 4.88% improvement in model accuracy, from 91% to 96%, without sacrificing training or testing speed. Our analysis of key GEVO-ML mutations reveals diverse code modifications that, while they might be foreign to human developers, achieve effects similar to how human developers improve model design, for example, by changing learning rates or pruning non-essential layer parameters.
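The following is a minimal Python schematic of the kind of multi-objective evolutionary loop the abstract describes; it is not GEVO-ML itself, which mutates MLIR and measures GPU kernels directly. The mutate, runtime, and accuracy callables are assumed placeholders.

```python
import random

# Schematic multi-objective evolutionary search over code edits.
# `mutate`, `runtime`, and `accuracy` are stand-in callables, not GEVO-ML APIs.

def dominates(a, b):
    """a Pareto-dominates b: no worse in both objectives, strictly better in one
    (lower runtime, higher accuracy)."""
    return (a["runtime"] <= b["runtime"] and a["accuracy"] >= b["accuracy"]
            and (a["runtime"] < b["runtime"] or a["accuracy"] > b["accuracy"]))

def pareto_front(population):
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

def evolve(seed_kernel, mutate, runtime, accuracy,
           generations=100, pop_size=32):
    population = [{"code": seed_kernel,
                   "runtime": runtime(seed_kernel),
                   "accuracy": accuracy(seed_kernel)}]
    for _ in range(generations):
        parents = pareto_front(population)
        children = []
        for _ in range(pop_size):
            child = mutate(random.choice(parents)["code"])   # apply one edit
            children.append({"code": child,
                             "runtime": runtime(child),
                             "accuracy": accuracy(child)})
        population = pareto_front(parents + children)        # keep non-dominated set
    return population
```

The trade-offs reported in the abstract (e.g., large speedups when a small accuracy loss is tolerated) correspond to different points on the final Pareto front returned by such a loop.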


When will a Genetic Algorithm Outperform Hill Climbing?

Neural Information Processing Systems

Holland, Dept. of Psychology, University of Michigan, Ann Arbor, MI 48109; Stephanie Forrest, Dept. of Computer Science, University of New Mexico, Albuquerque, NM 87131. Abstract: We analyze a simple hill-climbing algorithm (RMHC) that was previously shown to outperform a genetic algorithm (GA) on a simple "Royal Road" function. We then analyze an "idealized" genetic algorithm (IGA) that is significantly faster than RMHC and that gives a lower bound for GA speed. We identify the features of the IGA that give rise to this speedup, and discuss how these features can be incorporated into a real GA. 1 INTRODUCTION Our goal is to understand the class of problems for which genetic algorithms (GAs) are most suited, and in particular, for which they will outperform other search algorithms. Several studies have empirically compared GAs with other search and optimization methods such as simple hill-climbing (e.g., Davis, 1991), simulated annealing (e.g., Ingber & Rosen, 1992), linear, nonlinear, and integer programming techniques, and other traditional optimization techniques (e.g., De Jong, 1975). However, such comparisons typically pit one version of the GA against a second algorithm on a single problem or set of problems, often using performance criteria that may not be appropriate.
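As a concrete illustration of the RMHC algorithm analyzed above, here is a short Python sketch on a Royal Road-style function (64 bits, eight all-ones blocks of eight); the parameter choices are illustrative and not the paper's exact experimental setup.

```python
import random

# Sketch of random-mutation hill climbing (RMHC) on a Royal Road-style function.
# Fitness = block size times the number of blocks that are entirely ones.

def royal_road(bits, block=8):
    return sum(block for i in range(0, len(bits), block)
               if all(bits[i:i + block]))

def rmhc(length=64, max_evals=100_000):
    current = [random.randint(0, 1) for _ in range(length)]
    best = royal_road(current)
    for evals in range(max_evals):
        candidate = current[:]
        candidate[random.randrange(length)] ^= 1   # flip one randomly chosen bit
        f = royal_road(candidate)
        if f >= best:                              # accept ties as well as improvements
            current, best = candidate, f
        if best == length:                         # every block complete: optimum found
            return evals + 1
    return max_evals

print(rmhc())  # evaluations needed to reach the optimum (or the cap)
```

Accepting moves of equal fitness is what lets RMHC drift across the plateaus between completed blocks, which is central to its strong performance on this class of functions.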

