 autoperf



A Zero-Positive Learning Approach for Diagnosing Software Performance Regressions

Mejbah Alam, Justin Gottschlich, Nesime Tatbul, Javier S. Turek, Tim Mattson, Abdullah Muzahid

Neural Information Processing Systems

The field of machine programming (MP), the automation of the development of software, is making notable research advances. This is due, in part, to the emergence of a wide range of novel techniques in machine learning. In this paper, we apply MP to the automation of software performance regression testing. A performance regression is a software performance degradation caused by a code change. We demonstrate AutoPerf's generality and efficacy against 3 types of performance regressions across 10 real performance bugs in 7 benchmark and open-source programs.
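
The paper's central idea, zero-positive learning, is to train only on profiles of regression-free runs, so no labeled regressions ("positives") are needed at training time; a test run is flagged when an autoencoder trained on those clean profiles reconstructs it poorly. Below is a minimal sketch of that pattern, assuming each run is summarized as a vector of normalized performance measurements; the PyTorch model, layer sizes, and threshold rule here are illustrative assumptions, not AutoPerf's actual implementation.

```python
# Minimal sketch of zero-positive learning for performance regression
# detection. Assumptions (not from the paper): profile dimensionality,
# network sizes, and the threshold rule are illustrative only.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_counters: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_counters, 8), nn.ReLU(), nn.Linear(8, 4))
        self.decoder = nn.Sequential(
            nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, n_counters))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, clean_profiles, epochs=200, lr=1e-3):
    """Train only on regression-free ("negative") profiles -- zero positives."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(clean_profiles), clean_profiles)
        loss.backward()
        opt.step()
    return model

def flag_regressions(model, test_profiles, threshold):
    """Flag runs whose reconstruction error exceeds the calibrated threshold."""
    with torch.no_grad():
        err = ((model(test_profiles) - test_profiles) ** 2).mean(dim=1)
    return err > threshold
```

In practice the threshold would be calibrated on held-out clean runs, for example as a high percentile of their reconstruction errors, which directly trades false positives against false negatives.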




Reviews: A Zero-Positive Learning Approach for Diagnosing Software Performance Regressions

Neural Information Processing Systems

STRONG POINTS/CONTRIBUTIONS: 1) The false positive and false negative rates observed when using AutoPerf are impressively low.

NEGATIVE POINTS: 1) The paper lacks technical depth and novelty: autoencoders for anomaly detection are widely used, and the problem domain (detecting performance bugs) has been studied previously as well. Knowing what was changed in the code between P_i and P_{i+1} could be very, very helpful.

DETAILED COMMENTS: One comment is that I'm not sure it makes a lot of sense to train separate autoencoders for each function (or group of functions, if you are doing the k-means thing). Likely, there are certain characteristics of the distributions that are shared across all functions, and I worry that you are wasting a lot of compute power by relearning everything.
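
The reviewer's aside about "the k-means thing" refers to grouping functions with similar profile distributions so one autoencoder can serve a whole group instead of training a separate model per function. Below is a rough sketch of that grouping step, assuming scikit-learn and a per-function matrix of profile vectors; the summary features (per-counter mean and standard deviation) and the choice of k are illustrative assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch of the grouping the reviewer alludes to: cluster
# functions by a summary of their profile distributions, then train one
# autoencoder per cluster rather than one per function.
import numpy as np
from sklearn.cluster import KMeans

def cluster_functions(profiles_by_fn: dict, k: int = 4):
    """profiles_by_fn maps a function name to an (n_runs, n_counters) array."""
    names = sorted(profiles_by_fn)
    # Summarize each function's distribution by per-counter mean and std.
    feats = np.stack([
        np.concatenate([profiles_by_fn[n].mean(axis=0),
                        profiles_by_fn[n].std(axis=0)])
        for n in names
    ])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    clusters = {}
    for name, label in zip(names, labels):
        clusters.setdefault(int(label), []).append(name)
    return clusters  # then train one autoencoder on each cluster's pooled runs
```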


Intel previews AI advances in software testing, sequence models, and explainability

#artificialintelligence

This week marks the kickoff of Neural Information Processing Systems (NeurIPS), one of the largest AI and machine learning conferences globally. NeurIPS 2017 and NeurIPS 2018 received 3,240 and 4,854 research paper submissions, respectively, and this year's event -- which takes place from December 8 to December 14 in Vancouver -- is on track to handily break those records. Researchers from Intel will be in attendance, as will those from tech giants like Google, Facebook, Apple, Uber, Alibaba, Baidu, and countless others. For its part, the Santa Clara, California-based chipmaker said it intends to host three dozen conference, workshop, and spotlight sessions covering topics like deep equilibrium models, imitation learning, machine programming, and more. "Intel is making significant strides in advancing and scaling neural network technologies to handle increasingly complex and dynamic workloads -- from tackling challenges with memory to researching new adaptive learning techniques," wrote Dr. Rich Uhlig, senior fellow and managing director of Intel Labs, in a blog post.