

Performance Engineering


SWE-fficiency: Can Language Models Optimize Real-World Repositories on Real Workloads?

Ma, Jeffrey Jian, Hashemi, Milad, Yazdanbakhsh, Amir, Swersky, Kevin, Press, Ofir, Li, Enhui, Reddi, Vijay Janapa, Ranganathan, Parthasarathy

arXiv.org Artificial Intelligence

Optimizing the performance of large-scale software repositories demands expertise in code reasoning and software engineering (SWE) to reduce runtime while preserving program correctness. However, most benchmarks emphasize what to fix rather than how to fix code. We introduce SWE-fficiency, a benchmark for evaluating repository-level performance optimization on real workloads. Our suite contains 498 tasks across nine widely used data-science, machine-learning, and HPC repositories (e.g., numpy, pandas, scipy): given a complete codebase and a slow workload, an agent must investigate code semantics, localize bottlenecks and relevant tests, and produce a patch that matches or exceeds expert speedup while passing the same unit tests. To enable this how-to-fix evaluation, our automated pipeline scrapes GitHub pull requests for performance-improving edits, combining keyword filtering, static analysis, coverage tooling, and execution validation to both confirm expert speedup baselines and identify relevant repository unit tests. Empirical evaluation of state-of-the-art agents reveals significant underperformance. On average, agents achieve less than 0.15x the expert speedup: agents struggle in localizing optimization opportunities, reasoning about execution across functions, and maintaining correctness in proposed edits. We release the benchmark and accompanying data pipeline to facilitate research on automated performance engineering and long-horizon software reasoning.
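The abstract scores agents by their speedup relative to the expert patch, zeroing out patches that break the repository's unit tests. A minimal sketch of that scoring logic, assuming wall-clock workload timings; the function names (`speedup`, `speedup_ratio`) are illustrative, not the benchmark's actual API:

```python
def speedup(baseline_s: float, patched_s: float) -> float:
    """Workload speedup from a patch: values > 1.0 mean the patch made it faster."""
    return baseline_s / patched_s

def speedup_ratio(agent_patched_s: float, expert_patched_s: float,
                  baseline_s: float, tests_pass: bool) -> float:
    """Agent speedup normalized by the expert's speedup on the same workload.

    A patch that fails the repository's unit tests scores 0, mirroring the
    benchmark's requirement that correctness be preserved.
    """
    if not tests_pass:
        return 0.0
    agent_speedup = speedup(baseline_s, agent_patched_s)
    expert_speedup = speedup(baseline_s, expert_patched_s)
    return agent_speedup / expert_speedup

# Example: baseline workload takes 10 s; the expert patch brings it to 2 s (5x),
# the agent patch only to 8 s (1.25x) -> ratio 1.25 / 5 = 0.25.
ratio = speedup_ratio(agent_patched_s=8.0, expert_patched_s=2.0,
                      baseline_s=10.0, tests_pass=True)
print(round(ratio, 2))  # 0.25
```

Under this scoring, the paper's "less than 0.15x the expert speedup" means agents typically recover under 15% of the expert's improvement.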


4 Software Testing Trends to Look Forward to

#artificialintelligence

The upcoming trends in software testing will enable companies to enhance customer and business value. Fremont, CA: Software testing is transforming; it is constantly evolving with the shifting technology landscape, from AI to ML. The software testing industry is also expanding quickly. Because software testing is crucial, every company will need to be on top of its game entering the next decade.


Machine Learning: How it Improves Performance Engineering

#artificialintelligence

Enterprise software, like other kinds, remains a complicated endeavor, necessitating modern means to gauge, analyze, and adapt its performance. One of the most popular technologies in the performance engineering market right now is machine learning, which has demonstrated an unparalleled ability to help foresee performance issues and fix them. Used in the right manner, it can also help performance engineering teams steer clear of issues entirely, because machine learning can interpret and analyze data in real time, delivering valuable insights about a system's performance.


Moore's Law: What Comes Next?

Communications of the ACM

Computer designers are becoming increasingly concerned about the ending of Moore's Law, and what it means for users if the industry can no longer count on the idea that the density of logic circuits will double every two years, as it has for close to half a century. It may mean radical changes to the way users think about software. Leading researchers in semiconductor design point out that, although logic density is butting up against physical limits, this does not necessarily spell the end of Moore's Law itself. Gordon Moore's speech at the 1975 International Electron Device Meeting (IEDM) predicted significant increases in chip size and improvements in circuit design as part of the scaling process, in addition to regular reductions in transistor size and interconnect spacing. During a September virtual meeting of the IEEE International Roadmap for Devices and Systems group, chairman and Intel director of technology strategy Paolo Gargini argued, "Though Gordon made this clear, people have concentrated only on dimensional scaling. That's the reason why people have doubts about the next technology nodes. It appears as though we are in a crisis, but we are not, because of the other two components."


Semiconductor Miniaturisation Is Running Out Of Steam. Time To Focus On Smarter Algorithms

#artificialintelligence

Transistors have brought a plethora of advances and growth in computer performance over the past few decades. These improvements come from decades of miniaturisation of computer components, shrinking, for instance, a room-sized computer down to a cellphone. For decades, programmers have been able to prioritise writing code quickly rather than writing it so that it runs quickly, since smaller, faster computer chips have always been able to pick up the slack. In 1975, Intel co-founder Gordon Moore predicted the regularity of this miniaturisation trend, now called Moore's law: the number of transistors on computer chips would double every 24 months. The researchers broke their recommendations down into three categories: software, algorithms, and hardware architecture.


Intern - ROCS - Performance Analytics with Machine Learning and AI - Palo Alto California - February-16-2019 (GyH7j)

#artificialintelligence

VMware Product Intern Opportunities: VMware recognizes that today's students are tomorrow's trailblazers, and we value the opportunity to benefit from your fresh perspective. If you thrive in an open, innovative, technology-driven culture, VMware could be the place for you! You will be exposed to a wide range of software platform technologies that are utilized by customers all over the world. Business Summary: VMware is a global leader in cloud infrastructure and business mobility. VMware accelerates customers' digital transformation journeys by enabling enterprises to master a software-defined approach to business and IT.