DaCe AD: Unifying High-Performance Automatic Differentiation for Machine Learning and Scientific Computing
Afif Boudaoud, Alexandru Calotoiu, Marcin Copik, Torsten Hoefler
arXiv.org Artificial Intelligence
Automatic differentiation (AD) is a set of techniques that systematically applies the chain rule to compute the gradients of functions without requiring human intervention. Although the fundamentals of this technology were established decades ago, it is experiencing a renaissance as it plays a key role in efficiently computing gradients for backpropagation in machine learning algorithms. AD is also crucial for many applications in scientific computing domains, particularly emerging techniques that integrate machine learning models within scientific simulations and schemes. Existing AD frameworks have four main limitations: limited support of programming languages, requiring code modifications for AD compatibility, limited performance on scientific computing codes, and a naive store-all solution for forward-pass data required for gradient calculations. These limitations force domain scientists to manually compute the gradients for large problems. This work presents DaCe AD, a general, efficient automatic differentiation engine that requires no code modifications. DaCe AD uses a novel ILP-based algorithm to optimize the trade-off between storing and recomputing to achieve maximum performance within a given memory constraint. We showcase the generality of our method by applying it to NPBench, a suite of HPC benchmarks with diverse scientific computing patterns, where we outperform JAX, a Python framework with state-of-the-art general AD capabilities, by more than 92 times on average without requiring any code changes.

Automatic differentiation (AD) is a technique for calculating the derivatives of programs [1] that outperforms traditional methods like symbolic and numerical differentiation, particularly for complex algorithms and mathematical functions [2]. AD is critical for training neural networks, as it computes gradients required for backpropagation [3], [4].
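The chain-rule mechanics described above can be illustrated with a minimal reverse-mode AD sketch in Python. This is not DaCe AD's implementation; the `Var` class, `mul`, and `sin` helpers here are hypothetical, chosen only to show how local derivatives are stored during the forward pass and combined backwards, as in backpropagation.

```python
# Minimal reverse-mode AD sketch (illustrative only, not the DaCe AD engine).
# Each node records its parents and the local derivative of the operation;
# backward() propagates the chain rule from the output to the inputs.

import math

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self), then push contributions upstream.
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

def mul(a, b):
    # d(ab)/da = b, d(ab)/db = a
    return Var(a.value * b.value, [(a, b.value), (b, a.value)])

def sin(a):
    # d(sin a)/da = cos a
    return Var(math.sin(a.value), [(a, math.cos(a.value))])

# Differentiate y = x * sin(x); analytically dy/dx = sin(x) + x*cos(x).
x = Var(2.0)
y = mul(x, sin(x))
y.backward()
```

After `y.backward()`, `x.grad` holds sin(2) + 2·cos(2), matching the analytic derivative.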
Advancements in deep learning were significantly facilitated by AD's efficiency in computing gradients for complex loss functions [5], enabling the training of large-scale models such as modern Large Language Models [6]. Beyond the confines of machine learning, in fields such as atmospheric sciences and oceanography, AD is crucial for sensitivity analysis [7], parameter estimation [8], and data assimilation [9]. Furthermore, many recent scientific algorithms have integrated machine learning models within their systems [10], making AD more relevant.

Figure 1: DaCe AD Contribution Overview.
Sep-3-2025