Towards Sharp Minimax Risk Bounds for Operator Learning
Adcock, Ben, Maier, Gregor, Parhi, Rahul
A new paradigm in machine learning for scientific computing focuses on designing learning algorithms and methods for continuum problems. This paradigm is referred to as operator learning and has received considerable interest in the last few years [5,7,18,20,23-25,27,30,34,36]. The basic task may be posed as learning a map between infinite-dimensional function spaces, i.e., learning an operator F: X → Y, where, for example, X and Y are real, separable Hilbert spaces. Operator learning naturally arises in many scientific problems where one wants to learn how a continuum model, often described by partial differential equations (PDEs), maps inputs, such as parameters or boundary conditions, to outputs, such as states or observables. A prototypical example to keep in mind is learning parameter-to-solution maps of parametric PDEs [1,2,11]. In contrast to more classical surrogate modeling, which typically focuses on learning finite-dimensional parameter-to-solution maps for some fixed discretization, operator learning aims to learn/approximate the continuum map F: X → Y itself. Thus, the inputs and outputs are functions (not vectors), and the goal is to directly design discretization-invariant methods [7,23]. From a statistical perspective, this naturally leads to a nonparametric regression problem in which both the object of interest (the operator) and the observations (a finite number of noisy samples) are infinite-dimensional.
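The regression problem described above can be illustrated with a minimal toy sketch (not the paper's method): we discretize functions on a grid, take a hypothetical ground-truth operator F given by a smoothing integral kernel, generate noisy input-output samples, and estimate the operator by least squares. All names, the kernel choice, and the noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize [0, 1] with n grid points; functions become length-n vectors.
n, m = 32, 200  # grid size, number of training samples
x = np.linspace(0, 1, n)

# Hypothetical ground-truth operator F: a smoothing integral operator
# (F u)(t) = \int k(t, s) u(s) ds with a Gaussian kernel, discretized as
# an n-by-n matrix K (the 1/n factor approximates the integral).
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02) / n

# Training data: random input functions and noisy outputs y_i = F(u_i) + noise.
U = rng.standard_normal((m, n))               # rows are input functions u_i
Y = U @ K.T + 0.01 * rng.standard_normal((m, n))

# Nonparametric estimate of the operator on this discretization:
# least-squares fit of a linear map from inputs to outputs.
K_hat, *_ = np.linalg.lstsq(U, Y, rcond=None)
K_hat = K_hat.T

# Evaluate on a fresh input function not seen during training.
u_test = np.sin(2 * np.pi * x)
err = np.linalg.norm(K_hat @ u_test - K @ u_test) / np.linalg.norm(K @ u_test)
print(f"relative error on held-out input: {err:.3f}")
```

Because the fit is done on coefficients of the discretized functions rather than on a fixed parameter vector, refining the grid changes only the representation, which is the sense in which operator-learning methods aim to be discretization-invariant.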
Dec-22-2025