Learning Functional Transduction: S.I. Contents
Neural Information Processing Systems
We provide below the proofs of the results presented in the main text. They build on the theory of reproducing kernel Banach spaces (RKBS) developed in (Zhang et al., 2009; Song et al., 2013) to develop the notion of vector-valued RKBS, which relies on semi-inner products (Giles, 1967).

[...] = 0, ∀ j ≤ n, ∀ u ∈ U,   (9)

which allows us to say that O ∈ RKBS by Corollary 3.2 of Zhang (2013), which we recall hereafter. We first define, for any linear operator, [...]. We show our result in the case J = 1; the proof extends directly to any cardinality J.

Specifically, we tested three expressions: Exp. [...]. The first two expressions yield similar results in the ADR experiment at equal compute cost. We also tried a 'branch' and 'trunk' networks formulation of the model, as in DeepONet (Lu et al., 2021).

Table S.2: Summary of the architectural hyperparameters used to build the Transducer in the four experiments. 'Depth' corresponds to the number of layers of the network, and 'MLP dim' to the dimensionality of its hidden layers.

As stated, we used the same meta-training procedure for all experiments.

Table S.3: Summary of the meta-learning hyperparameters used to meta-train the Transducer in our four experiments.

Figure S.1: Examples of sampled functions δ(x) and ν(x) used to build the operators O.

We train Transducers for 200K gradient steps. We use the ΦFlow library (Holl et al., 2020), which allows for batched and differentiable simulations of fluid dynamics.

Figure S.5: Magnitude of the complex coefficients of the Fourier transform of an example pair of input and output functions.

In order to tackle the high-resolution climate modeling experiment, we take inspiration from Pathak et al. (2022), which combines neural operators with patch splitting. We set L = 12 in order to match the number of trainable parameters.
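For reference, the reproducing property that underlies the RKBS argument above can be stated as follows; this is the standard scalar-valued form from Zhang et al. (2009), and the vector-valued construction replaces the kernel values by operators:

```latex
% Standard reproducing property of an RKBS \mathcal{B} on a set \mathcal{U}
% (scalar-valued form; the vector-valued case uses an operator-valued kernel).
\begin{equation}
  \forall f \in \mathcal{B},\ \forall u \in \mathcal{U}:\qquad
  f(u) = \langle f,\, K(u,\cdot) \rangle_{\mathcal{B}}.
\end{equation}
```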
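The three tested expressions themselves are truncated in this text. Purely as an illustration of the kind of kernel/attention expressions that can be compared at equal compute cost, a sketch might look as follows; the kernel choices below are hypothetical and are not claimed to be the paper's actual three:

```python
import torch

# Hypothetical kernel/attention "expressions" for combining example outputs v,
# keyed by k, at queries q. None of these is claimed to be the paper's choice.
def softmax_dot_product(q, k, v):
    # Scaled dot-product attention weights over the example set.
    w = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return w @ v

def linear_kernel(q, k, v):
    # Unnormalized linear-kernel combination.
    return (q @ k.T) @ v / k.shape[0]

def gaussian_kernel(q, k, v, gamma=1.0):
    # RBF-kernel weights, normalized over the example set.
    w = torch.exp(-gamma * torch.cdist(q, k) ** 2)
    return (w / w.sum(dim=-1, keepdim=True)) @ v
```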
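A minimal sketch of such an episodic meta-training loop is given below, assuming a toy set-conditioned model and synthetic linear operators in place of the actual Transducer and operator distributions (all names here are hypothetical):

```python
import torch

# Hypothetical stand-in for the Transducer: a one-step kernel regressor that
# maps (example inputs, example outputs, query inputs) to query outputs.
class TinyTransducer(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.query = torch.nn.Linear(dim, dim)
        self.key = torch.nn.Linear(dim, dim)

    def forward(self, u_ex, v_ex, u_q):
        # Attention weights over the example set, then a weighted
        # combination of the example outputs (one transduction step).
        w = torch.softmax(self.query(u_q) @ self.key(u_ex).T, dim=-1)
        return w @ v_ex

dim = 32
model = TinyTransducer(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(200_000):  # 200K gradient steps, as stated above
    # Sample a synthetic "operator" O (a random linear map, for illustration)
    # with input/output function pairs discretized as vectors.
    O = torch.randn(dim, dim) / dim ** 0.5
    u = torch.randn(16, dim)
    v = u @ O
    u_ex, v_ex, u_q, v_q = u[:8], v[:8], u[8:], v[8:]
    loss = torch.nn.functional.mse_loss(model(u_ex, v_ex, u_q), v_q)
    opt.zero_grad()
    loss.backward()
    opt.step()
```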
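We do not reproduce the ΦFlow API here; the following generic PyTorch sketch only illustrates what a batched, differentiable simulation step looks like, with a toy explicit diffusion update standing in for the actual solver:

```python
import torch

def diffusion_step(u, nu, dt, dx):
    # One explicit Euler step of du/dt = nu * d2u/dx2 on a *batch* of 1-D
    # fields with periodic boundaries; autograd makes it differentiable.
    lap = (torch.roll(u, -1, dims=-1) - 2 * u + torch.roll(u, 1, dims=-1)) / dx ** 2
    return u + dt * nu * lap

u0 = torch.randn(64, 128, requires_grad=True)       # batch of 64 fields
u1 = diffusion_step(u0, nu=0.1, dt=1e-4, dx=1.0 / 128)
u1.sum().backward()                                 # gradients flow through the step
```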
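The quantity shown in Figure S.5 can be computed with a standard FFT; here is a minimal NumPy sketch, with a toy signal standing in for the actual functions:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 256, endpoint=False)
u = np.sin(2 * np.pi * 3 * x) + 0.5 * np.sin(2 * np.pi * 7 * x)  # toy function
coeffs = np.fft.rfft(u)           # complex Fourier coefficients
magnitude = np.abs(coeffs)        # the magnitudes plotted in Figure S.5
```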
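A minimal sketch of the patch-splitting operation is given below; the grid and patch sizes are illustrative (a 720 × 1440 grid with 8 × 8 patches, in the style of Pathak et al. (2022)), not necessarily the experiment's actual values:

```python
import numpy as np

def split_into_patches(field, p):
    """Split an (H, W, C) field into (H//p * W//p, p*p*C) flattened patches."""
    H, W, C = field.shape
    patches = field.reshape(H // p, p, W // p, p, C)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * C)
    return patches

field = np.random.randn(720, 1440, 3)     # e.g. a global lat-lon grid
tokens = split_into_patches(field, p=8)   # (16200, 192) patch tokens
```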