Supplementary Materials: A composable machine-learning approach for steady-state simulations on high-resolution grids
Neural Information Processing Systems
In this section, we provide details about the typical network architectures used in CoMLSim, followed by the training mechanics. Finally, we expand on the computational performance of CoMLSim in Section E and provide details of reproducibility in Section F.

Similar to traditional PDE solvers, the first step in CoMLSim is to decompose the computational domain into smaller subdomains. The domain is discretized into a finite number of computational elements using techniques such as the Finite Difference Method (FDM), the Finite Volume Method (FVM), and the Finite Element Method (FEM). CNN-based encoders and decoders are employed to compress the solution on each subdomain, because subdomains consist of structured data representations. In the encoder network, we use a series of convolution and max-pooling layers to extract global features from the solution. If the PDE conditions are uniform, their magnitude can simply be considered an encoding for a given subdomain. Since latent vectors do not have a spatial representation, DNN-based encoders and decoders are employed to compress them.
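The two preprocessing steps described above can be sketched in a few lines: splitting a structured solution field into non-overlapping subdomains, and the max-pooling downsampling a CNN encoder would apply within each one. This is a minimal NumPy sketch under assumed grid and subdomain sizes; the function names and shapes are illustrative, not taken from the CoMLSim implementation.

```python
import numpy as np

def decompose(field, sub):
    """Split a square 2-D solution field into non-overlapping
    sub x sub subdomains (hypothetical helper, not from the paper)."""
    n = field.shape[0]
    assert n % sub == 0, "grid size must be divisible by subdomain size"
    k = n // sub
    # Reshape into (k*k, sub, sub) blocks, row-major over subdomains.
    return field.reshape(k, sub, k, sub).swapaxes(1, 2).reshape(-1, sub, sub)

def max_pool(block, p=2):
    """Naive p x p max-pooling: the downsampling step a CNN encoder
    uses between convolutions to compress a subdomain (illustrative)."""
    m = block.shape[0] // p
    return block.reshape(m, p, m, p).max(axis=(1, 3))

# A 64x64 solution field split into 8x8 subdomains gives 64 blocks.
field = np.arange(64 * 64, dtype=float).reshape(64, 64)
subdomains = decompose(field, 8)
print(subdomains.shape)        # -> (64, 8, 8)
print(max_pool(subdomains[0]).shape)  # -> (4, 4)
```

In the real model, the pooling is interleaved with learned convolution layers and the flattened output is mapped to the subdomain's latent vector; the decoder mirrors these steps with upsampling.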