Appendix: Inverse Learning of Symmetries 1 Model

Neural Information Processing Systems

To do so, we describe the encoder term I(Z; X), which is computed as the Kullback-Leibler divergence (D_KL) between p_φ(z|x) and p(z). Up to this point, however, we have only learned the parameters of the Gaussian distribution. The naive approach requires estimating the joint distribution of the variables. A number of methods for estimating lower bounds on mutual information exist [1, 11]. Such bounds, however, suffer from inherent statistical limitations [8].
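As a concrete illustration of this encoder term, the sketch below computes the closed-form D_KL between a diagonal Gaussian posterior and a standard normal prior; averaging it over a batch gives the usual upper-bound estimate of I(Z; X). This is a minimal sketch under the assumption of a diagonal Gaussian posterior; the tensor shapes and names are illustrative, not the paper's exact model.

```python
import torch

def gaussian_kl(mu, logvar):
    """Closed-form D_KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    summed over latent dimensions, one value per sample."""
    return 0.5 * torch.sum(torch.exp(logvar) + mu**2 - 1.0 - logvar, dim=-1)

# Illustrative use: stand-ins for encoder outputs on a batch of 32 inputs
# with an 8-dimensional latent space; the batch mean estimates
# E_x[ D_KL(p_phi(z|x) || p(z)) ], an upper bound on I(Z; X).
mu, logvar = torch.randn(32, 8), torch.zeros(32, 8)
i_zx_upper = gaussian_kl(mu, logvar).mean()
```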


Supplementary Materials: A composable machine-learning approach for steady-state simulations on high-resolution grids

Neural Information Processing Systems

Similar to traditional PDE solvers, the first step in CoMLSim is to decompose the computational domain into smaller subdomains. The domain is discretized into a finite number of computational elements, using techniques such as the Finite Difference Method (FDM), the Finite Volume Method (FVM), and the Finite Element Method (FEM). In this section, we provide details about the typical network architectures used in CoMLSim, followed by the training mechanics. The solution on each subdomain is compressed into a low-dimensional latent vector; CNN-based encoders and decoders are employed for this compression because subdomains consist of structured data representations. In the encoder network, we use a series of convolution and max-pooling layers to extract global features from the solution. If the PDE conditions are uniform, their magnitude can simply be taken as the encoding for a given subdomain. Since latent vectors do not have a spatial representation, DNN-based encoders and decoders are employed to compress them. Finally, we expand on the computational performance of CoMLSim in Section E and provide details of reproducibility in Section F.
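To make the compression step concrete, here is a minimal sketch of a CNN subdomain autoencoder of the kind described above. The grid size, channel counts, and latent dimension are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SubdomainAE(nn.Module):
    """Compresses a structured subdomain solution (1 x 32 x 32 grid here)
    into a small latent vector and reconstructs it."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (16, 8, 8)),
            nn.Upsample(scale_factor=2), nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SubdomainAE()
recon, z = model(torch.randn(4, 1, 32, 32))  # batch of 4 subdomain solutions
```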


where, to ensure feasibility, the step size is given by γ = min(1, min_{i : ξ̂_i < ξ_i} ξ_i / (ξ_i − ξ̂_i))

Neural Information Processing Systems

In this section, we present the active set method [63, Chapters 16.4 & 16.5] as applied to the SparseMAP optimization problem (Eq. 4) [13]. In this case, points on the boundary of K have one or more zero coordinates; in contrast, softmax(s) ∝ exp(s) is always strictly inside the simplex. Alternatively, observe that it is enough to find Z. Denote the solution of Eq. 13 (extended with zeroes) by ξ̂ ∈ ℝ^{|Z|}.
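A minimal sketch of the feasibility line search implied by the step-size rule above: given the current iterate and the relaxed QP solution, it returns the largest step that keeps all coordinates nonnegative. The variable names `xi` and `xi_hat` are illustrative, not the paper's notation in code.

```python
import numpy as np

def feasible_step(xi, xi_hat):
    """Largest gamma in (0, 1] such that (1 - gamma) * xi + gamma * xi_hat
    keeps all coordinates nonnegative (i.e., stays feasible)."""
    shrink = xi_hat < xi                      # only these coordinates can hit zero
    if not np.any(shrink):
        return 1.0
    ratios = xi[shrink] / (xi[shrink] - xi_hat[shrink])
    return min(1.0, float(ratios.min()))

xi = np.array([0.5, 0.3, 0.2])
xi_hat = np.array([0.7, 0.5, -0.2])           # relaxed solution leaves the simplex
gamma = feasible_step(xi, xi_hat)             # 0.2 / (0.2 + 0.2) = 0.5
```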


6bb56208f672af0dd65451f869fedfd9-Supplemental.pdf

Neural Information Processing Systems

In most applications, E = Y to begin with (all y are potential maximizers for some vector of costs; otherwise they are not included in the set), and all points in Y have positive mass. It therefore also satisfies this property. We recall that we assume that θ yields a unique maximum of the linear program on C. As a consequence, all convergent subsequences of y_n converge to the same limit y*(θ): it is the unique accumulation point of this sequence. It follows directly that y_n converges to y*(θ), as it lives in a compact set, which yields the desired result. Using different reference vectors v yields different perturbed operations, and v = (1, 2, ..., d) is commonly used.
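As an illustration of such a perturbed operation, the sketch below estimates the perturbed maximizer by Monte Carlo for the ranking case with reference vector v = (1, 2, ..., d), where the argmax over permutations of v reduces to a sort. The function name, noise scale, and sample count are illustrative assumptions.

```python
import numpy as np

def perturbed_argsort(theta, eps=0.1, n_samples=100, rng=None):
    """Monte Carlo estimate of E[argmax_{y = perm of v} <y, theta + eps * Z>],
    with v = (1, ..., d); by the rearrangement inequality the argmax
    assigns the largest entry of v to the largest perturbed score."""
    rng = np.random.default_rng() if rng is None else rng
    d = theta.shape[0]
    v = np.arange(1, d + 1)                    # reference vector (1, 2, ..., d)
    out = np.zeros(d)
    for _ in range(n_samples):
        z = rng.standard_normal(d)
        ranks = np.empty(d)
        ranks[np.argsort(theta + eps * z)] = v  # rank 1 = smallest entry
        out += ranks
    return out / n_samples                      # smooth ranks, in expectation

theta = np.array([0.3, -1.2, 2.0])
print(perturbed_argsort(theta))                 # approx (2, 1, 3) for small eps
```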