Error analysis for the deep Kolmogorov method
Iulian Cîmpean, Thang Do, Lukas Gonon, Arnulf Jentzen, Ionel Popescu
The deep Kolmogorov method is a simple and popular deep-learning-based method for approximating solutions of partial differential equations (PDEs) of Kolmogorov type. In this work we provide an error analysis of the deep Kolmogorov method for heat PDEs. Specifically, we establish convergence, with convergence rates, for the overall mean square distance between the exact solution of the heat PDE and the realization function of the approximating deep neural network (DNN) produced by a stochastic optimization algorithm. The rates are given in terms of the size of the architecture of the approximating DNN (the depth/number of hidden layers and the width of the hidden layers), the number of random sample points used in the loss function (the number of input-output data pairs), and the size of the optimization error made by the employed stochastic optimization method.
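As a concrete illustration of the method (a minimal sketch, not the paper's implementation), the following PyTorch code applies the deep Kolmogorov method to the heat PDE ∂u/∂t = Δu with initial condition φ. By the Feynman-Kac representation, u(T, x) = E[φ(x + √(2T) Z)] with Z ~ N(0, I_d), so a DNN trained to regress the noisy labels φ(x + √(2T) Z) against random sample points x is driven toward u(T, ·), since the conditional expectation minimizes the mean square loss. The dimension, initial condition, architecture, sampling box, and step counts below are illustrative assumptions.

```python
# Minimal sketch of the deep Kolmogorov method for the heat PDE
# du/dt = Laplacian(u), u(0, x) = phi(x), via the Feynman-Kac
# representation u(T, x) = E[phi(x + sqrt(2T) Z)], Z ~ N(0, I_d).
import torch

d, T = 5, 1.0                        # spatial dimension, terminal time (illustrative)
phi = lambda x: (x ** 2).sum(dim=-1) # initial condition phi(x) = |x|^2 (illustrative)

net = torch.nn.Sequential(           # the approximating DNN
    torch.nn.Linear(d, 64), torch.nn.GELU(),
    torch.nn.Linear(64, 64), torch.nn.GELU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = 4.0 * torch.rand(1024, d) - 2.0   # random sample points x ~ U([-2, 2]^d)
    z = torch.randn(1024, d)              # standard Gaussian increments
    y = phi(x + (2.0 * T) ** 0.5 * z)     # noisy labels phi(x + sqrt(2T) Z)
    loss = ((net(x).squeeze(dim=-1) - y) ** 2).mean()  # Monte Carlo L^2 loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# For phi(x) = |x|^2 the exact solution is u(T, x) = |x|^2 + 2*d*T,
# so at x = 0 the network should approach 2*d*T = 10.
x_test = torch.zeros(1, d)
print(net(x_test).item(), "vs exact u(T, 0) =", 2 * d * T)
```

In the terminology of the abstract, the width and depth of the network govern the approximation error, the number of random input-output pairs entering the loss governs the sampling error, and the finitely many Adam iterations leave an optimization error; the overall mean square distance analyzed in the paper combines all three contributions.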
A Additional Experiments
In this section, we present additional experiments which shed more light on the performance of X. In Section 4.1, we consider … = 3. In Section 4.2 and Appendix A.1, we examine the performance of different algorithms for the … . In Figure 5, the performance of both greedy heuristics is very similar under the two one-sided losses. We observe that the objective values are no longer uniformly positive, and are no longer monotonically increasing in the target size.
In this section, we present the proofs of all theoretical results. The following lemma shows the submodularity of the objective U in the selection S: if the loss (·, M) is convex, then U(S) is submodular in S.
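To illustrate what the greedy heuristics above exploit, here is a minimal sketch of the plain greedy rule for maximizing a submodular set objective under a cardinality constraint. The weighted-coverage objective used below is a hypothetical stand-in, since the concrete U(S) defined via the convex loss is not reproduced in this fragment.

```python
# Minimal sketch of the plain greedy heuristic for maximizing a
# submodular set objective U under a cardinality constraint k.
# The coverage objective below is a hypothetical stand-in for the
# paper's actual U(S).
def greedy(ground_set, U, k):
    """Repeatedly add the element with the largest marginal gain U(S + e) - U(S)."""
    S = set()
    for _ in range(k):
        candidates = [e for e in ground_set if e not in S]
        best = max(candidates, key=lambda e: U(S | {e}) - U(S))
        S.add(best)
    return S

# Stand-in objective: coverage, a classic monotone submodular function.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
U = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy(sets.keys(), U, k=2))  # picks {0, 2}, covering all six items
```

For monotone submodular objectives this rule carries the classical (1 - 1/e) approximation guarantee of Nemhauser, Wolsey, and Fisher (1978); the observation above that objective values stop being monotone under the one-sided losses suggests that guarantee need not transfer to that setting.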