Supplementary Material for " Non-Asymptotic Error Bounds for Bidirectional GANs "

Neural Information Processing Systems 

Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China
yyangdc@connect.ust.hk

In this supplementary material, we first prove Theorem 3.2, and then Theorems 3.1 and 3.3. We use σ to denote the ReLU activation function in neural networks, that is, σ(x) = max{x, 0}. We use the notations O(·) and O_d(·) to express the order of a function in slightly different ways: O(·) omits universal constants that do not depend on d, while O_d(·) omits constants that may depend on d. So far, most related works assume that the target distribution µ is supported on a compact set, for example Chen et al. (2020) and Liang (2020).
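As a purely illustrative sketch of how the two order notations are used (the error functional Err(n), the constants c and C_d, and the rate n^{-β} log n below are hypothetical placeholders, not bounds taken from the paper):

  % Illustrative only: c is a universal constant, C_d is a constant that may depend on d,
  % and the rates are placeholders rather than results of the paper.
  \[
    \text{if } \mathrm{Err}(n) \le c\, n^{-\beta},
    \quad \text{we write } \mathrm{Err}(n) = O\!\big(n^{-\beta}\big);
  \]
  \[
    \text{if } \mathrm{Err}(n) \le C_d\, n^{-\beta} \log n,
    \quad \text{we write } \mathrm{Err}(n) = O_d\!\big(n^{-\beta} \log n\big).
  \]

In words, the subscript-free notation tracks only universal constants, while the d-dependent notation absorbs any constant whose value may grow with the dimension d.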
