Supplementary Material for "Non-Asymptotic Error Bounds for Bidirectional GANs"
Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China
yyangdc@connect.ust.hk

In this supplementary material, we first prove Theorem 3.2, and then Theorems 3.1 and 3.3. We use σ to denote the ReLU activation function in neural networks, defined by σ(x) = max{x, 0}. We use the notations O(·) and O(·) to express the order of a function in slightly different ways: the former omits universal constants that do not depend on d, while the latter omits constants that may depend on d. To date, most related works assume that the target distribution µ is supported on a compact set; see, for example, Chen et al. (2020) and Liang (2020).
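As a concrete illustration of the activation defined above, σ(x) = max{x, 0} can be sketched in a few lines of Python. The helper name `relu` is ours for illustration only and does not appear in the paper:

```python
def relu(x):
    """ReLU activation: sigma(x) = max{x, 0}."""
    return max(x, 0.0)

# In a neural network the activation is applied elementwise,
# e.g. to the pre-activation vector of a hidden layer:
pre_activations = [-2.0, -0.5, 0.0, 1.5]
hidden = [relu(v) for v in pre_activations]  # negative entries are zeroed
```

Note that ReLU is piecewise linear, which is what makes ReLU networks suitable for the approximation arguments used in the proofs below.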