A deterministic path uses self-attention and cross-attention to summarize contexts.

B.1 1D Regression

Architectures. For models without attention (CNP, NP, BNP), we set ℓ_pre = 4, ℓ_post = 2, ℓ_dec = 3, and d_h = 128. For NP we set d_z = 128. For Student-t noise, we added noise ε = γτ to the curves generated from a GP with RBF kernel, where τ ~ T(2.1) is a Student's t distribution with 2.1 degrees of freedom and γ ~ Unif(0, 0.15).
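A minimal sketch of this noise process, assuming 1D inputs drawn uniformly from a fixed range and one noise scale γ per curve (the input range, jitter, and function names are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    """RBF (squared-exponential) kernel matrix for 1D inputs."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_curve_with_t_noise(n_points=50, df=2.1, gamma_max=0.15):
    """Sample a GP-RBF curve and corrupt it with heavy-tailed Student-t noise.

    Noise model from the text: eps = gamma * tau, tau ~ T(df),
    with gamma ~ Unif(0, gamma_max) drawn once per curve.
    """
    x = rng.uniform(-2.0, 2.0, size=n_points)        # assumed input range
    K = rbf_kernel(x) + 1e-6 * np.eye(n_points)      # jitter for stability
    y = rng.multivariate_normal(np.zeros(n_points), K)
    gamma = rng.uniform(0.0, gamma_max)              # one noise scale per curve
    eps = gamma * rng.standard_t(df, size=n_points)  # heavy-tailed noise
    return x, y + eps
```

With 2.1 degrees of freedom the t distribution has finite variance but heavy tails, so occasional large outliers corrupt the curves, which is the model-data mismatch these experiments probe.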
Supplementary Material for Bootstrapping Neural Processes
Juho Lee, Yoonho Lee
We sampled 100 GP prior functions with zero mean and unit variance. After realizing them, the prior functions are optimized via Bayesian optimization. All the experiments are implemented with [8]. Same as Appendix B.1, except that all the models were trained for 200. The other details are the same as in Appendix B.1. We also measure the sharpness [10], which is essentially the average prediction variance.

[Table residue: CE and sharpness under t-noise, split into seen classes (0-9) and unseen classes (10-46); only the first CNP entry (CE 0.448) is recoverable.]
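Taking "sharpness" as described above (the average of the model's per-point predictive variances), a minimal sketch of the metric (the function name is illustrative, not from the paper's code):

```python
import numpy as np

def sharpness(pred_variances):
    """Sharpness as the average predictive variance.

    pred_variances: per-point predictive variances sigma^2(x_i) from the model.
    Lower values mean narrower (more confident) predictive distributions;
    sharpness says nothing about calibration on its own.
    """
    return float(np.mean(np.asarray(pred_variances)))
```

For example, `sharpness([1.0, 3.0])` gives `2.0`. Sharpness is typically read alongside an accuracy or likelihood metric (here, CE), since a model can be sharp yet badly miscalibrated.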
Bootstrapping Neural Processes
Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh
Unlike traditional statistical modeling, in which a user typically hand-specifies a prior, Neural Processes (NPs) implicitly define a broad class of stochastic processes with neural networks. Given a data stream, an NP learns a stochastic process that best describes the data. While this "data-driven" way of learning stochastic processes has proven effective on various types of data, NPs still rely on the assumption that the uncertainty in the stochastic process is modeled by a single latent variable, which potentially limits flexibility. To this end, we propose the Bootstrapping Neural Process (BNP), a novel extension of the NP family using the bootstrap. The bootstrap is a classical data-driven technique for estimating uncertainty, which allows BNP to learn the stochasticity in NPs without assuming a particular form. We demonstrate the efficacy of BNP on various types of data and its robustness in the presence of model-data mismatch.
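The classical bootstrap the abstract refers to resamples the observed data with replacement; in the NP setting the natural unit to resample is the context set. A minimal sketch of paired bootstrap resampling of contexts (function and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_contexts(x_ctx, y_ctx, n_bootstrap=10):
    """Classical paired bootstrap: resample (x, y) context pairs with replacement.

    Each resampled context set induces a slightly different fit, and the
    spread across fits serves as a data-driven uncertainty estimate,
    with no parametric form assumed for the noise.
    """
    n = len(x_ctx)
    for _ in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)  # indices sampled with replacement
        yield x_ctx[idx], y_ctx[idx]
```

Feeding each resampled context through the same model and looking at the variability of the resulting predictions is the basic mechanism the bootstrap contributes here, in contrast to encoding all uncertainty in a single latent variable.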