Function Space Bayesian Pseudocoreset for Bayesian Neural Networks
A Bayesian pseudocoreset is a compact synthetic dataset summarizing essential information of a large-scale dataset and thus can be used as a proxy dataset for scalable Bayesian inference. Typically, a Bayesian pseudocoreset is constructed by minimizing a divergence measure between the posterior conditioning on the pseudocoreset and the posterior conditioning on the full dataset. However, evaluating the divergence can be challenging, particularly for the models like deep neural networks having high-dimensional parameters.
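The construction principle described above can be sketched on a toy conjugate-Gaussian model, where both posteriors are available in closed form and the divergence is tractable. Everything here is an illustrative assumption, not the paper's method: the model (unit-variance Gaussian likelihood with a standard-normal prior on the mean), the weighted pseudocoreset parameterization, the pseudocoreset size `m`, and the use of `scipy.optimize.minimize` are all choices made only for this sketch.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=1000)  # toy "large-scale" dataset
n = len(x)
m = 5  # pseudocoreset size (assumption for this sketch)

# Conjugate model: x_i ~ N(theta, 1), prior theta ~ N(0, 1).
# With weighted pseudodata (u, w), the posterior over theta is
# N( sum(w*u)/(sum(w)+1), 1/(sum(w)+1) ) in closed form.
def posterior(u, w):
    s = w.sum()
    return (w @ u) / (s + 1.0), 1.0 / (s + 1.0)

# Full-data posterior (unit weights on every real data point).
mu_p, var_p = x.sum() / (n + 1.0), 1.0 / (n + 1.0)

def kl_to_full(params):
    # Pseudocoreset locations u and log-weights (weights kept positive).
    u, logw = params[:m], params[m:]
    mu_q, var_q = posterior(u, np.exp(logw))
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) between the two Gaussians.
    return 0.5 * (np.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Construct the pseudocoreset by minimizing the divergence, as in the
# general recipe above (here with numerical gradients for simplicity).
init = np.concatenate([rng.normal(size=m), np.zeros(m)])
res = minimize(kl_to_full, init, method="L-BFGS-B")
u_opt, w_opt = res.x[:m], np.exp(res.x[m:])
print(res.fun)  # remaining KL divergence (should be close to zero)
```

In this conjugate setting the divergence can be driven essentially to zero by five weighted points, because the posterior depends on the data only through two sufficient statistics; the point of the abstract is that for deep neural networks no such closed form exists, which is what makes evaluating the divergence hard.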
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.67)
A Derivation of Eq. 9
We report the t-SNE visualization of ByPE-VAE and the standard VAE in Figure 6 (t-SNE visualization of learned latent representations, colored by labels). Second, we provide more generated samples: Figure 7 shows random samples drawn from ByPE-VAEs trained on different datasets, and Figure 8 shows samples generated by ByPE-VAE from the same pseudodata point in each plate. In Section 5.2, we report only the KNN results for MNIST and Fashion-MNIST in Figure 1.