

Early-stopped aggregation: Adaptive inference with computational efficiency

Ohn, Ilsang, Fan, Shitao, Jun, Jungbin, Lin, Lizhen

arXiv.org Machine Learning

When considering a model selection or, more generally, an aggregation approach for adaptive statistical inference, it is often necessary to compute estimators over a wide range of model complexities, including unnecessarily large models even when the true data-generating process is relatively simple, due to the lack of prior knowledge. This requirement can lead to substantial computational inefficiency. In this work, we propose a novel framework for efficient model aggregation called early-stopped aggregation (ESA): instead of computing and aggregating estimators for all candidate models, we compute only a small number of simpler ones using an early-stopping criterion and aggregate only these for final inference. Our framework is versatile and applies to both Bayesian model selection, in particular within the variational Bayes framework, and frequentist estimation, including a general penalized estimation setting. We investigate the adaptive optimality of the ESA approach across three learning paradigms. We first show that ESA achieves optimal adaptive contraction rates in the variational Bayes setting under mild conditions. We extend this result to variational empirical Bayes, where prior hyperparameters are chosen in a data-dependent manner. In addition, we apply the ESA approach to frequentist aggregation, including both penalization-based and sample-splitting implementations, and establish the corresponding theory. As we demonstrate, there is a clear unification between early-stopped Bayes and frequentist penalized aggregation, with a common "energy" functional, comprising a data-fitting term and a complexity-control term, that drives both procedures. We further present several applications and numerical studies that highlight the efficiency and strong performance of the proposed approach.
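The abstract describes the ESA loop only at a high level. Below is a minimal sketch of the general pattern it suggests: fit candidate models from simple to complex, track an "energy" (a data-fitting term plus a complexity-control term) for each, stop once the energy stops improving, and aggregate only the models fitted so far with exponential weights. The `fit_model` and `energy` callables and the patience-based stopping rule are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def early_stopped_aggregation(data, candidate_models, fit_model, energy, patience=2):
    """Sketch of early-stopped aggregation (ESA).

    Fits candidate models in order of increasing complexity, tracks an
    "energy" (data-fit + complexity penalty) for each, and stops once the
    energy has not improved for `patience` consecutive models. Only the
    estimators computed so far are aggregated.
    NOTE: `fit_model`, `energy`, and the patience rule are hypothetical
    placeholders for illustration, not the paper's criterion.
    """
    fitted, energies = [], []
    best, since_best = np.inf, 0
    for model in candidate_models:          # ordered simple -> complex
        est = fit_model(model, data)
        e = energy(est, model, data)        # data-fitting term + complexity term
        fitted.append(est)
        energies.append(e)
        if e < best:
            best, since_best = e, 0
        else:
            since_best += 1
            if since_best >= patience:      # early-stopping criterion triggered
                break
    # Aggregate only the computed estimators with exponential weights.
    w = np.exp(-(np.asarray(energies) - best))
    w /= w.sum()
    return fitted, w
```

The exponential weighting mirrors the "energy" view in the abstract: models with lower energy (better fit at comparable complexity) dominate the aggregate, while models never fitted contribute nothing, which is the source of the computational savings.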


Appendix for "Episodic Multi-Task Learning with Heterogeneous Neural Processes"

Neural Information Processing Systems

In this section, we list frequently asked questions from researchers who helped proofread this manuscript. As shown in Table 1, we use "Heterogeneous tasks" to distinguish the different branches of multi-task learning, while "Episodic training" describes the data-feeding strategy; thus, "Heterogeneous tasks" is not applicable there (-). In episodic multi-task learning, we restrict the scope of the problem to the case where tasks in the same episode are related and share the same target space. This also implies that tasks with the same target space are related.






Bi-level Score Matching for Learning Energy-based Latent Variable Models

Neural Information Processing Systems

However, it remains largely open to learn energy-based latent variable models (EBLVMs), except in some special cases. This paper presents a bi-level score matching (BiSM) method to learn EBLVMs with general structures by reformulating SM as a bi-level optimization problem. The higher level introduces a variational posterior of the latent variables and optimizes a modified SM objective, and the lower level optimizes the variational posterior to fit the true posterior.
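A minimal PyTorch sketch of this bi-level structure might look as follows: the lower-level loop fits an amortized Gaussian variational posterior q(z|x) via a reparameterized ELBO, and the higher-level step then updates the energy network. Here a denoising score matching surrogate stands in for the paper's modified SM objective, and `energy_net` / `q_net` are assumed interfaces; this illustrates the optimization pattern, not BiSM's exact gradient estimator.

```python
import torch

def bism_style_step(x, energy_net, q_net, opt_energy, opt_q,
                    inner_steps=5, sigma=0.1):
    """One illustrative bi-level step; x: (batch, dim) plain data tensor.

    Lower level: fit q(z|x) to the true posterior of the EBLVM via a
    reparameterized ELBO. Higher level: update the energy network with a
    denoising-score-matching surrogate (an assumption for illustration;
    BiSM's modified SM objective differs).
    """
    # --- lower level: fit the variational posterior q(z|x) ---
    for _ in range(inner_steps):
        mu, log_std = q_net(x)
        z = mu + log_std.exp() * torch.randn_like(mu)  # reparameterization trick
        # ELBO up to a constant: -E(x, z) + entropy of the diagonal Gaussian q
        elbo = -energy_net(x, z).mean() + log_std.sum(dim=1).mean()
        opt_q.zero_grad()
        (-elbo).backward()   # stray grads on energy_net are zeroed below
        opt_q.step()

    # --- higher level: score-matching-style update of the energy network ---
    mu, log_std = q_net(x)
    z = (mu + log_std.exp() * torch.randn_like(mu)).detach()
    x_noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
    e = energy_net(x_noisy, z).sum()
    score = -torch.autograd.grad(e, x_noisy, create_graph=True)[0]  # -dE/dx
    # DSM target: score of the Gaussian corruption, -(x_noisy - x)/sigma^2
    loss = ((score + (x_noisy - x) / sigma ** 2) ** 2).sum(dim=1).mean()
    opt_energy.zero_grad()
    loss.backward()
    opt_energy.step()
    return loss.item()
```

Detaching z in the higher-level step keeps the two levels separate: the energy update treats the current variational posterior as fixed, exactly as the bi-level formulation prescribes.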




Variational Bayesian Monte Carlo with Noisy Likelihoods

Neural Information Processing Systems

In the original formulation, observations are assumed to be exact (non-noisy), so the GP likelihood only included a small observation noise σ²_obs for numerical stability [32].
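To make the role of σ²_obs concrete, here is a minimal, self-contained GP regression sketch: in the exact-observation setting, σ²_obs is a tiny constant jitter on the kernel diagonal, while with noisy likelihood evaluations it becomes the (possibly per-point) observation variance. The squared-exponential kernel and its hyperparameters are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def gp_posterior_mean(X, y, X_star, sigma2_obs, lengthscale=1.0, sf2=1.0):
    """GP posterior mean with observation noise on the kernel diagonal.

    sigma2_obs may be a scalar (constant jitter, as in the exact-observation
    case) or an array of per-point variances (noisy likelihood evaluations).
    Squared-exponential kernel; hyperparameters are illustrative.
    X: (n, d), y: (n,), X_star: (m, d).
    """
    def k(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return sf2 * np.exp(-0.5 * d2 / lengthscale ** 2)

    # The noise enters only here: a diagonal added to the training covariance.
    K = k(X, X) + np.diag(np.broadcast_to(sigma2_obs, len(X)).astype(float))
    alpha = np.linalg.solve(K, y)            # (K + diag(sigma2_obs))^{-1} y
    return k(X_star, X) @ alpha
```

Calling this with sigma2_obs=1e-6 recovers the jitter-only (exact observation) setting, while passing an array of per-point variance estimates handles heteroskedastic noisy likelihood evaluations with no other change to the model.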