Nested Variational Inference
Hao Wu, Jan-Willem van de Meent
We develop nested variational inference (NVI), a family of methods that learn proposals for nested importance samplers by minimizing a forward or reverse KL divergence at each level of nesting. NVI is applicable to many commonly used importance sampling strategies and provides a mechanism for learning intermediate densities, which can serve as heuristics to guide the sampler. Our experiments apply NVI to (a) sample from a multimodal distribution using a learned annealing path, (b) learn heuristics that approximate the likelihood of future observations in a hidden Markov model, and (c) perform amortized inference in hierarchical deep generative models. We observe that optimizing nested objectives leads to improved sample quality in terms of log average weight and effective sample size.
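To make the sampler-quality metrics concrete, here is a minimal, self-contained sketch of annealed importance weighting between a broad proposal and a bimodal target, the kind of setting NVI learns proposals and intermediate densities for. The densities, the fixed geometric annealing path, and all names here are illustrative assumptions, not the paper's method; a learned forward kernel would move the particles between levels, which we omit for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_proposal(x):
    # Broad initial density N(0, 3^2)
    return -0.5 * (x / 3.0) ** 2 - np.log(3.0 * np.sqrt(2 * np.pi))

def log_target(x):
    # Unnormalized bimodal target: mixture of N(-2, 0.5^2) and N(2, 0.5^2)
    return np.logaddexp(-0.5 * ((x + 2) / 0.5) ** 2,
                        -0.5 * ((x - 2) / 0.5) ** 2)

def log_anneal(x, beta):
    # Geometric path: pi_beta ∝ proposal^(1-beta) * target^beta
    return (1 - beta) * log_proposal(x) + beta * log_target(x)

betas = np.linspace(0.0, 1.0, 11)
x = rng.normal(0.0, 3.0, size=5000)       # particles from the proposal
log_w = np.zeros_like(x)
for b0, b1 in zip(betas[:-1], betas[1:]):
    # Accumulate incremental weights across levels of the annealing path
    # (a learned forward kernel would move x here; we keep x fixed)
    log_w += log_anneal(x, b1) - log_anneal(x, b0)

w = np.exp(log_w - log_w.max())
ess = w.sum() ** 2 / (w ** 2).sum()       # effective sample size
print(f"log avg weight: {log_w.mean():.2f}, ESS: {ess:.0f} / {x.size}")
```

Better proposals and intermediate densities show up directly as a higher log average weight and a larger effective sample size, which is what the nested objectives target.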
Precise expressions for random projections: Low-rank approximation and randomized Newton
It is often desirable to reduce the dimensionality of a large dataset by projecting it onto a low-dimensional subspace. Matrix sketching has emerged as a powerful technique for performing such dimensionality reduction very efficiently. Even though there is an extensive literature on the worst-case performance of sketching, existing guarantees are typically very different from what is observed in practice. We exploit recent developments in the spectral analysis of random matrices to develop novel techniques that provide provably accurate expressions for the expected value of random projection matrices obtained via sketching. These expressions can be used to characterize the performance of dimensionality reduction in a variety of common machine learning tasks, ranging from low-rank approximation to iterative stochastic optimization. Our results apply to several popular sketching methods, including Gaussian and Rademacher sketches, and they enable precise analysis of these methods in terms of spectral properties of the data. Empirical results show that the expressions we derive reflect the practical performance of these sketching methods, down to lower-order effects and even constant factors.
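As a concrete illustration of the setting the abstract describes, the sketch below compares a Gaussian sketch-and-project low-rank approximation against the best rank-s approximation from a truncated SVD. The matrix sizes and the decaying spectrum are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, s = 1000, 50, 15              # rows, columns, sketch size

# Matrix with an exponentially decaying spectrum (column-wise scaling)
sigma = 0.8 ** np.arange(d)
A = rng.normal(size=(n, d)) * sigma

# Gaussian sketch: compress to s rows, then project A onto the
# row space of the sketched matrix S @ A
S = rng.normal(size=(s, n)) / np.sqrt(s)
_, _, Vt = np.linalg.svd(S @ A, full_matrices=False)
P = Vt.T @ Vt                        # rank-s projection onto sketched row space
err_sketch = np.linalg.norm(A - A @ P, "fro") ** 2

# Best achievable rank-s error (truncated SVD), for comparison
sv = np.linalg.svd(A, compute_uv=False)
err_best = (sv[s:] ** 2).sum()
print(err_sketch / err_best)         # a modest factor above 1 when the spectrum decays
```

The ratio printed is exactly the kind of quantity whose expected value the paper's expressions characterize in terms of the spectral properties of the data.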
Author Feedback
We thank all the reviewers for their feedback! Our paper formalizes a data acquisition problem in which one cannot verify the true labels of the collected data. The writing of our submission focused more on the basics to ensure clarity for the general, diverse NeurIPS readership. Most of the technical results were either deferred to the appendix or compressed to fit within the page limit. One of our major technical contributions is the explicit sensitivity guarantee for the peer-prediction-style mechanisms.
Generalizing Bayesian Optimization with Decision-theoretic Entropies
Willie Neiswanger
Bayesian optimization (BO) is a popular method for efficiently inferring optima of an expensive black-box function via a sequence of queries. Existing information-theoretic BO procedures aim to make queries that most reduce the uncertainty about optima, where the uncertainty is captured by Shannon entropy. However, an optimal measure of uncertainty would, ideally, factor in how we intend to use the inferred quantity in some downstream procedure. In this paper, we instead consider a generalization of Shannon entropy from work in statistical decision theory [13, 39], which contains a broad class of uncertainty measures parameterized by a problem-specific loss function corresponding to a downstream task. We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures such as knowledge gradient, expected improvement, and entropy search. We then show how alternative choices for the loss yield a flexible family of acquisition functions that can be customized for use in novel optimization settings.
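Expected improvement, one of the acquisition functions the abstract names as a special case, has a standard closed form under a Gaussian posterior. The sketch below shows that closed form; the function signature and the exploration parameter `xi` are illustrative conventions, not the paper's notation.

```python
from math import erf, exp, pi, sqrt

def _cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def _pdf(z):
    # Standard normal PDF
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def expected_improvement(mu, sigma, best, xi=0.0):
    """EI for maximization, given the posterior mean/std at a candidate point
    and the best observed value so far."""
    sigma = max(sigma, 1e-12)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * _cdf(z) + sigma * _pdf(z)

# At equal uncertainty, a candidate with a higher posterior mean scores higher:
print(expected_improvement(0.0, 1.0, best=0.5),
      expected_improvement(1.0, 1.0, best=0.5))
```

EI trades off the predicted gain over the incumbent against the posterior uncertainty, which is exactly the kind of downstream-task structure the decision-theoretic entropies generalize.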
MetaTeacher: Coordinating Multi-Model Domain Adaptation for Medical Image Classification (Appendix)
We follow the derivation route in [7], except for the coordinating-weight part. According to Eq. (7), we update θ. By the chain rule, Eq. (15) can be rewritten as Eq. (16), and the right-hand side of Eq. (16) then follows. [Equations not recovered from the PDF extraction.]

Figure 3: The Class Activation Map (CAM) [10] is used to perform a visual ablation analysis on a chest X-ray image from the Open-i dataset. The background is blue, with red or yellow marking the disease location. The number in the top-left corner of each image is the predicted probability of the corresponding disease. We visualize domain-adaptation performance for the transfer scenario NIH-CXR14, CheXpert, MIMIC-CXR → Open-i. The visualized Open-i sample suffers from Atelectasis and Effusion.
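For readers unfamiliar with the visualization in Figure 3, the basic CAM computation of [10] is a weighted sum of the final convolutional feature maps, weighted by the classifier weights of the class of interest. The sketch below shows that computation with illustrative shapes (512 channels, 7×7 maps, 14 disease classes); the random inputs stand in for a real network's activations and weights.

```python
import numpy as np

rng = np.random.default_rng(2)
features = rng.random((512, 7, 7))   # C x H x W feature maps after the last conv layer
weights = rng.random((14, 512))      # classifier weights: num_classes x C

def cam(features, class_weights, cls):
    # Weighted sum over channels for the chosen class -> H x W saliency map
    m = np.tensordot(class_weights[cls], features, axes=1)
    m -= m.min()
    return m / (m.max() + 1e-12)     # normalize to [0, 1] for heatmap overlay

heatmap = cam(features, weights, cls=3)
print(heatmap.shape)
```

The normalized map is then upsampled to the input resolution and overlaid on the X-ray, producing the blue-background, red/yellow-highlight images described in the caption.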