Instance Based Approximations to Profile Maximum Likelihood, Nima Anari, Moses Charikar, Kirankumar Shiragur (Stanford University)

Neural Information Processing Systems

In this paper we provide a new efficient algorithm for approximately computing the profile maximum likelihood (PML) distribution, a prominent quantity in symmetric property estimation. We provide an algorithm which matches the previous best known efficient algorithms for computing approximate PML distributions and improves when the number of distinct observed frequencies in the given instance is small. We achieve this result by exploiting new sparsity structure in approximate PML distributions and providing a new matrix rounding algorithm, of independent interest. Leveraging this result, we obtain the first provable computationally efficient implementation of PseudoPML, a general framework for estimating a broad class of symmetric properties. Additionally, we obtain efficient PML-based estimators for distributions with small profile entropy, a natural instance-based complexity measure. Further, we provide a simpler and more practical PseudoPML implementation that matches the best-known theoretical guarantees of such an estimator and evaluate this method empirically.
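The profile at the heart of this abstract is the multiset of observed frequencies of a sample, with symbol identities discarded. A minimal sketch of computing a profile (the function name and representation are illustrative, not from the paper):

```python
from collections import Counter

def profile(samples):
    """Return the profile of a sample: the multiset of observed
    frequencies, here represented as sorted (frequency, multiplicity)
    pairs, ignoring which symbols carried those frequencies."""
    freqs = Counter(samples)                         # symbol -> count
    return sorted(Counter(freqs.values()).items())   # frequency -> multiplicity

# "abracadabra" has counts a:5, b:2, r:2, c:1, d:1, so its profile
# records two symbols seen once, two seen twice, one seen five times.
print(profile("abracadabra"))
```

The "number of distinct observed frequencies" the abstract's improvement depends on is just the length of this list.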


Fair Adaptive Experiments

Neural Information Processing Systems

Randomized experiments have been the gold standard for assessing the effectiveness of a treatment, policy, or intervention, spanning various fields, including social sciences, biomedical studies, and e-commerce. The classical complete randomization approach assigns treatments based on a pre-specified probability and may lead to inefficient use of data. Adaptive experiments improve upon complete randomization by sequentially learning and updating treatment assignment probabilities using accrued evidence during the experiment. Hence, they can help achieve efficient data use and higher estimation efficiency. However, their application can also raise fairness and equity concerns, as assignment probabilities may vary drastically across groups of participants.
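To make the tension concrete, here is a hypothetical sketch (not the paper's method) of one classical adaptive rule, Neyman-style allocation, with a probability floor as a crude fairness guardrail; the function and its parameters are illustrative assumptions:

```python
def neyman_prob(sd_treat, sd_ctrl, floor=0.1):
    """Neyman-style adaptive allocation: assign treatment with
    probability proportional to the treatment arm's estimated outcome
    standard deviation, which improves estimation efficiency. The clip
    to [floor, 1 - floor] keeps any group's assignment probability from
    collapsing toward 0 or 1, the kind of drastic variation that raises
    the fairness concerns described above."""
    p = sd_treat / (sd_treat + sd_ctrl)
    return min(max(p, floor), 1 - floor)

# The noisier arm is sampled more often, but never below the floor.
print(neyman_prob(sd_treat=2.0, sd_ctrl=1.0))
print(neyman_prob(sd_treat=0.01, sd_ctrl=10.0))
```

Without the clip, a group whose outcomes look nearly deterministic early on could be almost entirely shut out of one arm.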


Latent Template Induction with Gumbel-CRFs Appendix

Neural Information Processing Systems

As noted in the main paper, the baseline estimator PM-MRF also involves in-depth exploitation of the structure of models and gradients, and is thus quite competitive. Here we give a detailed discussion. Papandreou and Yuille [4] proposed the Perturb-and-MAP Random Field, an efficient sampling method for general Markov Random Fields: the potentials Φ are perturbed with random noise, and the MAP configuration of the perturbed model is returned. This MAP ẑ from the perturbed Φ can be viewed as a biased sample from the original MRF. This method is much faster than an MCMC sampler when an efficient MAP algorithm exists.
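A minimal sketch of the perturb-and-MAP idea on a toy fully factorized model, where it coincides with the exact Gumbel-max trick (for general MRFs with low-order perturbations the sample is biased, as noted above); the function is illustrative, not Papandreou and Yuille's implementation:

```python
import numpy as np

def perturb_and_map(log_potentials, rng):
    """Perturb-and-MAP on a single categorical variable: add i.i.d.
    Gumbel noise to the log-potentials, then return the MAP (argmax)
    of the perturbed model. For one variable this is exact Gumbel-max
    sampling; for a full MRF one would run a MAP solver instead of
    argmax and obtain a biased but fast sample."""
    gumbel = -np.log(-np.log(rng.random(log_potentials.shape)))
    return int(np.argmax(log_potentials + gumbel, axis=-1))
```

Repeated calls reproduce the categorical distribution exactly in this single-variable case, which is why the method works well whenever MAP is cheap.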


Latent Template Induction with Gumbel-CRFs, Bin Bi

Neural Information Processing Systems

Learning to control the structure of sentences is a challenging problem in text generation. Existing work either relies on simple deterministic approaches or RL-based hard structures. We explore the use of structured variational autoencoders to infer latent templates for sentence generation using a soft, continuous relaxation in order to utilize reparameterization for training. Specifically, we propose a Gumbel-CRF, a continuous relaxation of the CRF sampling algorithm using a relaxed Forward-Filtering Backward-Sampling (FFBS) approach. As a reparameterized gradient estimator, the Gumbel-CRF gives more stable gradients than score-function based estimators. As a structured inference network, we show that it learns interpretable templates during training, which allows us to control the decoder during testing. We demonstrate the effectiveness of our methods with experiments on data-to-text generation and unsupervised paraphrase generation.
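The building block behind the relaxed FFBS described above is the Gumbel-softmax relaxation of categorical sampling. A minimal sketch of that single step (the full Gumbel-CRF applies it to each local sampling decision of Forward-Filtering Backward-Sampling; this standalone function is an illustrative assumption, not the paper's code):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Continuous relaxation of categorical sampling: perturb the
    logits with Gumbel noise, then replace the hard argmax with a
    temperature-controlled softmax. The output is a point on the
    simplex that is differentiable w.r.t. the logits, enabling the
    reparameterized (pathwise) gradients the abstract refers to."""
    g = -np.log(-np.log(rng.random(logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max())                          # stable softmax
    return e / e.sum()

# As tau -> 0 the relaxed sample approaches a one-hot (hard) sample.
```

Lower temperatures give samples closer to discrete templates but with higher-variance gradients, which is the usual trade-off when tuning tau.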




Why I've converted to using HP's Omen AI for serious FPS gains

PCWorld

While visiting HP's Omen gaming exhibit at the company's Amplify Conference in Nashville, Tennessee, this week, I realized something: I've been optimizing my PC's performance for Counter-Strike 2 all wrong! What I'm doing is painstakingly combing through my hardware settings, OS settings, and game settings in a confusing and sometimes panic-ridden mind muddle in the hope I'll achieve an actual uplift in FPS. That's where HP's Omen AI comes in. Originally unveiled at CES 2025 in Las Vegas last January, Omen AI made another appearance at the conference in Nashville, this time showing off some impressive FPS gains on both laptops and desktop PCs. In one of the promotional videos, Omen AI boosted a laptop's performance from 82 FPS to 111 FPS.


c164bbc9d6c72a52c599bbb43d8db8e1-Paper.pdf

Neural Information Processing Systems

Deep neural networks have achieved impressive performance in many areas. Designing a fast and provable method for training neural networks is a fundamental question in machine learning. The classical training method requires paying Ω(mnd) cost for both forward computation and backward computation, where m is the width of the neural network, and we are given n training points in d-dimensional space.
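To see where the mnd term comes from, consider a one-hidden-layer network: the forward pass multiplies the n x d data matrix by a d x m weight matrix, which takes about n*m*d multiply-adds, and backpropagating through the same layer costs the same order. A minimal sketch (the tiny sizes are illustrative):

```python
import numpy as np

# n training points in d dimensions, hidden width m.
n, d, m = 4, 3, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))      # data matrix
W = rng.standard_normal((d, m))      # first-layer weights

# Forward pass: (n x d) @ (d x m) costs ~ n*m*d multiply-adds.
H = np.maximum(X @ W, 0.0)           # ReLU activations, shape (n, m)
print(H.shape, n * m * d)
```

This per-iteration n*m*d cost is the barrier that faster-training results aim to beat, e.g. by updating only the neurons that are active for each point.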


Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model

Neural Information Processing Systems

"Distribution shift" is the main obstacle to the success of offline reinforcement learning. A learning policy may take actions beyond the behavior policy's knowledge, referred to as Out-of-Distribution (OOD) actions. The Q-values for these OOD actions can be easily overestimated. As a result, the learning policy is biased by using incorrect Q-value estimates. One common approach to avoid Q-value overestimation is to make a pessimistic adjustment.
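A minimal sketch of one generic pessimistic adjustment: penalize the Q-value estimate by its uncertainty, measured here as the spread across samples (the paper draws these from a consistency-model Q-distribution; the mean-minus-std form below is a common illustrative choice, not necessarily the paper's exact penalty):

```python
import numpy as np

def pessimistic_q(q_samples, beta=1.0):
    """Uncertainty-penalized Q-value: Q_pess = mean(Q) - beta * std(Q).
    For in-distribution actions the samples agree and the penalty is
    small; for OOD actions the samples disagree, so the large std
    drags the estimate down and the policy avoids them."""
    q = np.asarray(q_samples, dtype=float)
    return q.mean() - beta * q.std()

# Agreeing estimates are untouched; disagreeing ones are penalized.
print(pessimistic_q([1.0, 1.0, 1.0]))
print(pessimistic_q([0.0, 2.0]))
```

The coefficient beta trades off pessimism against learning signal: too small and OOD overestimation leaks through, too large and the policy becomes overly conservative.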