
Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms

Neural Information Processing Systems

In stochastic zeroth-order optimization, a problem of practical relevance is understanding how to fully exploit the local geometry of the underlying objective function. We consider a fundamental setting in which the objective function is quadratic, and provide the first tight characterization of the optimal Hessian-dependent sample complexity.
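
As a rough illustration of the setting (not the paper's algorithm), the sketch below optimizes an unknown quadratic f(x) = 0.5 x^T H x + b^T x from noisy zeroth-order evaluations only, using a standard two-point gradient estimator. The Hessian, noise level, query radius, and step-size schedule are all assumptions made for the example.

```python
# Hypothetical sketch of the bandit feedback model described above (not the
# paper's algorithm): the learner queries points and observes noisy values of
# an unknown quadratic, and must locate the minimizer from those samples alone.
import numpy as np

rng = np.random.default_rng(0)
d = 5
H = np.diag(rng.uniform(0.5, 4.0, size=d))   # unknown Hessian (assumed positive definite)
b = rng.normal(size=d)
x_star = -np.linalg.solve(H, b)              # true minimizer, kept only for reference

def noisy_value(x, noise_std=0.1):
    """Zeroth-order oracle: noisy evaluation of the quadratic objective."""
    return 0.5 * x @ H @ x + b @ x + rng.normal(scale=noise_std)

def two_point_gradient(x, delta=0.5):
    """Standard two-point gradient estimate built from bandit feedback."""
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return d * (noisy_value(x + delta * u) - noisy_value(x - delta * u)) / (2 * delta) * u

x = np.zeros(d)
for t in range(1, 5001):
    x -= two_point_gradient(x) / (t + 10)    # simple decaying step size

print("distance to minimizer:", np.linalg.norm(x - x_star))
```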



Towards Text Generation with Adversarially Learned Neural Outlines

Neural Information Processing Systems

Recent progress in deep generative models has been fueled by two paradigms: autoregressive and adversarial models. We propose a combination of both approaches with the goal of learning generative models of text. Our method first produces a high-level sentence outline and then generates words sequentially, conditioning on both the outline and the previous outputs. We generate outlines with an adversarial model trained to approximate the distribution of sentences in a latent space induced by general-purpose sentence encoders. This provides strong, informative conditioning for the autoregressive stage. Our quantitative evaluations suggest that conditioning information from generated outlines guides the autoregressive model to produce realistic samples, comparable to maximum-likelihood trained language models, even at high temperatures with multinomial sampling. Qualitative results also demonstrate that this generative procedure yields natural-looking sentences and interpolations.
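
The following is a minimal sketch of the two-stage structure described above, not the paper's architecture: a stand-in generator maps noise to an "outline" vector in a sentence-embedding space, and a GRU decoder conditions on that outline together with the previously emitted words at every step. All layer sizes and module names are illustrative assumptions.

```python
# Minimal sketch of the two-stage idea (illustrative sizes and modules, not the
# paper's architecture): an outline vector is generated from noise, and an
# autoregressive decoder conditions on it plus the previous outputs.
import torch
import torch.nn as nn

vocab_size, embed_dim, outline_dim, hidden_dim = 1000, 64, 128, 256

outline_generator = nn.Sequential(           # stands in for the adversarially
    nn.Linear(32, 256), nn.ReLU(),           # trained generator over the
    nn.Linear(256, outline_dim),             # sentence-encoder latent space
)

class ConditionalDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim + outline_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, outline):
        # Concatenate the outline to every input embedding so each step is
        # conditioned on both the outline and the previously generated words.
        emb = self.embed(tokens)
        cond = outline.unsqueeze(1).expand(-1, emb.size(1), -1)
        h, _ = self.rnn(torch.cat([emb, cond], dim=-1))
        return self.out(h)

noise = torch.randn(4, 32)
outline = outline_generator(noise)                 # sampled sentence outline
tokens = torch.randint(0, vocab_size, (4, 10))     # previously emitted words
logits = ConditionalDecoder()(tokens, outline)     # next-word distributions
print(logits.shape)                                # (4, 10, vocab_size)
```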


Fox News AI Newsletter: Expert warns just 20 cloud images can make an AI deepfake video of your child

FOX News

Texas high school student Elliston Berry joins 'Fox & Friends' to discuss the House's passage of a new bill that criminalizes the sharing of non-consensual intimate images, including content created with artificial intelligence. Welcome to Fox News' Artificial Intelligence newsletter with the latest AI technology advancements.

IN TODAY'S NEWSLETTER:
- Peek-a-boo, big tech sees you: Expert warns just 20 cloud images can make an AI deepfake video of your child
- 5 AI terms you keep hearing and what they actually mean
- AI to monitor NYC subway safety as crime concerns rise

First Lady Melania Trump, joined by U.S. President Donald Trump, delivers remarks before President Trump signed the TAKE IT DOWN Act into law in the Rose Garden of the White House on May 19, 2025, in Washington, DC. The first lady made the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act a priority, traveling to Capitol Hill to lobby lawmakers and show her support for the legislation, which addresses non-consensual intimate imagery, or "revenge porn," and artificial intelligence deepfakes posted online and to social media. DEEPFAKE DANGERS: Parents love capturing their kids' big moments, from first steps to birthday candles.


6e2986deda273d8fb903342841fcc4dc-Paper-Conference.pdf

Neural Information Processing Systems

We study indiscriminate poisoning for linear learners, where an adversary injects a few crafted examples into the training data with the goal of forcing the induced model to incur higher test error. Inspired by the observation that linear learners on some datasets are able to resist the best known attacks even without any defenses, we further investigate whether datasets can be inherently robust to indiscriminate poisoning attacks against linear learners. For theoretical Gaussian distributions, we rigorously characterize the behavior of an optimal poisoning attack, defined as the poisoning strategy that attains the maximum risk of the induced model under a given poisoning budget. Our results prove that linear learners can indeed be robust to indiscriminate poisoning if the class-wise data distributions are well separated with low variance and the constraint set containing all permissible poisoning points is small. These findings largely explain the drastic variation in empirical performance of state-of-the-art poisoning attacks on linear learners across benchmark datasets, taking an important initial step toward understanding the underlying reasons why some learning tasks are vulnerable to data poisoning attacks.
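
To make the threat model concrete, here is a hedged sketch using a crude heuristic attack, not the optimal attack the paper characterizes: a small budget of flipped-label points drawn from a bounded constraint set is injected into the training set of a linear learner, and test error is compared before and after. The Gaussian data, the 3% budget, and the clipping box are assumptions made for the example.

```python
# Illustrative sketch of the threat model (a crude heuristic attack, not the
# optimal attack characterized in the paper): inject a small budget of
# adversarially labeled points into the training set of a linear learner and
# measure the change in test error.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, sep=2.0, var=1.0):
    """Two well-separated, low-variance Gaussian classes in 2D."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(scale=np.sqrt(var), size=(n, 2)) + sep * (2 * y[:, None] - 1)
    return x, y

x_train, y_train = make_data(500)
x_test, y_test = make_data(2000)

def test_error(x, y):
    clf = LogisticRegression().fit(x, y)
    return 1.0 - clf.score(x_test, y_test)

# Poisoning budget: 3% of the training set, placed inside a bounded constraint
# set and given flipped labels (a simple stand-in for a crafted attack).
budget = int(0.03 * len(x_train))
x_poison = np.clip(rng.normal(scale=3.0, size=(budget, 2)), -5, 5)
y_poison = 1 - (x_poison[:, 0] > 0).astype(int)     # flipped w.r.t. the clean rule

print("clean test error:   ", test_error(x_train, y_train))
print("poisoned test error:", test_error(np.vstack([x_train, x_poison]),
                                          np.concatenate([y_train, y_poison])))
```

With well-separated, low-variance classes as above, the error increase stays small, which is consistent with the robustness conditions the abstract describes.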


Enhancing Consistency-Based Image Generation via Adversarially-Trained Classification and Energy-Based Discrimination

Neural Information Processing Systems

The recently introduced Consistency models pose an efficient alternative to diffusion algorithms, enabling rapid and good-quality image synthesis. These methods overcome the slowness of diffusion models by directly mapping noise to data, while maintaining (relatively) simpler training. Consistency models enable fast one- or few-step generation, but they typically fall somewhat short in sample quality when compared to their diffusion origins. In this work we propose a novel and highly effective technique for post-processing Consistency-based generated images, enhancing their perceptual quality. Our approach utilizes a joint classifier-discriminator model, in which both portions are trained adversarially. While the classifier aims to grade an image based on its assignment to a designated class, the discriminator portion of the very same network leverages the softmax values to assess the proximity of the input image to the targeted data manifold, thereby serving as an Energy-based Model. By employing example-specific projected gradient iterations under the guidance of this joint model, we refine synthesized images and achieve improved FID scores on the ImageNet 64x64 dataset for both Consistency-Training and Consistency-Distillation techniques.
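
Below is a hedged sketch of the refinement step, assuming an energy of the form E(x) = -logsumexp over the classifier logits (the usual way softmax values induce an energy-based model). The network here is an untrained stand-in rather than the paper's adversarially trained joint classifier-discriminator, and the step count, step size, and pixel-range projection are illustrative.

```python
# Hedged sketch of the post-processing step described above (untrained,
# illustrative classifier; the paper uses an adversarially trained joint
# classifier-discriminator): treat -logsumexp(logits) as an energy and run a
# few projected gradient steps on a generated image.
import torch
import torch.nn as nn

classifier = nn.Sequential(                  # stand-in for the joint model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

def energy(x):
    # Softmax normalizer as an energy: low energy = closer to the data manifold.
    return -torch.logsumexp(classifier(x), dim=1)

def refine(x, steps=5, step_size=0.01):
    x = x.clone()
    for _ in range(steps):
        x.requires_grad_(True)
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        # Gradient step on the energy, projected back to the valid pixel range.
        x = (x.detach() - step_size * grad.sign()).clamp(0.0, 1.0)
    return x

fake_batch = torch.rand(4, 3, 64, 64)        # stands in for consistency-model samples
refined = refine(fake_batch)
print(refined.shape)
```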