Amnesia as a Catalyst for Enhancing Black Box Pixel Attacks in Image Classification and Object Detection
It is well known that query-based attacks tend to have relatively higher success rates in adversarial black-box attacks. While research on black-box attacks is actively being conducted, relatively few studies have focused on pixel attacks that target only a limited number of pixels. In image classification, query-based pixel attacks often rely on patches, which heavily depend on randomness and neglect the fact that scattered pixels are more suitable for adversarial attacks. Moreover, to the best of our knowledge, query-based pixel attacks have not been explored in the field of object detection. To address these issues, we propose a novel pixel-based black-box attack called Remember and Forget Pixel Attack using Reinforcement Learning (RFPAR), consisting of two main components: the Remember and Forget processes. RFPAR mitigates randomness and avoids patch dependency by leveraging rewards generated through a one-step RL algorithm to perturb pixels. RFPAR effectively creates perturbed images that minimize confidence scores while adhering to limited pixel constraints.
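To make the Remember/Forget idea concrete, here is a minimal sketch of a query-based few-pixel attack loop in that spirit: perturbations that lower the model's confidence are kept ("remembered") and all others are reverted ("forgotten"). The random sampling here stands in for the paper's RL policy, and `model` is any black-box classifier returning class probabilities; both are assumptions, not the authors' implementation.

```python
import numpy as np

def pixel_attack(model, image, label, budget=10, queries=1000):
    """image: HxWxC float array in [0, 1]; model: batch -> class probabilities."""
    adv = image.copy()
    best = model(adv[None])[0, label]           # confidence to drive down
    remembered = set()                          # pixels whose change we keep
    rng = np.random.default_rng(0)
    for _ in range(queries):
        h = int(rng.integers(image.shape[0]))
        w = int(rng.integers(image.shape[1]))
        if (h, w) not in remembered and len(remembered) >= budget:
            continue                            # respect the pixel budget
        old = adv[h, w].copy()
        adv[h, w] = rng.random(image.shape[2])  # candidate pixel value
        conf = model(adv[None])[0, label]
        if conf < best:                         # Remember: the change helped
            best = conf
            remembered.add((h, w))
        else:                                   # Forget: revert the pixel
            adv[h, w] = old
    return adv, best
```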
Sample Complexity of Goal-Conditioned Hierarchical Reinforcement Learning
Hierarchical Reinforcement Learning (HRL) algorithms can perform planning at multiple levels of abstraction. Empirical results have shown that state or temporal abstractions might significantly improve the sample efficiency of algorithms. Yet, we still do not have a complete understanding of the basis of those efficiency gains nor any theoretically grounded design rules. In this paper, we derive a lower bound on the sample complexity for the considered class of goal-conditioned HRL algorithms. The proposed lower bound empowers us to quantify the benefits of hierarchical decomposition and leads to the design of a simple Q-learning-type algorithm that leverages hierarchical decompositions.
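For intuition about the class of algorithms being analyzed, here is a toy sketch of goal-conditioned hierarchical Q-learning on a 1-D chain: a high-level learner selects subgoals and a low-level learner reaches them under intrinsic rewards. The environment, subgoal set, and hyperparameters are illustrative assumptions, not the paper's construction.

```python
import numpy as np

N = 20                                   # chain states 0..N-1, goal at N-1
SUBGOALS = [4, 9, 14, 19]                # hand-picked subgoals (assumption)
ALPHA, GAMMA, EPS = 0.5, 0.99, 0.1

q_hi = np.zeros((N, len(SUBGOALS)))      # Q_high(state, subgoal index)
q_lo = np.zeros((len(SUBGOALS), N, 2))   # Q_low(subgoal, state, action)
rng = np.random.default_rng(0)

def step(s, a):                          # actions: 0 = left, 1 = right
    return min(N - 1, max(0, s + (1 if a == 1 else -1)))

for _ in range(500):
    s = 0
    while s != N - 1:
        g = rng.integers(len(SUBGOALS)) if rng.random() < EPS else int(q_hi[s].argmax())
        s0, t = s, 0
        while s != SUBGOALS[g] and t < 50:       # low level: reach the subgoal
            a = rng.integers(2) if rng.random() < EPS else int(q_lo[g, s].argmax())
            s2 = step(s, a)
            r = 1.0 if s2 == SUBGOALS[g] else 0.0        # intrinsic reward
            q_lo[g, s, a] += ALPHA * (r + GAMMA * q_lo[g, s2].max() - q_lo[g, s, a])
            s, t = s2, t + 1
        R = 1.0 if s == N - 1 else 0.0           # extrinsic reward at the top
        boot = 0.0 if s == N - 1 else GAMMA * q_hi[s].max()
        q_hi[s0, g] += ALPHA * (R + boot - q_hi[s0, g])
```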
Google confirms plan to bake Gemini AI directly into Chrome
Like most other tech companies, Google is investing heavily in the development of AI models and trying to incorporate AI into anything and everything in its portfolio. The latest endeavor involves Google integrating its Gemini AI assistant into its widely used Chrome browser. What was once a rumor back in March has now been confirmed by Google, which intends to incorporate its Gemini AI assistant directly into Chrome, reports Windows Latest. We'll probably learn exactly how it will all work at Google I/O 2025, which will be held on May 20 and 21. From what we know so far based on leaks and rumors, the new feature is called GLIC (which stands for "Gemini Live in Chrome") and it comes with a new "Glic" section in Chrome's settings page.
Are aligned neural networks adversarially aligned?
Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study adversarial alignment, and ask to what extent these models remain aligned when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force.
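A minimal sketch of the kind of brute-force search the abstract alludes to: enumerate short candidate suffixes and keep the one that maximizes the model's likelihood of a chosen target continuation. Here `loglik(prompt, target)` is a hypothetical scoring helper (e.g., summed target-token log-probabilities); no real model API is assumed.

```python
import itertools

def brute_force_suffix(loglik, prompt, target, vocab, length=3, budget=10_000):
    best_suffix, best_score = None, float("-inf")
    # Exhaustively enumerate suffixes of `length` words, up to a query budget.
    for suffix in itertools.islice(itertools.product(vocab, repeat=length), budget):
        score = loglik(prompt + " " + " ".join(suffix), target)
        if score > best_score:
            best_suffix, best_score = " ".join(suffix), score
    return best_suffix, best_score
```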
Dream the Impossible: Outlier Imagination with Diffusion Models
Utilizing auxiliary outlier datasets to regularize the machine learning model has demonstrated promise for out-of-distribution (OOD) detection and safe prediction. Due to the labor intensity in data collection and cleaning, automating outlier data generation has been a long-desired alternative. Despite the appeal, generating photo-realistic outliers in the high dimensional pixel space has been an open challenge for the field. To tackle the problem, this paper proposes a new framework Dream-OOD, which enables imagining photo-realistic outliers by way of diffusion models, provided with only the in-distribution (ID) data and classes. Specifically, Dream-OOD learns a text-conditioned latent space based on ID data, and then samples outliers from the low-likelihood region of that latent space, which can be decoded into images by the diffusion model.
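A schematic sketch of the sampling step under strong simplifying assumptions: model the ID embeddings as a single Gaussian in the learned latent space and draw points pushed k standard deviations out, i.e. from the low-likelihood shell, before handing them to the diffusion decoder. The `embed` and `decode` names are stand-ins, not the paper's actual modules.

```python
import numpy as np

def sample_outlier_latents(id_embeddings, n=100, k=3.0, seed=0):
    rng = np.random.default_rng(seed)
    mu = id_embeddings.mean(axis=0)
    cov = np.cov(id_embeddings.T) + 1e-4 * np.eye(id_embeddings.shape[1])
    L = np.linalg.cholesky(cov)                     # Gaussian fit to ID latents
    d = rng.standard_normal((n, id_embeddings.shape[1]))
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # unit directions
    return mu + k * d @ L.T                         # low-likelihood samples

# outlier_images = decode(sample_outlier_latents(embed(id_images, id_labels)))
```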
ResMem: Learn what you can and memorize the rest
The impressive generalization performance of modern neural networks is attributed in part to their ability to implicitly memorize complex training patterns. Inspired by this, we explore a novel mechanism to improve model generalization via explicit memorization. Specifically, we propose the residual-memorization (ResMem) algorithm, a new method that augments an existing prediction model (e.g., a neural network) by fitting the model's residuals with a nearest-neighbor based regressor. The final prediction is then the sum of the original model and the fitted residual regressor. By construction, ResMem can explicitly memorize the training labels. We start by formulating a stylized linear regression problem and rigorously show that ResMem results in a more favorable test risk over a base linear neural network. Then, we empirically show that ResMem consistently improves the test set generalization of the original prediction model across standard vision and natural language processing benchmarks.
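The mechanism is simple enough to sketch directly. Below is a minimal ResMem-style wrapper using a k-NN residual regressor; the feature space, value of k, and base model are placeholders rather than the paper's exact setup.

```python
from sklearn.neighbors import KNeighborsRegressor

class ResMem:
    def __init__(self, base_model, k=5):
        self.base = base_model                      # any fitted predictor
        self.knn = KNeighborsRegressor(n_neighbors=k)

    def fit(self, X, y):
        residuals = y - self.base.predict(X)        # what the base model misses
        self.knn.fit(X, residuals)                  # explicitly memorize residuals
        return self

    def predict(self, X):
        # Final prediction = base model + memorized residual correction.
        return self.base.predict(X) + self.knn.predict(X)
```

With k = 1, the wrapper reproduces the training labels exactly on the training set, which is the "memorize the rest" behavior the title refers to.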
Colts delete controversial schedule-release video, say Microsoft rights were violated
Emmanuel Acho, LeSean McCoy, James Jones and Chase Daniel react to the NFL's announcement of the Philadelphia Eagles hosting the Dallas Cowboys to kick off the 2025 NFL season. The Indianapolis Colts came under fire on Wednesday night for a since-deleted social media video that was intended to creatively reveal the team's 2025 game schedule. The clip, which was animated in the style of the Microsoft-owned video game Minecraft, opened with a segment previewing the team's Week 1 game against the Miami Dolphins. In it, Dolphins star Tyreek Hill was depicted as a dolphin and was then approached by a Coast Guard boat blaring a police siren, with a police officer glaring at Hill. Hill was arrested in September in a widely publicized controversy that featured bodycam footage of the wide receiver being pinned to the ground by police while put in handcuffs.
What Planning Problems Can A Relational Neural Network Solve?
Goal-conditioned policies are generally understood to be "feed-forward" circuits, in the form of neural networks that map from the current state and the goal specification to the next action to take. However, under what circumstances such a policy can be learned and how efficient the policy will be are not well understood. In this paper, we present a circuit complexity analysis for relational neural networks (such as graph neural networks and transformers) representing policies for planning problems, by drawing connections with serialized goal regression search (S-GRS). We show that there are three general classes of planning problems, in terms of the growth of circuit width and depth as a function of the number of objects and planning horizon, providing constructive proofs. We also illustrate the utility of this analysis for designing neural networks for policy learning.
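For readers unfamiliar with the search procedure the analysis is built on, here is a toy sketch of serialized goal regression: to achieve an atom, pick an operator whose effects contain it and recursively achieve its preconditions one at a time. Delete effects and search control are ignored, so this illustrates the regression idea rather than reproducing S-GRS.

```python
def regress(goal, state, ops, depth=8):
    """ops: list of (name, preconditions, effects); returns a plan of op names or None."""
    if goal in state:
        return []
    if depth == 0:
        return None
    for name, pre, eff in ops:
        if goal not in eff:
            continue
        plan, cur = [], set(state)
        for p in pre:                        # serialize the preconditions
            sub = regress(p, cur, ops, depth - 1)
            if sub is None:
                break
            plan += sub
            cur.add(p)                       # monotone (no-deletes) assumption
        else:
            return plan + [name]
    return None

# Example with two toy operators:
# ops = [("pickup-A", {"clear-A"}, {"holding-A"}),
#        ("stack-A-B", {"holding-A", "clear-B"}, {"A-on-B"})]
# regress("A-on-B", {"clear-A", "clear-B"}, ops)  ->  ["pickup-A", "stack-A-B"]
```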
MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation
The recent popularity of text-to-image diffusion models (DM) can largely be attributed to the intuitive interface they provide to users. The intended generation can be expressed in natural language, with the model producing faithful interpretations of text prompts. However, expressing complex or nuanced ideas in text alone can be difficult. To ease image generation, we propose MultiFusion that allows one to express complex and nuanced concepts with arbitrarily interleaved inputs of multiple modalities and languages. Our experimental results demonstrate the efficient transfer of capabilities from individual modules to the downstream model.
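A speculative sketch of the fusion interface the abstract describes: each segment of an interleaved prompt (text in any language, or an image) is embedded by its own pre-trained encoder, and the embedding sequences are concatenated in order to condition the diffusion model. Encoder names and shapes here are assumptions, not the released implementation.

```python
import torch

def encode_prompt(segments, text_encoder, image_encoder):
    parts = []
    for seg in segments:
        if isinstance(seg, str):
            parts.append(text_encoder(seg))        # (n_tokens, d) per text span
        else:
            parts.append(image_encoder(seg))       # (n_patches, d) per image
    return torch.cat(parts, dim=0)                 # one conditioning sequence

# cond = encode_prompt(["Ein Hund mit", dog_image, "wearing a red hat"], te, ie)
```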
PID-Inspired Inductive Biases for Deep Reinforcement Learning in Partially Observable Control Tasks
Deep reinforcement learning (RL) has shown immense potential for learning to control systems through data alone. However, one challenge deep RL faces is that the full state of the system is often not observable. When this is the case, the policy needs to leverage the history of observations to infer the current state. At the same time, differences between the training and testing environments make it critical for the policy not to overfit to the sequence of observations it sees at training time. As such, there is an important balancing act between having the history encoder be flexible enough to extract relevant information, yet robust to changes in the environment.
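A hedged sketch of what a PID-inspired history summary could look like: alongside the current observation (the "P" term), keep a running sum ("I") and a one-step difference ("D") of the features, and feed the concatenation to the policy. This illustrates the inductive bias the title refers to; the paper's actual encoder may differ.

```python
import numpy as np

class PIDEncoder:
    def __init__(self, dim):
        self.integral = np.zeros(dim)
        self.prev = np.zeros(dim)

    def reset(self):
        self.integral[:] = 0.0
        self.prev[:] = 0.0

    def __call__(self, obs):
        self.integral += obs                 # I: accumulated history
        deriv = obs - self.prev              # D: recent change
        self.prev = obs.copy()
        return np.concatenate([obs, self.integral, deriv])  # [P, I, D] features
```

Fixed sum-and-difference features of this kind are far less flexible than a recurrent encoder, which is exactly the trade-off the abstract highlights: less capacity to extract arbitrary information from the history, but less room to overfit to training-time observation sequences.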