Adversarial Reprogramming Revisited
Adversarial reprogramming, introduced by Elsayed, Goodfellow, and Sohl-Dickstein, seeks to repurpose a neural network to perform a different task by manipulating its input without modifying its weights. We prove that two-layer ReLU neural networks with random weights can be adversarially reprogrammed to achieve arbitrarily high accuracy on Bernoulli data models over hypercube vertices, provided the network width is no greater than its input dimension. We also substantially strengthen a recent result of Phuong and Lampert on the directional convergence of gradient flow, and obtain as a corollary that training two-layer ReLU neural networks on orthogonally separable datasets can cause their adversarial reprogramming to fail. We support these theoretical results with experiments demonstrating that, as long as batch normalisation layers are suitably initialised, even untrained networks with random weights are susceptible to adversarial reprogramming. This contrasts with observations in several recent works which suggested that adversarial reprogramming of untrained networks is not possible with any degree of reliability.
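As an illustration of the setup (not the paper's proof construction), here is a minimal NumPy sketch: a two-layer ReLU network with frozen random weights is "reprogrammed" for a toy Bernoulli task on hypercube vertices by learning only an additive input shift. The task, loss, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = m = 20                                   # width equal to input dimension
W = rng.normal(size=(m, d)) / np.sqrt(d)     # frozen random first layer
v = rng.choice([-1.0, 1.0], size=m)          # frozen random second layer

def net(x):
    # the network itself is never updated
    return v @ np.maximum(W @ x, 0.0)

# toy Bernoulli task on hypercube vertices: label = sign of first coordinate
X = rng.choice([-1.0, 1.0], size=(200, d))
y = X[:, 0].copy()

def hinge(theta):
    preds = np.array([net(x + theta) for x in X])
    return np.mean(np.maximum(0.0, 1.0 - y * preds))

theta = np.zeros(d)                          # the adversarial "program": an input shift
lr = 0.1
loss_before = hinge(theta)
for _ in range(500):
    grad = np.zeros(d)
    for x, t in zip(X, y):
        z = W @ (x + theta)
        # subgradient of the hinge loss w.r.t. theta, through active ReLU units
        if (v @ np.maximum(z, 0.0)) * t < 1.0:
            grad -= t * (W.T @ (v * (z > 0.0)))
    theta -= lr * grad / len(X)
loss_after = hinge(theta)

acc = np.mean(np.sign([net(x + theta) for x in X]) == y)
```

Only `theta` is trained; the frozen network's output on shifted inputs is reinterpreted as the new task's prediction, which is the essence of reprogramming.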
Latent Neural Operator for Solving Forward and Inverse PDE Problems
Neural operators, which learn the map from input sequences of observed samples to predicted values, effectively solve PDE problems from data without knowing the explicit equations. Most existing works build the model in the original geometric space, leading to high computational costs when the number of sample points is large. We present the Latent Neural Operator (LNO), which solves PDEs in a latent space. In particular, we first propose Physics-Cross-Attention (PhCA), which transforms representations from the geometric space to the latent space; we then learn the operator in the latent space, and finally recover the real-world geometric space via the inverse PhCA map. Our model retains the flexibility to decode values at any position, not just the locations defined in the training set, and can therefore naturally perform the interpolation and extrapolation tasks that are particularly useful for inverse problems. Moreover, the proposed LNO improves both prediction accuracy and computational efficiency. Experiments show that LNO reduces GPU memory usage by 50%, speeds up training by a factor of 1.8, and reaches state-of-the-art accuracy on four out of six benchmarks for forward problems and on one benchmark for an inverse problem.
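The encode/decode role of PhCA can be sketched with generic single-head cross-attention (the real PhCA and operator layers are more elaborate); all shapes, the random tokens, and the query construction below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(Q, K, V):
    # scaled dot-product attention (single head, no masking)
    s = Q @ K.T / np.sqrt(Q.shape[1])
    w = np.exp(s - s.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V

# N observed points in the geometric space, embedded as (coords, value) tokens
N, d_model, L = 500, 32, 16          # L latent tokens << N sample points
tokens = rng.normal(size=(N, d_model))

# encode: learnable latent queries pool the N geometric tokens into L slots
latent_queries = rng.normal(size=(L, d_model))
z = attention(latent_queries, tokens, tokens)   # (L, d_model)

# ... the operator would act on z here (e.g. transformer layers in latent space) ...

# decode: queries built from ARBITRARY target coordinates, so values can be
# predicted at positions outside the training grid (interpolation/extrapolation)
M = 123                               # any number of query positions
out_queries = rng.normal(size=(M, d_model))
y_pred = attention(out_queries, z, z)           # (M, d_model)
```

The point of the sketch is the cost structure: attention is computed between N (or M) tokens and a fixed small L, rather than among all N points, which is where the memory and speed savings come from.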
Unpaired Image-to-Image Translation with Density Changing Regularization
Unpaired image-to-image translation aims to map an input image to another domain such that the output looks like an image from that domain while important semantic information is preserved. Inferring the optimal mapping from unpaired data is impossible without making assumptions. In this paper, we make a density-changing assumption: image patches of high probability density in one domain should be mapped to patches of high probability density in the other. We then propose an efficient way to enforce this assumption: we train flows as density estimators and penalize the variance of density changes. Despite its simplicity, our method achieves the best performance on benchmark datasets and needs only 56–86% of the training time of the existing state-of-the-art method. The training and evaluation code is available at https://github.com/Mid-Push/
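The regularizer itself is simple to sketch. Assuming the per-patch log-densities have already been produced by flows fitted on each domain (the flows themselves are omitted here), a hypothetical penalty consistent with the description might look like:

```python
import numpy as np

def density_change_penalty(logp_source, logp_target):
    """Variance of the per-patch log-density change.

    logp_source: log-density of each input patch under a flow fitted on the
    source domain; logp_target: log-density of the corresponding translated
    patch under a flow fitted on the target domain. Penalising the variance
    pushes the density change to be roughly constant across patches, so
    high-density patches map to high-density patches.
    """
    change = np.asarray(logp_target) - np.asarray(logp_source)
    return np.var(change)
```

A uniform shift in log-density (every patch's density changes by the same factor) incurs zero penalty, while a translation that scrambles the density ordering is penalised.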
The staircase property: How hierarchical structure can guide deep learning
This paper identifies a structural property of data distributions that enables deep neural networks to learn hierarchically. We define the "staircase" property for functions over the Boolean hypercube, which posits that high-order Fourier coefficients are reachable from lower-order Fourier coefficients along increasing chains. We prove that functions satisfying this property can be learned in polynomial time using layerwise stochastic coordinate descent on regular neural networks, a class of network architectures and initializations with homogeneity properties. Our analysis shows that for such staircase functions and neural networks, the gradient-based algorithm learns high-level features by greedily combining lower-level features along the depth of the network. We further back our theoretical results with experiments showing that staircase functions are learnable by more standard ResNet architectures with stochastic gradient descent. Both the theoretical and experimental results support the view that the staircase property has a role to play in understanding the capabilities of gradient-based learning on regular networks, in contrast to general polynomial-size networks, which can emulate any Statistical Query or PAC algorithm, as recently shown.
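A minimal example of a staircase function is f(x) = x1 + x1x2 + x1x2x3 on the hypercube {-1,1}^3: each monomial extends the previous one by a single variable, so the non-zero Fourier coefficients lie on the increasing chain {1} ⊂ {1,2} ⊂ {1,2,3}. This can be checked numerically; the script below is illustrative only.

```python
import itertools
import numpy as np

n = 3
cube = np.array(list(itertools.product([-1, 1], repeat=n)))

# staircase function: each monomial extends the previous one by one variable
f = cube[:, 0] + cube[:, 0] * cube[:, 1] + cube[:, 0] * cube[:, 1] * cube[:, 2]

def fourier_coeff(S):
    # Fourier coefficient of f on subset S: E[f(x) * chi_S(x)],
    # where chi_S(x) = prod_{i in S} x_i is the parity on S
    chi = np.prod(cube[:, S], axis=1)
    return np.mean(f * chi)

subsets = [(0,), (0, 1), (0, 1, 2), (1,), (2,), (1, 2)]
coeffs = {S: fourier_coeff(list(S)) for S in subsets}
```

The coefficients on the chain {0}, {0,1}, {0,1,2} come out as 1, while those off the chain (e.g. {1} alone) vanish, matching the "reachable along increasing chains" picture.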
New AI tools, classic apps: Microsoft Office 2024 gives you the best of both worlds, now for $90 off
TL;DR: Give yourself a productivity boost with this lifetime license to Microsoft Office 2024 Home and Business for Mac or PC, now just $159.97 (reg. $249.99). We get it, but as adults, we don't get the luxury of months off to have fun in the sun. If you're in desperate need of some motivation at work, let the tried-and-true Microsoft Office apps give you a boost. Right now, a lifetime license to Microsoft Office 2024 Home and Business for Mac or PC can be yours for just $159.97, $90 off the usual $249.99 price tag, through June 1. There's a reason Microsoft Office has stuck around for decades.
Turn your ideas into AI images with no ads and no limits
Even if you're not an AI buff, you've seen the fun people have creating AI-generated images. If you want to expand your imagination and start creating whatever comes to mind -- yes, even NSFW images -- you may want to take advantage of this lifetime subscription to Imagiyo AI Image Generator. Thanks to Imagiyo, you don't need to head back to school for graphic design or animation classes. This AI-powered platform lets you create stunning AI-generated art simply by providing a prompt. Type a few words and your preferred pixel size, then sit back and relax while Imagiyo works its magic.
Trump reverses course on Middle East tech policy, but will it be enough to counter China?
National security and military analyst Dr. Rebecca Grant joins 'Fox & Friends First' to discuss President Donald Trump's historic business-focused trip to the Middle East and why a Trump-Putin meeting could be essential for peace in Ukraine. President Donald Trump secured $2 trillion worth of deals with Saudi Arabia, Qatar and the UAE during his trip to the Middle East last week, in what some have argued is a move to counter China's influence in the region. While China has increasingly bolstered its commercial ties with top Middle Eastern nations, which have remained steadfast in their refusal to pick sides amid growing geopolitical tension between Washington and Beijing, Trump may have taken steps to give the U.S. an edge over its chief competitor. But after Trump reversed a Biden-era policy that banned the sale of AI-capable chips to the UAE and Saudi Arabia, concern has mounted that highly coveted U.S. technologies could fall into the hands of Chinese companies and, by extension, the Chinese Communist Party (CCP). U.S. President Donald Trump walks with Saudi Crown Prince Mohammed Bin Salman during a welcoming ceremony in Riyadh, Saudi Arabia, May 13, 2025.
AI to monitor NYC subway safety as crime concerns rise
Imagine having a tireless guardian watching over you during your subway commute. New York City's subway system is testing artificial intelligence to boost security and reduce crime. Michael Kemper, a 33-year NYPD veteran and the chief security officer for the Metropolitan Transportation Authority (MTA), the largest transit agency in the United States, is leading the rollout of AI software designed to spot suspicious behavior as it happens. The MTA says this technology represents the future of subway surveillance and reassures riders that privacy concerns are being taken seriously.
Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning
Christopher Hoang
Operating in the real world often requires agents to learn about a complex environment and to apply this understanding to achieve a breadth of goals. This problem, known as goal-conditioned reinforcement learning (GCRL), becomes especially challenging for long-horizon goals. Current methods have tackled it by augmenting goal-conditioned policies with graph-based planning algorithms. However, they struggle to scale to large, high-dimensional state spaces and assume access to exploration mechanisms for efficiently collecting training data. In this work, we introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments so as to obtain a policy that is proficient for any goal. SFL leverages the ability of successor features (SF) to capture transition dynamics, using them to drive exploration by estimating state novelty and to enable high-level planning by abstracting the state space as a non-parametric landmark-based graph. We further exploit SF to directly compute a goal-conditioned policy for inter-landmark traversal, which we use to execute plans to "frontier" landmarks at the edge of the explored state space.
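A tabular sketch of the SF machinery (an illustrative toy, not the paper's implementation): successor features computed in closed form on a small chain MDP, with SF distance used both to pick landmarks and to connect them into a graph. The MDP, the thresholds, and the greedy selection rule are all assumptions.

```python
import numpy as np

# minimal tabular sketch: successor features on a 6-state chain MDP
n, gamma = 6, 0.9
phi = np.eye(n)                       # one-hot state features
P = np.zeros((n, n))                  # random-walk transition matrix
for s in range(n):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5

# SF fixed point Psi = phi + gamma * P @ Psi, solved in closed form:
# (I - gamma * P) Psi = phi
Psi = np.linalg.solve(np.eye(n) - gamma * P, phi)

def sf_dist(a, b):
    # distance in SF space; states with similar dynamics are close
    return np.linalg.norm(Psi[a] - Psi[b])

# greedily pick landmarks that are far (in SF space) from existing ones
landmarks = [0]
for s in range(1, n):
    if min(sf_dist(s, l) for l in landmarks) > 1.0:
        landmarks.append(s)

# landmark graph: connect landmark pairs whose SFs are close enough
edges = [(a, b) for i, a in enumerate(landmarks)
         for b in landmarks[i + 1:] if sf_dist(a, b) < 3.0]
```

The same SF distance serves double duty, mirroring the abstract: dissimilarity from known landmarks acts as a novelty signal for exploration, and pairwise similarity defines the non-parametric graph used for high-level planning.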