Trump reverses course on Middle East tech policy, but will it be enough to counter China?
President Donald Trump secured $2 trillion worth of deals with Saudi Arabia, Qatar and the UAE during his trip to the Middle East last week, in what some have argued is a move to counter China's influence in the region. While China has increasingly bolstered its commercial ties with top Middle Eastern nations, which have remained steadfast in their refusal to pick sides amid growing geopolitical tension between Washington and Beijing, Trump may have taken steps to give the U.S. an edge over its chief competitor. But after Trump reversed a Biden-era policy that banned the sale of AI-capable chips to the UAE and Saudi Arabia, concern has mounted that highly coveted U.S. technologies could fall into the hands of Chinese companies and, by extension, the Chinese Communist Party (CCP).
AI to monitor NYC subway safety as crime concerns rise
Imagine having a tireless guardian watching over you during your subway commute. New York City's subway system is testing artificial intelligence to boost security and reduce crime. Michael Kemper, a 33-year NYPD veteran and the chief security officer for the Metropolitan Transportation Authority (MTA), the largest transit agency in the United States, is leading the rollout of AI software designed to spot suspicious behavior as it happens. The MTA says the technology represents the future of subway surveillance and reassures riders that privacy concerns are being taken seriously.
Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning
Christopher Hoang
Operating in the real world often requires agents to learn about a complex environment and apply this understanding to achieve a breadth of goals. This problem, known as goal-conditioned reinforcement learning (GCRL), becomes especially challenging for long-horizon goals. Current methods have tackled this problem by augmenting goal-conditioned policies with graph-based planning algorithms. However, they struggle to scale to large, high-dimensional state spaces and assume access to exploration mechanisms for efficiently collecting training data. In this work, we introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments so as to obtain a policy that is proficient for any goal. SFL leverages the ability of successor features (SF) to capture transition dynamics, using them to drive exploration by estimating state novelty and to enable high-level planning by abstracting the state space as a non-parametric landmark-based graph. We further exploit SF to directly compute a goal-conditioned policy for inter-landmark traversal, which we use to execute plans to "frontier" landmarks at the edge of the explored state space.
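As a concrete illustration of the successor-feature machinery the abstract describes, below is a minimal sketch of tabular SF learned by TD updates, with a simple visitation-based novelty score used to pick a "frontier" state. All names (n_states, phi, psi, novelty) and the random-walk environment are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: tabular successor features (SF) learned by TD updates,
# plus a novelty score for frontier selection. Illustrative only.
import numpy as np

n_states, gamma, alpha = 25, 0.95, 0.1
phi = np.eye(n_states)                 # one-hot state features
psi = np.zeros((n_states, n_states))   # SF: expected discounted feature occupancy

def td_update(s, s_next):
    """One-step TD: psi(s) <- psi(s) + alpha * (phi(s) + gamma*psi(s') - psi(s))."""
    target = phi[s] + gamma * psi[s_next]
    psi[s] += alpha * (target - psi[s])

def novelty(s):
    """Low accumulated feature occupancy ~ rarely visited ~ novel."""
    return 1.0 / (1.0 + psi[:, s].sum())

# Random-walk rollout over a toy 25-state chain (grid walls ignored for brevity).
rng = np.random.default_rng(0)
s = 0
for _ in range(10_000):
    s_next = s + rng.choice([-1, 1, -5, 5])
    if 0 <= s_next < n_states:
        td_update(s, s_next)
        s = s_next

frontier = int(np.argmax([novelty(i) for i in range(n_states)]))
print("most novel (frontier) state:", frontier)
```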
Generalization Error Analysis of Quantized Compressive Learning
In this paper, we consider the learning problem where the projected data is further compressed by scalar quantization, which we call quantized compressive learning. Generalization error bounds are derived for three models: the nearest neighbor (NN) classifier, the linear classifier, and least squares regression. Besides studying the finite-sample setting, our asymptotic analysis shows that the inner product estimators have a deep connection with the NN and linear classification problems through the variance of their debiased counterparts. By analyzing the extra error term introduced by quantization, our results provide useful guidance for the choice of quantizers in applications involving different learning tasks. An empirical study is also conducted to validate our theoretical findings.
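To make the setting concrete, here is a hedged sketch of the quantized compressive learning pipeline the abstract analyzes: a JL-style random projection followed by a uniform scalar quantizer, with a 1-NN classifier run on the quantized codes. The dimensions, the step size delta, and the toy labels are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: random projection + uniform scalar quantization,
# then 1-NN classification on the quantized codes.
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 200, 50, 16            # samples, ambient dim, projected dim
delta = 0.5                      # quantizer step size (illustrative)

X = rng.standard_normal((n, d))
y = (X[:, 0] > 0).astype(int)    # toy labels

A = rng.standard_normal((m, d)) / np.sqrt(m)   # JL-style random projection

def quantize(v, step):
    """Uniform scalar quantizer applied elementwise to the projection."""
    return step * np.round(v / step)

Z = quantize(X @ A.T, delta)     # compressed + quantized representation

def nn_predict(i):
    """Leave-one-out 1-NN prediction in the quantized domain."""
    dists = np.linalg.norm(Z - Z[i], axis=1)
    dists[i] = np.inf
    return y[np.argmin(dists)]

acc = np.mean([nn_predict(i) == y[i] for i in range(n)])
print(f"1-NN accuracy on quantized projections: {acc:.2f}")
```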
Large Scale Adversarial Representation Learning
Adversarially trained generative models (GANs) have recently achieved compelling image synthesis results. But despite early successes in using GANs for unsupervised representation learning, they have since been superseded by approaches based on self-supervision. In this work we show that progress in image generation quality translates to substantially improved representation learning performance. Our approach, BigBiGAN, builds upon the state-of-the-art BigGAN model, extending it to representation learning by adding an encoder and modifying the discriminator. We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as in unconditional image generation.
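For orientation, below is a minimal BiGAN-style skeleton of the structure BigBiGAN extends: a generator, an encoder, and a discriminator that scores joint (x, z) pairs, so that (x, E(x)) and (G(z), z) play the real/fake roles. This is a toy sketch assuming PyTorch with fully connected layers; the layer sizes and training loop are illustrative, not the BigGAN-scale architecture the paper uses.

```python
# Toy BiGAN-style skeleton: generator G, encoder E, joint (x, z) discriminator D.
import torch
import torch.nn as nn

z_dim, x_dim, h = 64, 784, 256

G = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))      # z -> x
E = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, z_dim))      # x -> z
D = nn.Sequential(nn.Linear(x_dim + z_dim, h), nn.ReLU(), nn.Linear(h, 1))  # joint critic

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)

def step(x_real):
    b = x_real.size(0)
    z = torch.randn(b, z_dim)

    # Discriminator sees joint pairs: (x, E(x)) labeled real, (G(z), z) labeled fake.
    real_pair = torch.cat([x_real, E(x_real)], dim=1)
    fake_pair = torch.cat([G(z), z], dim=1)
    d_loss = bce(D(real_pair.detach()), torch.ones(b, 1)) + \
             bce(D(fake_pair.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator and encoder jointly try to flip the discriminator's labels.
    ge_loss = bce(D(torch.cat([x_real, E(x_real)], dim=1)), torch.zeros(b, 1)) + \
              bce(D(torch.cat([G(z), z], dim=1)), torch.ones(b, 1))
    opt_ge.zero_grad(); ge_loss.backward(); opt_ge.step()
    return d_loss.item(), ge_loss.item()

print(step(torch.randn(8, x_dim)))
```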
Inside OpenAI's Empire
OpenAI started as a non-profit dedicated to building safe A.I. Now, they're obsessed with building artificial general intelligence by any means necessary, even if they don't quite know what that is.
Masked Pre-training Enables Universal Zero-shot Denoiser
Yi Jin
In this work, we observe that a model trained on vast general images via a masking strategy naturally embeds knowledge of their distribution and thus spontaneously attains the underlying potential for strong image denoising. Based on this observation, we propose a novel zero-shot denoising paradigm, i.e., Masked Pre-train then Iterative fill (MPI). MPI first trains a model via masking and then employs the pre-trained weights for high-quality zero-shot image denoising on a single noisy image. Concretely, MPI comprises two key procedures: 1) Masked Pre-training, which trains the model to reconstruct massive natural images under random masking to learn generalizable representations, gathering the potential for valid zero-shot denoising on images with varying noise degradation and even of distinct image types.
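As a rough illustration of the masked-reconstruction idea behind MPI, the sketch below trains a tiny network to inpaint randomly masked pixels and then averages several iterative fills of one noisy image. Note that this collapses the paper's large-scale pre-training on natural images into a single-image demo, and the architecture and mask_ratio are illustrative assumptions.

```python
# Hedged sketch of masked reconstruction + iterative fill on one noisy image.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mask_ratio = 0.5                           # fraction of pixels hidden per step

noisy = torch.rand(1, 1, 32, 32)           # stand-in for a single noisy image

for _ in range(200):                       # masked training (toy stand-in for pre-training)
    mask = (torch.rand_like(noisy) > mask_ratio).float()
    pred = net(noisy * mask)
    # Loss only on the masked-out pixels the network had to fill in.
    loss = (((pred - noisy) * (1 - mask)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                      # iterative fill: average predictions over masks
    denoised = torch.stack([
        net(noisy * (torch.rand_like(noisy) > mask_ratio).float())
        for _ in range(16)
    ]).mean(0)
print(denoised.shape)
```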