5 AI terms you keep hearing and what they actually mean

FOX News

Tyler Saltsman, founder and CEO of EdgeRunner AI, warns that creating artificial general intelligence could "destroy the world as we know it." Whether it's powering your phone's autocorrect or helping someone create a new recipe with a few words, artificial intelligence (AI) is everywhere right now. But if you're still nodding along when someone mentions "neural networks" or "generative AI," you're not alone. Today I am breaking down five buzzy AI terms that you've probably seen in headlines, group chats or app updates, minus the tech talk. Understanding these basics will help you talk AI with confidence, even if you're not a programmer.


Global Convergence to Local Minmax Equilibrium in Classes of Nonconvex Zero-Sum Games

Neural Information Processing Systems

We study gradient descent-ascent learning dynamics with timescale separation (τ-GDA) in unconstrained continuous action zero-sum games where the minimizing player faces a nonconvex optimization problem and the maximizing player optimizes a Polyak-Łojasiewicz (PŁ) or strongly-concave (SC) objective. In contrast to past work on gradient-based learning in nonconvex-PŁ/SC zero-sum games, we assess convergence in relation to natural game-theoretic equilibria instead of only notions of stationarity. In pursuit of this goal, we prove that the only locally stable points of the τ-GDA continuous-time limiting system correspond to strict local minmax equilibria in each class of games. For these classes of games, we exploit timescale separation to construct a potential function that, when combined with the stability characterization and an asymptotic saddle avoidance result, gives a global asymptotic almost-sure convergence guarantee for the discrete-time gradient descent-ascent update to the set of strict local minmax equilibria. Moreover, we provide convergence rates for the gradient descent-ascent dynamics with timescale separation to approximate stationary points.
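The timescale-separated update the abstract analyzes can be illustrated on a toy game. The sketch below is a hypothetical example, not the paper's experiments: the objective f(x, y) = 0.5x² + xy − 0.5y² is strongly concave in y (so it falls in the SC class), the min player takes slow gradient descent steps, and the max player ascends with a stepsize scaled by τ.

```python
import numpy as np

# Toy tau-GDA sketch on f(x, y) = 0.5*x**2 + x*y - 0.5*y**2
# (hypothetical objective; strongly concave in y).
def grad_x(x, y):
    return x + y          # df/dx

def grad_y(x, y):
    return x - y          # df/dy

def tau_gda(x0, y0, eta=0.01, tau=10.0, steps=5000):
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - eta * gx          # slow descent for the min player
        y = y + tau * eta * gy    # fast ascent for the max player
    return x, y

x, y = tau_gda(1.0, 1.0)
# The iterates approach (0, 0), the strict local minmax point of this game.
```

For this quadratic game the inner maximizer is y*(x) = x, giving the envelope f(x, y*(x)) = x², so (0, 0) is indeed the minmax equilibrium the dynamics should find.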


David Bertoin

Neural Information Processing Systems

Deep reinforcement learning policies, despite their outstanding efficiency in simulated visual control tasks, have shown disappointing ability to generalize across disturbances in the input training images. Changes in image statistics or distracting background elements are pitfalls that prevent generalization and real-world applicability of such control policies. We elaborate on the intuition that a good visual policy should be able to identify which pixels are important for its decision, and preserve this identification of important sources of information across images. This implies that training a policy with a small generalization gap should focus on such important pixels and ignore the others. This leads to the introduction of saliency-guided Q-networks (SGQN), a generic method for visual reinforcement learning that is compatible with any value function learning method. SGQN vastly improves the generalization capability of Soft Actor-Critic agents and outperforms existing state-of-the-art methods on the DeepMind Control Generalization benchmark, setting a new reference in terms of training efficiency, generalization gap, and policy interpretability.
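The core intuition — score each input dimension by how much the Q-value depends on it, then keep only the most influential ones — can be sketched in a few lines. This is a simplified illustration of the saliency idea, not the SGQN training procedure: the "Q-network" here is a stand-in linear function, and the saliency is a finite-difference gradient of Q with respect to each input.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16,))           # stand-in linear "Q-network": Q(s) = W @ s

def q_value(s):
    return W @ s

def saliency(s, eps=1e-4):
    # finite-difference gradient |dQ/ds_i| for each input dimension
    g = np.zeros_like(s)
    for i in range(s.size):
        e = np.zeros_like(s)
        e[i] = eps
        g[i] = (q_value(s + e) - q_value(s - e)) / (2 * eps)
    return np.abs(g)

def saliency_mask(s, k=4):
    # binary mask keeping only the k most decision-relevant inputs
    sal = saliency(s)
    idx = np.argsort(sal)[-k:]
    mask = np.zeros_like(s)
    mask[idx] = 1.0
    return mask

s = rng.normal(size=(16,))
mask = saliency_mask(s)
```

For a linear Q the saliency reduces to |W| regardless of the state; with a deep Q-network the same gradient-based score varies per input, which is what a policy with a small generalization gap should preserve across visually perturbed observations.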




Semi-Parametric Dynamic Contextual Pricing

Neural Information Processing Systems

Motivated by the application of real-time pricing in e-commerce platforms, we consider the problem of revenue maximization in a setting where the seller can leverage contextual information describing the customer's history and the product's type to predict her valuation of the product. However, her true valuation is unobservable to the seller; only a binary outcome, the success or failure of a transaction, is observed. Unlike in usual contextual bandit settings, the optimal price/arm given a covariate in our setting is sensitive to the detailed characteristics of the residual uncertainty distribution. We develop a semi-parametric model in which the residual distribution is non-parametric and provide the first algorithm which learns both regression parameters and residual distribution with Õ(√n) regret. We empirically test a scalable implementation of our algorithm and observe good performance.
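The feedback model the abstract describes can be simulated in a few lines. The sketch below is a hypothetical instance, not the paper's algorithm: the valuation is a linear function of the covariates plus an unobserved residual (here logistic, unknown to the seller in the actual setting), and the seller observes only the success/failure bit of posting a price.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([0.5, 1.0])        # assumed regression parameters

def transaction(x, price):
    # Customer valuation v = theta @ x + eps is never revealed;
    # the seller observes only whether the sale succeeded.
    eps = rng.logistic()            # residual draw; its distribution is
                                    # what the semi-parametric model learns
    valuation = theta @ x + eps
    return int(price <= valuation)

x = np.array([1.0, 2.0])
outcomes = [transaction(x, 2.0) for _ in range(2000)]
sale_rate = np.mean(outcomes)
```

Here theta @ x = 2.5, so a price of 2.0 sells whenever eps ≥ −0.5, i.e. with probability 1/(1 + e^(−0.5)) ≈ 0.62; estimating that curve from binary outcomes alone, without assuming the logistic form, is exactly the non-parametric part of the problem.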


Localize, Understand, Collaborate: Semantic-Aware Dragging via Intention Reasoner

Neural Information Processing Systems

Flexible and accurate drag-based editing is a challenging task that has recently garnered significant attention. Current methods typically model this problem as automatically learning "how to drag" through point dragging and often produce one deterministic estimation, which presents two key limitations: 1) overlooking the inherently ill-posed nature of drag-based editing, where multiple results may correspond to a given input, as illustrated in Figure 1; 2) ignoring the constraint of image quality, which may lead to unexpected distortion. To alleviate this, we propose LucidDrag, which shifts the focus from a "how to drag" to a "what-then-how" paradigm. LucidDrag comprises an intention reasoner and a collaborative guidance sampling mechanism. The former infers several optimal editing strategies, identifying what content to edit and in what semantic direction. Based on the former, the latter addresses "how to drag" by collaboratively integrating existing editing guidance with the newly proposed semantic guidance and quality guidance. Specifically, semantic guidance is derived by establishing a semantic editing direction based on reasoned intentions, while quality guidance is achieved through classifier guidance using an image fidelity discriminator. Both qualitative and quantitative comparisons demonstrate the superiority of LucidDrag over previous methods.
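The collaborative guidance step can be sketched as a weighted combination of the three gradient signals the abstract names. This is a schematic illustration under assumed fixed weights, not the paper's actual sampler: the gradient terms and weight names are hypothetical placeholders.

```python
import numpy as np

# Hypothetical sketch of collaborative guidance sampling: at each denoising
# step, combine the drag-editing, semantic, and quality guidance gradients
# into one steering direction. Weights are illustrative assumptions.
def collaborative_guidance(g_edit, g_semantic, g_quality,
                           w_edit=1.0, w_sem=0.5, w_qual=0.25):
    # weighted sum steers the sample toward the intended edit while
    # respecting the reasoned semantic direction and image fidelity
    return w_edit * g_edit + w_sem * g_semantic + w_qual * g_quality

g = collaborative_guidance(np.ones(3), np.ones(3), np.ones(3))
```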


Save over $100 on Sony XM4 headphones ahead of Memorial Day

Mashable

SAVE $120: As of May 23, Sony WH-1000XM4 headphones are on sale for $228 at Amazon. If you're looking for a seriously high-quality pair of headphones, you won't want to miss this great deal on Sony XM4s. With premium noise cancellation, stellar sound quality, and Alexa voice control, these headphones are next level. And as of May 23, you can get them for less. At Amazon, they are currently on sale for $228, saving you $120 off the list price.


Forget Cocomelon--this kids' app won't rot their brains

Popular Science

If your child loves their tablet, but you struggle with finding appropriate games, try Pok Pok, a learning app for kids aged 2-8 that doesn't feel like learning. It features a collection of calming, open-ended digital toys that help children explore STEM, problem-solving, creativity, and more without ads, in-app purchases, or overstimulation. Built by parents in collaboration with early childhood experts, Pok Pok offers a Montessori-inspired experience that supports healthy screen time and lifelong learning. Kids using Pok Pok build foundational skills in STEM, problem-solving, language, numbers, cause and effect, and emotional development. Each game is open-ended, so there's no "winning" or "losing."