The best live Memorial Day mattress deals in 2025: Shop Nectar, Brooklyn Bedding, Purple, and more

Mashable

Just a few weeks left in the school year, warmer temperatures, and weekend barbecues on the calendar mean we've made it out of winter's hibernation. But that doesn't mean sleep should get put on the back burner. Sleep is one of life's basic pillars, and it impacts our mood, health, brain function, and much more. If you've ever had a terrible month of sleep, you know how detrimental a sleep deficit can be to pretty much every aspect of your waking hours. Instead of putting the milk in the cupboard on account of a sleepy brain, prioritize sleep this summer by snagging a luxurious new mattress while it's on sale.


50 of the best Memorial Day deals and sales already live: Mattresses, headphones, outdoor furniture, and more

Mashable

Somehow, we've already reached the unofficial start of summer: the Memorial Day 2025 deals are here. Though Memorial Day isn't technically until May 26, plenty of brands kicked off their sales early. Leading the way are mattress deals, followed by home and kitchen deals. Below, we've gathered all the best deals so far ahead of Memorial Day, and will be adding to this list as more deals go live.



Exploiting Domain-Specific Features to Enhance Domain Generalization

Neural Information Processing Systems

Domain Generalization (DG) aims to train a model on multiple observed source domains so that it performs well on unseen target domains. To obtain this generalization capability, prior DG approaches have focused on extracting domain-invariant information across sources, while useful domain-specific information, which strongly correlates with labels in individual domains and with generalization to target domains, is usually ignored. In this paper, we propose meta-Domain Specific-Domain Invariant (mDSDI), a novel, theoretically sound framework that extends beyond the invariance view to further capture the usefulness of domain-specific information. Our key insight is to disentangle features in the latent space while jointly learning both domain-invariant and domain-specific features in a unified framework. The domain-specific representation is optimized through a meta-learning framework to adapt from source domains, targeting robust generalization on unseen domains. We empirically show that mDSDI provides competitive results with state-of-the-art techniques in DG. A further ablation study with our generated dataset, Background-Colored-MNIST, confirms the hypothesis that domain-specific information is essential, leading to better results than using domain-invariant features alone.
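
The abstract describes a joint objective over disentangled domain-invariant and domain-specific features. Below is a minimal sketch of that idea, assuming a simple two-encoder architecture with a label classifier on the fused features and a domain classifier supervising the specific branch; the module names, loss weighting, and decorrelation penalty are illustrative assumptions, not the authors' implementation, and the sketch omits the meta-learning step mDSDI applies to the domain-specific branch.

    # Illustrative sketch only: hypothetical two-branch encoder, not the mDSDI code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DisentangledFeatures(nn.Module):
        def __init__(self, in_dim, feat_dim, num_classes, num_domains):
            super().__init__()
            # One encoder for domain-invariant features, one for domain-specific features.
            self.invariant_enc = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
            self.specific_enc = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
            self.classifier = nn.Linear(2 * feat_dim, num_classes)  # label head uses both parts
            self.domain_head = nn.Linear(feat_dim, num_domains)     # pushes domain info into the specific part

        def forward(self, x):
            zi, zs = self.invariant_enc(x), self.specific_enc(x)
            return self.classifier(torch.cat([zi, zs], dim=1)), self.domain_head(zs), zi, zs

    def joint_loss(model, x, y, d, lam=0.1):
        # Label loss on the fused features, plus a domain-classification loss on the
        # specific branch, plus a crude decorrelation penalty that discourages the
        # two branches from encoding the same information.
        logits, dom_logits, zi, zs = model(x)
        return (F.cross_entropy(logits, y)
                + F.cross_entropy(dom_logits, d)
                + lam * (zi * zs).mean().abs())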


American tennis star Danielle Collins accuses cameraman of 'wildly inappropriate' behavior

FOX News

American tennis player Danielle Collins had some choice words for the cameraman during her Internationaux de Strasbourg match against Emma Raducanu on Wednesday afternoon. Collins was in the middle of a changeover when she felt the cameraman's hovering was a bit too close for comfort in the middle of the third and deciding set. She got off the bench and made the point clear.

Photo caption: Danielle Collins celebrates during her match against Madison Keys in the third round of the women's singles at the 2025 Australian Open at Melbourne Park in Melbourne, Australia, on Jan. 18, 2025.


Nonlinear dynamics of localization in neural receptive fields

Neural Information Processing Systems

Localized receptive fields--neurons that are selective for certain contiguous spatiotemporal features of their input--populate early sensory regions of the mammalian brain. Unsupervised learning algorithms that optimize explicit sparsity or independence criteria replicate features of these localized receptive fields, but fail to explain directly how localization arises through learning without efficient coding, as occurs in early layers of deep neural networks and might occur in early sensory regions of biological systems. We consider an alternative model in which localized receptive fields emerge without explicit top-down efficiency constraints--a feedforward neural network trained on a data model inspired by the structure of natural images. Previous work identified the importance of non-Gaussian statistics to localization in this setting but left open questions about the mechanisms driving dynamical emergence. We address these questions by deriving the effective learning dynamics for a single nonlinear neuron, making precise how higher-order statistical properties of the input data drive emergent localization, and we demonstrate that the predictions of these effective dynamics extend to the many-neuron setting. Our analysis provides an alternative explanation for the ubiquity of localization as resulting from the nonlinear dynamics of learning in neural circuits.
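
As background for the data-driven mechanism the abstract describes, here is a small numerical illustration, under assumptions of my own (a toy patch-based data model, a kurtosis-based projection pursuit update, and an inverse-participation-ratio measure), of why inputs with sparse, contiguous structure favor localized filters. One caveat: this sketch uses an explicit non-Gaussianity criterion, the classical route the paper contrasts with, whereas the paper's point is that localization emerges from the nonlinear gradient dynamics of an ordinary feedforward network; nothing below is the paper's construction.

    # Toy illustration (assumed data model, not the paper's): sparse contiguous
    # patches give the inputs non-Gaussian higher-order statistics, and the most
    # non-Gaussian linear projection of such data tends to be spatially localized.
    import numpy as np

    rng = np.random.default_rng(0)
    D, N, width = 64, 20000, 8

    # Each sample: weak Gaussian background plus one contiguous patch with a
    # heavy-tailed (exponential) amplitude at a random position.
    X = 0.05 * rng.standard_normal((N, D))
    starts = rng.integers(0, D - width, size=N)
    X[np.arange(N)[:, None], starts[:, None] + np.arange(width)] += rng.exponential(1.0, size=(N, 1))

    # Whiten so second-order correlations cannot explain what the projection finds.
    X -= X.mean(axis=0)
    evals, evecs = np.linalg.eigh(X.T @ X / N)
    Wh = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-8)) @ evecs.T
    Xw = X @ Wh

    # One-unit, kurtosis-based FastICA fixed point: the most non-Gaussian projection.
    w = rng.standard_normal(D)
    w /= np.linalg.norm(w)
    for _ in range(100):
        a = Xw @ w
        w_new = (Xw * (a ** 3)[:, None]).mean(axis=0) - 3.0 * w
        w = w_new / np.linalg.norm(w_new)

    # The corresponding input-space filter, with localization quantified by the
    # inverse participation ratio: roughly 1/width if concentrated on one patch,
    # roughly 1/D if spread over the whole input.
    rf = Wh @ w
    p = rf ** 2 / np.sum(rf ** 2)
    print("inverse participation ratio:", float(np.sum(p ** 2)))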


A Proof of Lemma

Neural Information Processing Systems

According to [5], G has a closed graph and compact values. Furthermore, it holds that G(t) ⊆ D(t) for all t ∈ ℝ. Adopting the terminology from [5], D is conservative for the ReLU function, which implies that G is conservative for the ReLU function as well [5, Remark 3(e)]. Indeed, the Clarke subdifferential is the convex hull of limits of sequences of gradients. For the Lipschitz constant, we want the maximum-norm element, which is necessarily attained at a corner of the convex hull; therefore, for our purposes it suffices to consider sequences of gradients. Since the ReLU network is almost-everywhere differentiable, we can consider a shrinking sequence of balls around any point and obtain gradients arbitrarily close to any corner of the subdifferential at that point. The norms of these gradients therefore converge to the corner's norm, so it suffices to optimize over differentiable points, and the value chosen at points of nondifferentiability does not matter.
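
A quick numerical check of that last claim, on a small ReLU network with hypothetical random weights (not one taken from the paper): the largest gradient norm found at randomly sampled, almost surely differentiable points upper-bounds every finite-difference slope, which is exactly why restricting attention to differentiable points loses nothing when estimating the Lipschitz constant.

    # Sanity check: for a piecewise-linear ReLU network, slopes |f(x)-f(y)|/||x-y||
    # never exceed the best gradient norm seen at differentiable points.
    # The two-layer weights below are arbitrary placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((16, 4))
    w2 = rng.standard_normal(16)

    def f(x):
        return w2 @ np.maximum(W1 @ x, 0.0)

    def grad(x):
        # Valid wherever no preactivation is exactly zero, i.e. almost everywhere.
        return (w2 * (W1 @ x > 0)) @ W1

    points = rng.uniform(-1.0, 1.0, size=(20000, 4))
    max_grad_norm = max(np.linalg.norm(grad(x)) for x in points)

    pairs = rng.uniform(-1.0, 1.0, size=(20000, 2, 4))
    max_slope = max(abs(f(a) - f(b)) / np.linalg.norm(a - b) for a, b in pairs)

    print("max gradient norm at sampled differentiable points:", round(max_grad_norm, 4))
    print("max finite-difference slope over sampled pairs:    ", round(max_slope, 4))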


Semialgebraic Optimization for Lipschitz Constants of ReLU Networks

Neural Information Processing Systems

The Lipschitz constant of a network plays an important role in many applications of deep learning, such as robustness certification and Wasserstein Generative Adversarial Networks. We introduce a semidefinite programming hierarchy to estimate the global and local Lipschitz constants of multi-layer deep neural networks. The novelty is to combine a polynomial lifting for the ReLU function's derivative with a weak generalization of Putinar's positivity certificate. This idea could also apply to other nearly sparse polynomial optimization problems in machine learning. We empirically demonstrate that our method offers a trade-off with respect to the state-of-the-art linear programming approach, and in some cases we obtain better bounds in less time.
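
The SDP hierarchy itself is beyond a short sketch, but the quantity it bounds is easy to bracket. Here is a hedged illustration, with placeholder random weights rather than a network from the paper, of two baselines such certificates sit between: the product of layer spectral norms (a cheap global upper bound, valid because ReLU is 1-Lipschitz) and the largest sampled gradient norm (an empirical lower bound).

    # Bracketing the Lipschitz constant of a toy two-layer ReLU network.
    # Tighter certificates, such as the LP and SDP approaches the abstract
    # discusses, fall between these two numbers.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((32, 8))
    W2 = rng.standard_normal((1, 32))

    # Upper bound: Lip(f) <= ||W2||_2 * ||W1||_2 since ReLU is 1-Lipschitz.
    upper = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)

    # Lower bound: gradient norms at sampled (almost surely differentiable) points.
    def grad(x):
        pattern = (W1 @ x > 0).astype(float)
        return (W2 * pattern) @ W1

    xs = rng.standard_normal((50000, 8))
    lower = max(np.linalg.norm(grad(x)) for x in xs)

    print("spectral-norm product (upper bound):     ", round(float(upper), 3))
    print("best sampled gradient norm (lower bound):", round(float(lower), 3))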


The Download: the desert data center boom, and how to measure Earth's elevations

MIT Technology Review

In the high desert east of Reno, Nevada, construction crews are flattening the golden foothills of the Virginia Range, laying the foundations of a data center city. Google, Tract, Switch, EdgeCore, Novva, Vantage, and PowerHouse are all operating, building, or expanding huge facilities nearby. Meanwhile, Microsoft has acquired more than 225 acres of undeveloped property, and Apple is expanding its existing data center just across the Truckee River from the industrial park. The corporate race to amass computing resources to train and run artificial intelligence models and store information in the cloud has sparked a data center boom in the desert--and it's just far enough away from Nevada's communities to elude wide notice and, some fear, adequate scrutiny. This story is part of Power Hungry: AI and our energy future--our new series shining a light on the energy demands and carbon costs of the artificial intelligence revolution.


Improved Coresets and Sublinear Algorithms for Power Means in Euclidean Spaces, by Vincent Cohen-Addad, David Saulpic, and Chris Schwiegelshohn

Neural Information Processing Systems

Special cases of this problem include the well-known Fermat-Weber problem, or geometric median problem, where z = 1; the mean or centroid, where z = 2; and the Minimum Enclosing Ball problem, where z = ∞. We consider these problems in the big data regime. Here, we are interested in sampling as few points as possible such that we can accurately estimate m. More specifically, we consider sublinear algorithms as well as coresets for these problems. Sublinear algorithms have random query access to the set A, and the goal is to minimize the number of queries.
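
To make the sublinear setting concrete, here is a minimal sketch, under my own assumptions, of the naive uniform-sampling estimator for the power-mean cost of a candidate center: each query reads one point of A, and the rescaled sample mean is an unbiased estimate of the full objective. This is only the baseline idea; the paper's algorithms and sample-size guarantees differ.

    # Naive sublinear estimate of sum_{a in A} ||a - c||^z from a few queries to A.
    # Placeholder data; not the paper's algorithm or analysis.
    import numpy as np

    def power_mean_cost(A, c, z):
        return np.sum(np.linalg.norm(A - c, axis=1) ** z)

    def sampled_cost(A, c, z, num_queries, rng):
        idx = rng.integers(0, len(A), size=num_queries)  # random query access to A
        return len(A) * np.mean(np.linalg.norm(A[idx] - c, axis=1) ** z)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100_000, 3))   # the point set
    c = A.mean(axis=0)                      # candidate center (the centroid, z = 2)

    for z in (1, 2):                        # z = 1: Fermat-Weber, z = 2: mean
        exact = power_mean_cost(A, c, z)
        approx = sampled_cost(A, c, z, num_queries=2000, rng=rng)
        print(f"z={z}: exact cost {exact:.1f}, estimate from 2000 queries {approx:.1f}")

For z = ∞ (the Minimum Enclosing Ball case), a uniform sample can miss the farthest point entirely, which is one reason the different exponents call for different sampling techniques and coreset constructions.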