Goto

Collaborating Authors

Qualcomm's 2025 Computex Highlights: Everything Announced in 20 Minutes

Mashable

Watch all the highlights and reveals from Qualcomm's press conference at Computex 2025 in Taipei, Taiwan.


Amid technical glitches, California's e-bike incentive program promises to be ready for new applicants

Los Angeles Times

A surge of applicants vying for a chance to be chosen for a voucher worth up to $2,000 under the California E-Bike Incentive Program triggered an error in the program's website, blocking everyone from applying. Officials say they've fixed the glitch ahead of next week's round of applications. The California E-Bike Incentive Program, launched by the California Air Resources Board, was established to help lower cost barriers to alternative methods of transportation such as e-bikes, with the goal of getting cars off the road and reducing greenhouse gas emissions. Eligible residents must be 18 years or older with an annual household income of less than 300% of the Federal Poverty Level. The vouchers can be used toward the purchase of an electric bike.


A Intervention stable sets, plausible causal predictors, and informative interventions

Neural Information Processing Systems

A.1 Intervention stable sets. A set of predictors S is an intervention stable set if it d-separates the response Y from all interventions I, i.e., if the d-separation statement (Y ⊥ I | S) holds in the underlying graph. An example follows in Example A.1.

A.2 Stable sets vs. plausible causal predictors. While S … Example A.2 takes the following SCM. In the example, this only happens when we set the weights, means, and variances to very particular values. However, it is not a necessary condition, as is shown in the following example. To the best of our knowledge, it is not clear when situations like the above arise, or how they can be detected from the accepted sets. Therefore, as a first approach we consider direct interventions on the parents as "maximally informative", and the goal of the proposed policies is to pick such interventions. Here we present a slightly adapted version of Invariant Causal Prediction [27].
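To make the d-separation condition concrete, here is a minimal sketch (ours, not the paper's) that checks candidate sets S on a toy DAG using networkx; the graph, variable names, and the is_d_separator call (named d_separated in networkx releases before 3.3) are illustrative assumptions.

```python
# Minimal sketch: test which candidate sets S d-separate the response Y
# from an intervention variable I in a toy DAG (assumed graph, not the paper's).
import networkx as nx

# Hypothetical SCM graph: I -> X1 -> Y, with X1 -> X2 and Y -> X3.
G = nx.DiGraph([("I", "X1"), ("X1", "Y"), ("X1", "X2"), ("Y", "X3")])

for S in [set(), {"X1"}, {"X2"}, {"X3"}]:
    # requires networkx >= 3.3; older versions expose nx.d_separated instead
    print(sorted(S), "intervention stable:", nx.is_d_separator(G, {"I"}, {"Y"}, S))
# Only S = {"X1"} blocks the single directed path from I to Y.
```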


Everything Announced at AMD's 2025 Computex Keynote in 19 Minutes

Mashable

Watch highlights from AMD's press conference at Computex 2025.


Joint inference and input optimization in equilibrium networks. Shaojie Bai (Carnegie Mellon University), J. Zico Kolter (Carnegie Mellon University)

Neural Information Processing Systems

Many tasks in deep learning involve optimizing over the inputs to a network to minimize or maximize some objective; examples include optimization over latent spaces in a generative model to match a target image, or adversarially perturbing an input to worsen classifier performance. Performing such optimization, however, is traditionally quite costly, as it involves a complete forward and backward pass through the network for each gradient step. In a separate line of work, recent research has developed the deep equilibrium (DEQ) model, a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer. In this paper, we show that there is a natural synergy between these two settings. Although naively using DEQs for these optimization problems is expensive (owing to the time needed to compute a fixed point for each gradient step), we can leverage the fact that gradient-based optimization can itself be cast as a fixed-point iteration to substantially improve the overall speed. That is, we simultaneously solve for the DEQ fixed point and optimize over network inputs, all within a single "augmented" DEQ model that jointly encodes both the original network and the optimization process. Indeed, the procedure is fast enough that it allows us to efficiently train DEQ models for tasks traditionally relying on an "inner" optimization loop. We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training, and gradient-based meta-learning.
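To illustrate the augmented fixed-point idea, here is a minimal numpy sketch, not the authors' implementation: a toy DEQ layer z = tanh(Wz + Ux + b) whose state update is interleaved with a crude one-step gradient update on the input x; the layer, the matching objective, the step size, and the one-step gradient approximation are all assumptions.

```python
# Toy "augmented DEQ": jointly iterate the DEQ state z and the input x
# (a sketch of the idea only; the paper's scheme and guarantees differ).
import numpy as np

rng = np.random.default_rng(0)
d, m, lr = 8, 4, 0.05
W = 0.4 * rng.standard_normal((d, d)) / np.sqrt(d)  # small norm -> contraction
U = rng.standard_normal((d, m))
b = rng.standard_normal(d)
target = 0.5 * rng.standard_normal(d)               # state we want z to match

z, x = np.zeros(d), np.zeros(m)
for _ in range(500):
    pre = W @ z + U @ x + b
    z = np.tanh(pre)                                # fixed-point update of z
    # crude one-step gradient of 0.5 * ||z - target||^2 with respect to x
    g = U.T @ ((1 - z**2) * (z - target))
    x -= lr * g                                     # simultaneous input update
print("fixed-point residual:", np.linalg.norm(np.tanh(W @ z + U @ x + b) - z))
print("objective:", 0.5 * np.sum((z - target) ** 2))
```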


Tikhonov Regularization is Optimal Transport Robust under Martingale Constraints

Neural Information Processing Systems

Distributionally robust optimization has been shown to offer a principled way to regularize learning models. In this paper, we find that Tikhonov regularization is distributionally robust in an optimal transport sense (i.e., if an adversary chooses distributions in a suitable optimal transport neighborhood of the empirical measure), provided that suitable martingale constraints are also imposed. Further, we introduce a relaxation of the martingale constraints which not only provides a unified viewpoint on a class of existing robust methods but also leads to new regularization tools. To realize these novel tools, we propose tractable computational algorithms. As a byproduct, the strong duality theorem proved in this paper can potentially be applied to other problems of independent interest.
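For reference, the Tikhonov regularizer in question is the familiar ridge penalty. The sketch below shows plain Tikhonov-regularized least squares and its closed-form solution; it is the standard construction, not the paper's optimal-transport formulation or its new relaxed tools.

```python
# Standard Tikhonov (ridge) regression: min_w ||Xw - y||^2 + lam * ||w||^2,
# the regularizer the paper shows to be optimal-transport robust
# under martingale constraints (this sketch is the classical estimator only).
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 50, 5, 0.1
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# closed form: w = (X^T X + lam * I)^{-1} X^T y
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print("estimation error:", np.linalg.norm(w_hat - w_true))
```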


On all the raised issues and how we will address them

Neural Information Processing Systems

We thank all the reviewers for their feedback and their service to the community; we summarize our response in the following paragraphs. In particular, taking localization to mean "analysis of the variance-constrained star-convex hull" … Taking localization instead to mean "constrain by raw variance," the above issue is solved, but … We intend to fully explore this connection in future work.


All Politics is Local: Redistricting via Local Fairness

Neural Information Processing Systems

In this paper, we propose to use the concept of local fairness for auditing and ranking redistricting plans. Given a redistricting plan, a deviating group is a population-balanced, contiguous region in which a majority of individuals are of the same interest and in the minority of their respective districts; such a set of individuals has a justified complaint about how the redistricting plan was drawn. A redistricting plan with no deviating groups is called locally fair. We show that the problem of auditing a given plan for local fairness is NP-complete. We present an MCMC approach for auditing as well as ranking redistricting plans. We also present a dynamic-programming-based algorithm for the auditing problem, which we use to demonstrate the efficacy of our MCMC approach. Using these tools, we test local fairness on real-world election data, showing that it is indeed possible to find plans that are almost or exactly locally fair. Further, we show that such plans can be generated while sacrificing very little in terms of compactness and existing fairness measures such as competitiveness of the districts or seat shares of the plans.
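To make the definition concrete, the sketch below verifies the deviating-group conditions for a candidate region under a simplified data model (one person per unit, a unit-to-district plan, and a unit-to-interest map); the data model and helper are ours, not the paper's implementation.

```python
# Hypothetical checker for a "deviating group" under a redistricting plan
# (simplified model: one person per unit; not the paper's code).
from collections import Counter
import networkx as nx

def is_deviating_group(G, plan, interest, region, ideal_pop, tol=0.05):
    """G: adjacency graph of units; plan: unit -> district;
    interest: unit -> group label; region: candidate set of units."""
    # the region must be contiguous ...
    if not nx.is_connected(G.subgraph(region)):
        return False
    # ... and population-balanced (within tol of a district's ideal size)
    if abs(len(region) - ideal_pop) > tol * ideal_pop:
        return False
    # a majority of the region must share one interest ...
    label, count = Counter(interest[u] for u in region).most_common(1)[0]
    if 2 * count <= len(region):
        return False
    # ... with each such member in the minority of their assigned district
    for u in region:
        if interest[u] != label:
            continue
        district = [v for v in plan if plan[v] == plan[u]]
        if 2 * sum(interest[v] == label for v in district) > len(district):
            return False
    return True
```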


Efficient Streaming Algorithms for Graphlet Sampling. Marco Bressan (CISPA Helmholtz Center for Information Security; Department of Computer Science, Saarland University)

Neural Information Processing Systems

Given a graph G and a positive integer k, the Graphlet Sampling problem asks to sample a connected induced k-vertex subgraph of G uniformly at random. Graphlet sampling enhances machine learning applications by transforming graph structures into feature vectors for tasks such as graph classification and subgraph identification, boosting neural network performance, and supporting clustered federated learning by capturing local structures and relationships.
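For intuition, the brute-force sketch below samples uniformly by enumerating every connected induced k-vertex subgraph of a small graph; this exhaustive enumeration is exactly the cost the paper's streaming algorithms avoid, and the helper name is ours.

```python
# Brute-force uniform graphlet sampling for small graphs
# (baseline for intuition only; not the paper's streaming algorithm).
import random
from itertools import combinations
import networkx as nx

def sample_graphlet(G, k):
    candidates = [S for S in combinations(G.nodes, k)
                  if nx.is_connected(G.subgraph(S))]
    return random.choice(candidates)  # uniform over connected k-subgraphs

G = nx.karate_club_graph()
print(sample_graphlet(G, 4))
```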


Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning

Neural Information Processing Systems

We study the asynchronous stochastic gradient descent algorithm for distributed training over n workers whose computation and communication speeds may vary over time. In this algorithm, workers compute stochastic gradients in parallel at their own pace and return them to the server without any synchronization.
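A single-process simulation can convey the setting; in the sketch below, each "worker" returns a gradient computed at a stale parameter copy and the server applies it on arrival. The toy quadratic objective and random-arrival staleness model are our assumptions, not the paper's protocol or analysis.

```python
# Toy asynchronous SGD: workers hold stale parameter copies, the server
# applies each incoming gradient immediately, with no synchronization.
import numpy as np

rng = np.random.default_rng(0)
w, w_star = np.zeros(2), np.array([1.0, -2.0])   # minimize 0.5 * ||w - w*||^2
lr, n_workers = 0.1, 4
copies = [w.copy() for _ in range(n_workers)]    # parameters each worker last read

for _ in range(200):
    i = rng.integers(n_workers)                  # a random worker finishes next
    g = copies[i] - w_star + 0.01 * rng.standard_normal(2)  # stale stochastic grad
    w = w - lr * g                               # server update on arrival
    copies[i] = w.copy()                         # worker fetches fresh parameters
print("distance to optimum:", np.linalg.norm(w - w_star))
```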