 Information Technology


Google's new AI shopping tool just changed the way we shop online - here's why

ZDNet

In recent years, Google Search's shopping features have evolved to make Search a one-stop shop for consumers searching for specific products, deals, and retailers. Shoppers on a budget can scour Search's Shopping tab during major sale events to see which retailer offers the best deal. But consumers often miss a product's deepest discount and end up paying more because they don't want to wait for the next sale. At this year's Google I/O developer conference, Google aims to solve this problem with AI. Shopping in Google's new AI Mode integrates Gemini's capabilities into Google's existing online shopping features, allowing consumers to use conversational phrases to find the perfect product.


Differentially Private Graph Diffusion with Applications in Personalized PageRanks

Neural Information Processing Systems

Graph diffusion, which iteratively propagates real-valued substances over a graph, is used in numerous graph- and network-based applications. However, releasing diffusion vectors may reveal sensitive linking information in the data, such as transaction records in a financial network. Protecting the privacy of graph data is challenging due to its interconnected nature. This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees, achieved by using noisy diffusion iterates. The algorithm injects Laplace noise per diffusion iteration and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes. Our privacy loss analysis is based on Privacy Amplification by Iteration (PABI), which, to the best of our knowledge, is the first effort to analyze PABI with Laplace noise and provide relevant applications. We also introduce a novel ∞-Wasserstein distance tracking method, which tightens the analysis of privacy leakage and makes PABI practically applicable. We evaluate this framework by applying it to Personalized PageRank computation for ranking tasks. Experiments on real-world network data demonstrate the superiority of our method under stringent privacy conditions.
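
The core recipe in the abstract (a PageRank-style diffusion with Laplace noise injected at every iteration and a degree-based threshold) can be sketched in a few lines. The snippet below is a minimal illustration under assumed parameters; the function name, the exact thresholding rule, and the noise scale are placeholders rather than the paper's calibrated mechanism.

```python
import numpy as np

def noisy_ppr(adj, seed, alpha=0.15, iters=20, noise_scale=0.1, deg_threshold=5):
    """Sketch of an edge-DP personalized PageRank diffusion.

    adj: dense adjacency matrix (n x n); seed: index of the source node.
    noise_scale and deg_threshold are illustrative placeholders, not values
    calibrated to a target privacy budget.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # Degree-based thresholding: suppress contributions of low-degree nodes,
    # which otherwise drive up the per-edge sensitivity.
    mask = (deg >= deg_threshold).astype(float)
    # Row-normalized transition matrix (guard against division by zero).
    P = adj / np.maximum(deg, 1.0)[:, None]

    s = np.zeros(n)
    s[seed] = 1.0
    x = s.copy()
    for _ in range(iters):
        x = (1 - alpha) * (P.T @ (x * mask)) + alpha * s
        # Laplace noise injected at every diffusion iteration (PABI-style).
        x = x + np.random.laplace(scale=noise_scale, size=n)
    return x
```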


Welcome to Google AI Mode! Everything is fine.

Mashable

If the AI lovefest of Google I/O 2025 were a TV show, you might be tempted to call it It's Always Sunny in Mountain View. But here's a better sitcom analogy for the event that added AI Mode to all U.S. search results, whether we want it or not. It's The Good Place, in which our late heroes are repeatedly assured that they've gone to a better world. A place where everything is fine, all is as it seems, and search quality just keeps getting better. Don't worry about ever-present and increasing AI hallucinations here in the Good Place, where the word "hallucination" isn't even used.


Resource-Aware Federated Self-Supervised Learning with Global Class Representations

Neural Information Processing Systems

Due to heterogeneous architectures and class skew, training global representation models in resource-adaptive federated self-supervised learning faces tricky challenges: deviated representation abilities and inconsistent representation spaces. In this work, we are the first to propose a multi-teacher knowledge distillation framework, namely FedMKD, to learn global representations with whole-class knowledge from heterogeneous clients even under extreme class skew. First, an adaptive knowledge integration mechanism is designed to learn better representations from all heterogeneous models despite their deviated representation abilities. Then, a weighted combination of the self-supervised loss and the distillation loss supports the global model in encoding all classes from clients into a unified space. Besides, a global-knowledge-anchored alignment module draws the local representation spaces close to the global space, which further improves the representation abilities of the local models. Finally, extensive experiments on two datasets demonstrate the effectiveness of FedMKD, which outperforms state-of-the-art baselines by 4.78% on average under linear evaluation.
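
The weighted combination of a self-supervised loss with a multi-teacher distillation loss can be sketched as follows. This is a minimal illustration assuming feature-space MSE distillation and a fixed mixing weight `lam`; the actual FedMKD objective and its adaptive knowledge-integration weights are not reproduced here.

```python
import torch
import torch.nn.functional as F

def multi_teacher_loss(student_feats, teacher_feats_list, teacher_weights,
                       ssl_loss, lam=0.5):
    """Illustrative combination of a self-supervised loss and a weighted
    multi-teacher distillation loss; the weighting scheme and `lam` are
    assumptions, not the exact FedMKD formulation.
    """
    # Pull the global (student) features toward an importance-weighted
    # mixture of the heterogeneous client teachers.
    distill = torch.zeros((), device=student_feats.device)
    for w, t_feats in zip(teacher_weights, teacher_feats_list):
        distill = distill + w * F.mse_loss(student_feats, t_feats.detach())
    return ssl_loss + lam * distill
```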


Covariance-Aware Private Mean Estimation Without Private Covariance Estimation

Neural Information Processing Systems

Each of our estimators is based on a simple, general approach to designing differentially private mechanisms, but with novel technical steps to make the estimator private and sample-efficient. Our first estimator samples a point with approximately maximum Tukey depth using the exponential mechanism, but restricted to the set of points of large Tukey depth. Proving that this mechanism is private requires a novel analysis. Our second estimator perturbs the empirical mean of the data set with noise calibrated to the empirical covariance, without releasing the covariance itself. Its sample complexity guarantees hold more generally for subgaussian distributions, albeit with a slightly worse dependence on the privacy parameter. For both estimators, careful preprocessing of the data is required to satisfy differential privacy.
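
The second estimator's core idea, perturbing the empirical mean with noise shaped by the empirical covariance without ever releasing that covariance, can be sketched as below. The calibration, clipping, and preprocessing the paper requires for a formal (epsilon, delta) guarantee are omitted; `noise_multiplier` is a placeholder, not the paper's calibration.

```python
import numpy as np

def covariance_shaped_private_mean(X, noise_multiplier=1.0, rng=None):
    """Sketch of covariance-aware mean perturbation.

    X: (n, d) data matrix. The noise is drawn with covariance proportional to
    the empirical covariance, so its shape adapts to the data, but the
    covariance matrix itself is never output.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # Noise ~ N(0, c * Sigma_hat / n): covariance-aware, covariance not released.
    noise = rng.multivariate_normal(np.zeros(d), (noise_multiplier / n) * cov)
    return mean + noise
```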


'Every person that clashed with him has left': the rise, fall and spectacular comeback of Sam Altman

The Guardian

The short-lived firing of Sam Altman, the CEO of possibly the world's most important AI company, was sensational. When he was sacked by OpenAI's board members, some of them believed the stakes could not have been higher – the future of humanity – if the organisation continued under Altman. Imagine Succession, with added apocalypse vibes. In early November 2023, after three weeks of secret calls and varying degrees of paranoia, the OpenAI board agreed: Altman had to go. After his removal, Altman's most loyal staff resigned, and others signed an open letter calling for his reinstatement.



Everything you need to know from Google I/O 2025

Mashable

From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long. During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example that Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago. But, we know what you're really here for: Product updates and new product announcements.


Introducing Flow, Google's new AI video tool and Sora competitor

Mashable

Google's AI Era is officially here, and at the center of it is a new generative AI video tool called Flow. At the Google I/O 2025 keynote event on May 20, Google unveiled a new suite of AI video tools, powered by state-of-the-art models. The offspring of media models Veo 3 and Imagen 4, Flow is Google's answer to OpenAI's Sora -- AI tools for a new era in video generation for filmmakers and creatives. However, unlike Sora, Flow comes with native audio generation baked right in. Pitched as an "AI filmmaking tool built for creatives, by creatives," Flow is the tech giant's latest attempt to demonstrate the power of AI in reshaping the creative process.


You can sign up for Google's AI coding tool Jules right now

Mashable

Google just rolled out a product that might make coding a lot easier. Google introduced Jules, its AI coding tool, in December in Google Labs. Today, Jules is available to everyone and everywhere the Gemini model is available, without a waitlist. "Just submit a task, and Jules takes care of the rest -- fixing bugs, making updates. It integrates with GitHub and works on its own," Tulsee Doshi, the senior director and product lead for Gemini Models, said at Google I/O 2025. "Jules can tackle complex tasks in large codebases that used to take hours, like updating an older version of Node.js."