Security & Privacy


Heterogeneity-Aware Personalized Federated Learning for Industrial Predictive Analytics

Hu, Yuhan, Fang, Xiaolei

arXiv.org Machine Learning

Federated prognostics enables clients (e.g., companies, factories, and production lines) to collaboratively develop a failure time prediction model while keeping each client's data local and confidential. However, traditional federated models often assume homogeneity in the degradation processes across clients, an assumption that may not hold in many industrial settings. To overcome this limitation, this paper proposes a personalized federated prognostic model designed to accommodate clients with heterogeneous degradation processes, allowing them to build tailored prognostic models. The prognostic model iteratively facilitates the underlying pairwise collaborations between clients with similar degradation patterns, which enhances the performance of personalized federated learning. To estimate parameters jointly using decentralized datasets, we develop a federated parameter estimation algorithm based on proximal gradient descent. The proposed approach addresses the limitations of existing federated prognostic models by simultaneously achieving model personalization, preserving data privacy, and providing comprehensive failure time distributions. The superiority of the proposed model is validated through extensive simulation studies and a case study using the turbofan engine degradation dataset from the NASA repository.
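The abstract describes a proximal-gradient scheme in which clients with similar degradation patterns pull each other's parameters together. A minimal toy sketch of that idea follows; the similarity kernel, penalty weight, and linear per-client models here are illustrative assumptions, not the paper's actual algorithm. Each client takes a local gradient step, then a closed-form proximal step shrinks its parameter toward a similarity-weighted average of the other clients' parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 clients in two hypothetical degradation "groups".
# Clients 0,1 share slope 2.0; clients 2,3 share slope -1.0.
true_w = [2.0, 2.0, -1.0, -1.0]
data = []
for w in true_w:
    X = rng.normal(size=(50, 1))
    y = X[:, 0] * w + 0.1 * rng.normal(size=50)
    data.append((X, y))

K, lr, mu = 4, 0.1, 0.5   # mu weights the pairwise-collaboration penalty
theta = np.zeros(K)

def local_grad(k):
    """Gradient of client k's least-squares loss at the current theta."""
    X, y = data[k]
    return 2 * X[:, 0] @ (X[:, 0] * theta[k] - y) / len(y)

for _ in range(200):
    # Similarity weights from current parameter distances (illustrative choice).
    d = np.abs(theta[:, None] - theta[None, :])
    sim = np.exp(-5 * d)
    np.fill_diagonal(sim, 0)
    sim /= sim.sum(axis=1, keepdims=True) + 1e-12
    new = theta.copy()
    for k in range(K):
        z = theta[k] - lr * local_grad(k)          # local gradient step
        anchor = sim[k] @ theta                    # similarity-weighted neighbor average
        new[k] = (z + lr * mu * anchor) / (1 + lr * mu)  # closed-form proximal step
    theta = new
```

After a few hundred rounds the parameters cluster by group (clients 0 and 1 near 2.0, clients 2 and 3 near -1.0), since the exponential kernel effectively zeroes out cross-group collaboration once the groups separate.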


Mozilla says it patched 271 Firefox vulnerabilities thanks to Anthropic's Claude Mythos

Engadget

Anthropic's buzzy announcement about using AI to improve cybersecurity earlier this month was met with plenty of skepticism. However, Mozilla shared some details that support the use of the company's special Claude Mythos Preview model as a way to protect critical services. Using Mythos helped Mozilla's team find and patch 271 vulnerabilities in the latest release of the Firefox browser. "So far we've found no category or complexity of vulnerability that humans can find that this model can't," the foundation said. The blog post from Mozilla feels like a positive sign for Anthropic's Project Glasswing.


Mozilla Used Anthropic's Mythos to Find and Fix 271 Bugs in Firefox

WIRED

The Firefox team doesn't think emerging AI capabilities will upend cybersecurity long term, but they warn that software developers are likely in for a rocky transition. Amid a raging debate over the impact that new AI models will have on cybersecurity, Mozilla said on Tuesday that its Firefox 150 browser release this week includes protections for 271 vulnerabilities identified using early access to Anthropic's Mythos Preview. The Firefox team says that it has taken resources and discipline to adjust to the firehose of bugs that new AI tools can uncover, but that this big lift is necessary for the security of Mozilla's users, given that the capabilities will inevitably be in attackers' hands soon. Both Anthropic and OpenAI have announced new AI models in recent weeks that the companies say have advanced cybersecurity capabilities that could represent a turning point in how defenders--and, crucially, attackers--find vulnerabilities and misconfigurations in software systems. With this in mind, the companies have so far only done limited private releases of their new models, and both have also convened industry working groups meant to assess the advances and strategize.


Robot Talk Episode 149 – Robot safety and security, with Krystal Mattich

Robohub

Krystal Mattich leads global data governance, system security, and privacy compliance for Brain Corp, the world's leading autonomy platform for commercial robotics. As Senior Director of Security, Privacy, and Risk, she is the architect of the privacy-first infrastructure that powers over 40,000 BrainOS-enabled robots across retail, airports, education, and logistics. Krystal played a central role in launching Brain Corp's public-facing Trust Center, reinforcing the company's commitment to data transparency, GDPR compliance, and responsible AI. Robot Talk is a weekly podcast that explores the exciting world of robotics, artificial intelligence and autonomous machines.


Flat-rate AI plans are broken. Blame AI agents

PCWorld

PCWorld reports that major AI providers including Anthropic, Google, OpenAI, and GitHub are adjusting flat-rate subscription plans due to increased demand from agentic AI tools. Advanced AI agents like Google Antigravity and GitHub Copilot consume significantly more computational resources than traditional AI interactions, causing users to hit usage limits more frequently. The shift toward agentic workflows is forcing providers to introduce higher-tier plans, halt new sign-ups, and transition to usage-based models, fundamentally changing AI service accessibility. Remember when a $20-a-month "Pro" or "Plus" AI plan served up more AI access than you could possibly use? Ah, those were the days.


Bing is the anti-AI search engine you should be using

PCWorld

PCWorld argues that Bing serves as a superior alternative to AI-heavy search engines by prioritizing human-authored content over automated summaries. AI search engines like Google's AI Mode often hide original sources and provide misleading information, with traffic to publishers dropping significantly.


Make stock decisions without the guesswork with this tool--now 85% off

PCWorld

When you purchase through links in our articles, we may earn a small commission. Sterling Stock Picker uses AI and data-driven tools to help you find stocks, assess risk, and build a portfolio--now available for a one-time payment of $68.99. Investing can feel like a strange mix of research, guesswork, and hoping you didn't miss something important. Sterling Stock Picker tries to simplify things and make investing in stocks less confusing. Instead of digging through endless charts and reports, the platform brings everything into one place.


Differentially Private Conformal Prediction

Wu, Jiamei, Zhang, Ce, Cai, Zhipeng, Kong, Jingsen, Jiang, Bei, Kong, Linglong, Kong, Lingchen

arXiv.org Machine Learning

Conformal prediction (CP) has attracted broad attention as a simple and flexible framework for uncertainty quantification through prediction sets. In this work, we study how to deploy CP under differential privacy (DP) in a statistically efficient manner. We first introduce differential CP, a non-splitting conformal procedure that avoids the efficiency loss caused by data splitting and serves as a bridge between oracle CP and private conformal inference. By exploiting the stability properties of DP mechanisms, differential CP establishes a direct connection to oracle CP and inherits corresponding validity behavior. Building on this idea, we develop Differentially Private Conformal Prediction (DPCP), a fully private procedure that combines DP model training with a private quantile mechanism for calibration. We establish the end-to-end privacy guarantee of DPCP and investigate its coverage properties under additional regularity conditions. We further study the efficiency of both differential CP and DPCP under empirical risk minimization and general regression models, showing that DPCP can produce tighter prediction sets than existing private split conformal approaches under the same privacy budget. Numerical experiments on synthetic and real datasets demonstrate the practical effectiveness of the proposed methods.
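As background for the split-conformal baseline the abstract compares against, here is a minimal sketch of split conformal prediction with a noised calibration quantile. The Laplace perturbation and the sensitivity value below are crude illustrative stand-ins, not the paper's private quantile mechanism (properly calibrated private quantile release typically uses something like the exponential mechanism), and the data, model, and parameter choices are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data (hypothetical setup for illustration).
n = 2000
X = rng.normal(size=n)
y = 2 * X + rng.normal(scale=1.0, size=n)

# Split: proper training half vs. calibration half.
X_tr, y_tr = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

# Fit a simple model on the training half (least-squares slope).
w = np.dot(X_tr, y_tr) / np.dot(X_tr, X_tr)

# Calibration: absolute residuals as conformity scores.
scores = np.abs(y_cal - w * X_cal)

alpha = 0.1
n_cal = len(scores)
k = int(np.ceil((n_cal + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]          # standard split-conformal quantile

# Crude privatization: Laplace noise on the released threshold.
# The sensitivity here is a placeholder, not a rigorous DP calibration.
epsilon = 1.0
sensitivity = np.max(scores) / n_cal
q_priv = q + rng.laplace(scale=sensitivity / epsilon)

# Prediction interval for a new point x0.
x0 = 0.5
interval = (w * x0 - q_priv, w * x0 + q_priv)
```

The abstract's point is that splitting the data (as above) costs statistical efficiency, which the proposed non-splitting differential CP and DPCP procedures are designed to avoid under the same privacy budget.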


This prompt trick forces AI to stop flattering you and think harder

PCWorld

Worried your AI chatbot is just yessing you? Here's a prompt that will make it challenge its own assumptions. I wish I had a nickel for every time ChatGPT, Claude, or Gemini told me I'd hit the nail on the head, stumbled onto a genius idea, or otherwise patted me on the back for a half-formed idea or ill-conceived plan. Flattery and premature congratulations are common foibles of generative AI chatbots, with some models more susceptible to being "yes-bots" than others.


A Nonparametric Adaptive EWMA Control Chart for Binary Monitoring of Multiple Stream Processes

Muritala, Faruk, Brown, Austin, Ghosh, Dhrubajyoti, Ni, Sherry

arXiv.org Machine Learning

Monitoring binomial proportions across multiple independent streams is a critical challenge in Statistical Process Control (SPC), with applications from manufacturing to cybersecurity. While EWMA charts offer sensitivity to small shifts, existing implementations rely on asymptotic variance approximations that fail during early-phase monitoring. We introduce a Cumulative Standardized Binomial EWMA (CSB-EWMA) chart that overcomes this limitation by deriving the exact time-varying variance of the EWMA statistic for binary multiple-stream data, enabling adaptive control limits that ensure statistical rigor from the first sample. Through extensive simulations, we identify optimal smoothing (λ) and limit (L) parameters to achieve target in-control average run lengths (ARL0) of 370 and 500. The CSB-EWMA chart demonstrates rapid shift detection across both ARL0 targets, with out-of-control average run length (ARL1) dropping to 3-7 samples for moderate shifts (δ=0.2), and exhibits exceptional robustness across different data distributions, with low ARL1 Coefficients of Variation (CV < 0.10 for small shifts) for both ARL0 = 370 and 500. This work provides practitioners with a distribution-free, sensitive, and theoretically sound tool for early change detection in binomial multiple-stream processes.
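The exact time-varying variance the abstract contrasts with the asymptotic approximation is the standard EWMA result Var(Z_t) = σ² λ[1 − (1−λ)^{2t}]/(2−λ) with σ² = p0(1−p0)/n, which converges to the asymptotic value σ² λ/(2−λ) as t grows. A minimal single-stream sketch of an EWMA chart for binomial proportions with these exact limits follows; the parameter values, the simulated shift, and the single-stream simplification are illustrative assumptions, not the paper's CSB-EWMA design.

```python
import numpy as np

rng = np.random.default_rng(2)

p0, n_per, lam, L = 0.1, 100, 0.2, 3.0  # in-control rate, sample size, smoothing, limit width

# Simulate 30 in-control samples, then a process shifted to p = 0.25.
counts = np.concatenate([
    rng.binomial(n_per, p0, size=30),
    rng.binomial(n_per, 0.25, size=20),
])
phat = counts / n_per

sigma2 = p0 * (1 - p0) / n_per
z = p0                      # EWMA statistic initialized at the in-control rate
signal_at = None
for t, p in enumerate(phat, start=1):
    z = lam * p + (1 - lam) * z
    # Exact time-varying variance; the asymptotic form sigma2*lam/(2-lam)
    # overstates the variance at small t, making early limits too wide.
    var_t = sigma2 * lam * (1 - (1 - lam) ** (2 * t)) / (2 - lam)
    ucl = p0 + L * np.sqrt(var_t)
    lcl = p0 - L * np.sqrt(var_t)
    if (z > ucl or z < lcl) and signal_at is None:
        signal_at = t
```

Because the exact limits are tighter than the asymptotic ones for small t, the chart can flag shifts during the early samples that an asymptotic-limit chart would miss, which is the failure mode the abstract highlights.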