Heterogeneity-Aware Personalized Federated Learning for Industrial Predictive Analytics

Hu, Yuhan, Fang, Xiaolei

arXiv.org Machine Learning

Federated prognostics enables clients (e.g., companies, factories, and production lines) to collaboratively develop a failure time prediction model while keeping each client's data local and confidential. However, traditional federated models often assume homogeneity in the degradation processes across clients, an assumption that may not hold in many industrial settings. To overcome this limitation, this paper proposes a personalized federated prognostic model designed to accommodate clients with heterogeneous degradation processes, allowing them to build tailored prognostic models. The model iteratively facilitates pairwise collaborations between clients with similar degradation patterns, which enhances the performance of personalized federated learning. To estimate parameters jointly using decentralized datasets, we develop a federated parameter estimation algorithm based on proximal gradient descent. The proposed approach addresses the limitations of existing federated prognostic models by simultaneously achieving model personalization, preserving data privacy, and providing comprehensive failure time distributions. Its superiority is validated through extensive simulation studies and a case study using the turbofan engine degradation dataset from the NASA repository.
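As a rough illustration of the idea, the sketch below alternates local gradient steps with a parameter-fusion step that pulls each client toward the clients it collaborates with. The squared-error loss, the similarity weights, and the `prox_fuse` helper are hypothetical simplifications, not the paper's actual algorithm; note that only parameters, never raw data, leave each client.

```python
import numpy as np

def local_grad(theta, X, y):
    # Gradient of a client's local squared-error loss (computed on-site).
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def prox_fuse(thetas, weights, lam, lr):
    # Hypothetical fusion step standing in for the paper's pairwise
    # collaboration mechanism: pull each client's parameters toward those
    # of clients with similar degradation patterns.
    n = len(thetas)
    return [th + lr * lam * sum(weights[k][j] * (thetas[j] - th)
                                for j in range(n))
            for k, th in enumerate(thetas)]

def personalized_fed_fit(data, weights, lam=1.0, lr=0.1, rounds=500):
    thetas = [np.zeros(X.shape[1]) for X, _ in data]
    for _ in range(rounds):
        # Local step on each client's private data ...
        thetas = [th - lr * local_grad(th, X, y)
                  for th, (X, y) in zip(thetas, data)]
        # ... then a server-coordinated fusion step on parameters only.
        thetas = prox_fuse(thetas, weights, lam, lr)
    return thetas

# Toy setup: clients 0 and 1 degrade at the same rate, client 2 differs.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)[:, None]
X = np.hstack([t, np.ones_like(t)])          # slope + intercept features
data = [(X, 2.0 * t[:, 0] + 0.1 * rng.normal(size=50)),
        (X, 2.0 * t[:, 0] + 0.1 * rng.normal(size=50)),
        (X, 5.0 * t[:, 0] + 0.1 * rng.normal(size=50))]
weights = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]  # only clients 0 and 1 collaborate
thetas = personalized_fed_fit(data, weights)
```

Each client ends up with its own slope estimate: the fusion step helps the two similar clients without dragging the dissimilar one toward their degradation rate.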


Tight Convergence Rates for Online Distributed Linear Estimation with Adversarial Measurements

Roy, Nibedita, Halder, Vishal, Thoppe, Gugan, Reiffers-Masson, Alexandre, Dhanakshirur, Mihir, Naman, Azor, Alexandre

arXiv.org Machine Learning

We study mean estimation of a random vector $X$ in a distributed parameter-server-worker setup. Worker $i$ observes samples of $a_i^\top X$, where $a_i^\top$ is the $i$th row of a known sensing matrix $A$. The key challenges are adversarial measurements and asynchrony: a fixed subset of workers may transmit corrupted measurements, and workers are activated asynchronously--only one is active at any time. In our previous work, we proposed a two-timescale $\ell_1$-minimization algorithm and established asymptotic recovery under a null-space-property-like condition on $A$. In this work, we establish tight non-asymptotic convergence rates under the same null-space-property-like condition. We also identify relaxed conditions on $A$ under which exact recovery may fail but recovery of a projected component of $\mathbb{E}[X]$ remains possible. Overall, our results provide a unified finite-time characterization of robustness, identifiability, and statistical efficiency in distributed linear estimation with adversarial workers, with implications for network tomography and related distributed sensing problems.
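The recovery principle can be illustrated with a centralized $\ell_1$-regression stand-in (the paper's algorithm is asynchronous and two-timescale; this sketch, which assumes a Gaussian sensing matrix and uses SciPy's LP solver, only shows why a minority of corrupted rows need not bias the estimate):

```python
import numpy as np
from scipy.optimize import linprog

def l1_estimate(A, b):
    # Solve min_x ||A x - b||_1 as a linear program over variables [x, t]:
    # minimize sum(t) subject to -t <= A x - b <= t.
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))        # one sensing row per worker
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[:2] += 10.0                       # two adversarial workers corrupt their rows
x_hat = l1_estimate(A, b)
```

Because only 2 of 20 rows are corrupted and the Gaussian matrix satisfies a null-space-type condition with high probability, the $\ell_1$ solution recovers `x_true` despite the gross outliers, where ordinary least squares would be badly biased.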


BVFLMSP: Bayesian Vertical Federated Learning for Multimodal Survival with Privacy

Kar, Abhilash, Saha, Basisth, Sen, Tanmay, Pradhan, Biswabrata

arXiv.org Machine Learning

Multimodal time-to-event prediction often requires integrating sensitive data distributed across multiple parties, making centralized model training impractical due to privacy constraints. At the same time, most existing multimodal survival models produce single deterministic predictions without indicating how confident the model is in its estimates, which can limit their reliability in real-world decision making. To address these challenges, we propose BVFLMSP, a Bayesian Vertical Federated Learning (VFL) framework for multimodal time-to-event analysis based on a Split Neural Network architecture. In BVFLMSP, each client independently models a specific data modality using a Bayesian neural network, while a central server aggregates intermediate representations to perform survival risk prediction. To enhance privacy, we integrate differential privacy mechanisms by perturbing client-side representations before transmission, providing formal privacy guarantees against information leakage during federated training. We first evaluate our Bayesian multimodal survival model against widely used single-modality survival baselines and the centralized multimodal baseline MultiSurv. Across multimodal settings, the proposed method shows consistent improvements in discrimination performance, with up to 0.02 higher C-index compared to MultiSurv. We then compare federated and centralized learning under varying privacy budgets across different modality combinations, highlighting the tradeoff between predictive performance and privacy. Experimental results show that BVFLMSP effectively integrates multimodal data, improves survival prediction over existing baselines, and remains robust under strict privacy constraints while providing uncertainty estimates.
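A minimal sketch of the client-side perturbation step, assuming a standard Gaussian mechanism (the clip threshold, noise scale, and function name here are illustrative, not BVFLMSP's actual interface):

```python
import numpy as np

def privatize(rep, clip=1.0, sigma=0.5, rng=None):
    # Clip the representation's L2 norm so its sensitivity is at most
    # `clip`, then add Gaussian noise calibrated to that sensitivity.
    # For a single release, sigma = sqrt(2 ln(1.25/delta)) / epsilon
    # yields (epsilon, delta)-DP for the clipped vector.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(rep)
    clipped = rep * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=rep.shape)

# The server only ever receives the noisy representation:
noisy = privatize(np.array([3.0, 4.0, 0.0, 0.0]), clip=1.0, sigma=0.5,
                  rng=np.random.default_rng(0))
```

Smaller `sigma` preserves more predictive signal but weakens the privacy guarantee, which is exactly the performance-privacy tradeoff the abstract describes.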


A Federated Many-to-One Hopfield Model for Associative Neural Networks

Alessandrelli, Andrea, Durante, Fabrizio, Ladiana, Andrea, Lepre, Andrea

arXiv.org Machine Learning

Federated learning enables collaborative training without sharing raw data, but struggles under client heterogeneity and streaming distribution shifts, where drift and novel data can impair convergence and cause forgetting. We propose a federated associative-memory framework that learns shared archetypes in heterogeneous, continual settings, where client data are independent but not necessarily balanced. Each client encodes its experience as a low-rank Hebbian operator, sent to a central server for aggregation and factorization into global archetypes. This approach preserves privacy, avoids centralized replay buffers, and is robust to small, noisy, or evolving datasets. We cast aggregation as a low-rank-plus-noise spectral inference problem, deriving theoretical thresholds for detectability and retrieval robustness. An entropy-based controller balances stability and plasticity in streaming regimes. Experiments with heterogeneous clients, drift, and novelty show improved global archetype reconstruction and associative retrieval, supporting the spectral view of federated consolidation.
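The aggregation-and-factorization step can be sketched as follows; the bit-flip noise model and the client and archetype counts are illustrative assumptions, and only the low-rank operators, never the patterns themselves, are sent to the server:

```python
import numpy as np

def hebbian_operator(patterns):
    # Client side: encode local examples as a low-rank Hebbian operator.
    P = np.asarray(patterns, dtype=float)
    return P.T @ P / len(P)

def recover_archetypes(operators, r):
    # Server side: aggregate the client operators and keep the top-r
    # eigenvectors, which play the role of the global archetypes
    # (the low-rank-plus-noise spectral inference step).
    M = sum(operators)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, np.argsort(vals)[::-1][:r]]

rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=(2, 20))   # ground-truth archetypes

def noisy_copies(v, n, p=0.1):
    # Each client observes corrupted examples: every bit flips w.p. p.
    flips = np.where(rng.random((n, len(v))) < p, -1.0, 1.0)
    return v * flips

# Three clients, each holding noisy examples of both archetypes.
ops = [hebbian_operator(np.vstack([noisy_copies(xi[0], 20),
                                   noisy_copies(xi[1], 20)]))
       for _ in range(3)]
U = recover_archetypes(ops, r=2)             # 20 x 2 orthonormal basis
```

The recovered subspace captures nearly all of each archetype's energy even though every individual example is noisy, which is the detectability regime the spectral thresholds characterize.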


Three people have been charged with illegally exporting NVIDIA GPUs to China

Engadget

The GPUs were placed in servers that were supposed to be shipped from Taiwan to companies in Southeast Asia. The US Attorney's Office for the Southern District of New York has charged three people with illegally exporting NVIDIA GPUs to China in violation of the Export Control Reform Act. NVIDIA's chips have become a critical component in the rush to train and run increasingly complex artificial intelligence models, a market the US has sought to control through export restrictions and profit-sharing arrangements with NVIDIA. The three people, Yih-Shyan Wally Liaw, Ruei-Tsang Steven Chang and Ting-Wei Willy Sun, two employees and one contractor working for US IT company Super Micro Computer, allegedly circumvented export control laws through a multi-step scheme: they created fake orders from Southeast Asian companies for servers with NVIDIA chips, which were then secretly sent to China. The plan involved paying a logistics company to repackage the servers in Taiwan, staging dummy servers to be inspected by Super Micro Computer's compliance team, and falsifying records so that Liaw, Chang, and Sun's employer was unaware of where the servers were actually being sent.


Feds charge 3 in $2.5B scheme to smuggle US AI tech to China using dummy servers

FOX News

Federal prosecutors charged three men linked to Supermicro with allegedly smuggling $2.5 billion in U.S. AI technology to China using fake documents, dummy servers, and shell companies.


Trio charged over alleged plot to smuggle Nvidia chips from US to China

BBC News

A trio linked to a US technology supplier have been charged over an alleged plot to smuggle American artificial intelligence (AI) chips to China, the Department of Justice said on Thursday. The individuals allegedly conspired to sell billions of dollars' worth of technology to buyers in China by faking documents and using dummy equipment to slip past audits, according to the DOJ. The goods in question included Nvidia-made semiconductors, highly coveted AI chips that are subject to export controls. In August 2025, two Chinese nationals were also arrested and charged with illegally shipping millions of dollars' worth of Nvidia chips to China. The DOJ said in a statement on Thursday that it had arrested US citizen Yih-Shyan Wally Liaw and Taiwanese citizen Ting-Wei Willy Sun, while Ruei-Tsang Steven Chang, a Taiwanese citizen, remains a fugitive.


STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning

Neural Information Processing Systems

Federated Learning (FL) refers to the paradigm where multiple worker nodes (WNs) build a joint model by using local data. Despite extensive research, for a generic non-convex FL problem it is not clear how to choose the WNs' and the server's update directions, the minibatch sizes, and the local update frequency so that the WNs use the minimum number of samples and communication rounds to achieve the desired solution. This work addresses the above question and considers a class of stochastic algorithms where the WNs perform a few local updates before communication. We show that when both the WNs' and the server's directions are chosen based on a stochastic momentum estimator, the algorithm requires $\tilde{\mathcal{O}}(\epsilon^{-3/2})$ samples and $\tilde{\mathcal{O}}(\epsilon^{-1})$ communication rounds to compute an $\epsilon$-stationary solution. To the best of our knowledge, this is the first FL algorithm that achieves such {\it near-optimal} sample and communication complexities simultaneously. Further, we show that there is a trade-off curve between local update frequencies and local minibatch sizes on which the above sample and communication complexities can be maintained.
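The momentum estimator at the heart of this construction can be illustrated on a single node; the toy objective $\mathbb{E}\|x - Z\|^2$ and all step sizes below are assumptions, and STEM additionally applies the same estimator on the server side across communication rounds:

```python
import numpy as np

mu = np.array([1.0, -2.0, 3.0])          # optimum of the toy objective
grad = lambda x, z: 2.0 * (x - z)        # stochastic gradient of E||x - Z||^2

def storm_step(x, d, lr, a, rng):
    # STORM-type momentum: reuse the previous direction d, corrected by a
    # gradient difference evaluated on the *same* fresh sample z, which
    # shrinks the variance of the search direction over time.
    x_new = x - lr * d
    z = rng.normal(mu, 0.5)              # one fresh sample per step
    d_new = grad(x_new, z) + (1 - a) * (d - grad(x, z))
    return x_new, d_new

rng = np.random.default_rng(0)
x = np.zeros(3)
d = grad(x, rng.normal(mu, 0.5))
for _ in range(500):
    x, d = storm_step(x, d, lr=0.05, a=0.1, rng=rng)
```

After a few hundred steps the iterate sits close to the optimum without any minibatch averaging, because the correction term cancels most of the sampling noise carried over from previous steps.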


Communication-Efficient Distributed Learning of Discrete Distributions

Neural Information Processing Systems

We initiate a systematic investigation of distribution learning (density estimation) when the data is distributed across multiple servers. The servers must communicate with a referee, and the goal is to estimate the underlying distribution with as few bits of communication as possible. We focus on non-parametric density estimation of discrete distributions with respect to the $\ell_1$ and $\ell_2$ norms. We provide the first non-trivial upper and lower bounds on the communication complexity of this basic estimation task in various settings of interest. Specifically, our results include the following: 1. When the unknown discrete distribution is unstructured and each server has only one sample, we show that any blackboard protocol (i.e., any protocol in which servers interact arbitrarily using public messages) that learns the distribution must essentially communicate the entire sample.
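For intuition, the trivial blackboard protocol that this lower bound says is essentially unavoidable in the unstructured one-sample-per-server case looks like this (helper names are illustrative; each server simply publishes its sample over a domain of size k):

```python
import math
from collections import Counter

def naive_blackboard_protocol(samples, k):
    # Each server publishes its single sample verbatim, costing
    # ceil(log2 k) bits per server; the referee then outputs the
    # empirical distribution over the domain {0, ..., k-1}.
    bits_used = len(samples) * math.ceil(math.log2(k))
    counts = Counter(samples)
    estimate = [counts[i] / len(samples) for i in range(k)]
    return estimate, bits_used

# Eight servers, domain size 4: 8 * ceil(log2 4) = 16 bits total.
est, bits = naive_blackboard_protocol([0, 1, 1, 2, 3, 3, 3, 0], k=4)
```

The lower bound says no blackboard protocol can do asymptotically better than this in the unstructured setting, which is what makes the structured regimes in the paper's remaining results interesting.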


How to Set Up Your Own NAS Server for Backups and Content Streaming

WIRED
