John Solly Is the DOGE Operative Accused of Planning to Take Social Security Data to His New Job

WIRED

A whistleblower complaint alleges John Solly claimed to have stored highly sensitive Social Security data on a thumb drive. Solly and Leidos, his current employer, strongly deny the allegations. John Solly, a software engineer and former member of the so-called Department of Government Efficiency (DOGE), is the DOGE operative reportedly accused in a whistleblower complaint of telling colleagues that he stored sensitive Social Security Administration (SSA) data on a thumb drive and wanted to share the information with his new employer, multiple sources tell WIRED. Since October, according to a copy of his résumé, Solly has worked as the chief technology officer for the health IT division of a government contractor called Leidos, which has already received millions in SSA contracts and could receive up to $1.5 billion in contracts with SSA based on a five-year deal it signed in 2023. Solly's personal website and LinkedIn have been taken offline as of this week.



Social Security Workers Are Being Told to Hand Over Appointment Details to ICE

WIRED

The recent request goes against decades of precedent and puts noncitizens at further risk of immigration enforcement actions. Workers at the Social Security Administration have been told to share information about in-person appointments with agents of Immigration and Customs Enforcement, WIRED has learned. "If ICE comes in and asks if someone has an upcoming appointment, we will let them know the date and time," an employee with direct knowledge of the directive says. They spoke on the condition of anonymity for fear of retaliation. While the majority of appointments with SSA take place over the phone, some appointments still happen in person.



Interpretable Neural Approximation of Stochastic Reaction Dynamics with Guaranteed Reliability

Badolle, Quentin, Theuer, Arthur, Fang, Zhou, Gupta, Ankit, Khammash, Mustafa

arXiv.org Machine Learning

Stochastic Reaction Networks (SRNs) are a fundamental modeling framework for systems ranging from chemical kinetics and epidemiology to ecological and synthetic biological processes. A central computational challenge is the estimation of expected outputs across initial conditions and times, a task that is rarely solvable analytically and becomes computationally prohibitive with current methods such as Finite State Projection or the Stochastic Simulation Algorithm. Existing deep learning approaches offer empirical scalability, but provide neither interpretability nor reliability guarantees, limiting their use in scientific analysis and in applications where model outputs inform real-world decisions. Here we introduce DeepSKA, a neural framework that jointly achieves interpretability, guaranteed reliability, and substantial computational gains. DeepSKA yields mathematically transparent representations that generalise across states, times, and output functions, and it integrates this structure with a small number of stochastic simulations to produce unbiased, provably convergent, and dramatically lower-variance estimates than classical Monte Carlo. We demonstrate these capabilities across nine SRNs, including nonlinear and non-mass-action models with up to ten species, where DeepSKA delivers accurate predictions and orders-of-magnitude efficiency improvements. This interpretable and reliable neural framework offers a principled foundation for developing analogous methods for other Markovian systems, including stochastic differential equations.
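The classical Monte Carlo baseline the abstract compares against can be illustrated with a minimal Gillespie-style Stochastic Simulation Algorithm. The sketch below (a hypothetical birth-death network, not a model from the paper) estimates an expected output E[X(t)] by averaging endpoint states over simulated trajectories, the approach whose variance DeepSKA is designed to reduce:

```python
import numpy as np

def gillespie_birth_death(x0, birth, death, t_end, rng):
    """One SSA trajectory of a birth-death process:
    0 -> X at rate `birth`, X -> 0 at rate `death * x`.
    Returns the state at time t_end."""
    t, x = 0.0, x0
    while True:
        rates = np.array([birth, death * x])
        total = rates.sum()
        if total == 0.0:
            return x  # absorbing state: nothing can fire
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x  # next reaction would occur after the horizon
        # choose which reaction fires, proportionally to its rate
        if rng.random() < rates[0] / total:
            x += 1
        else:
            x -= 1

rng = np.random.default_rng(0)
samples = [gillespie_birth_death(x0=0, birth=10.0, death=1.0,
                                 t_end=5.0, rng=rng)
           for _ in range(2000)]
mc_mean = np.mean(samples)
# For this linear network the analytic mean is
# (birth/death) * (1 - exp(-death * t_end)), so the Monte Carlo
# estimate should land close to that value.
```

Each additional species or nonlinear rate multiplies the cost of this estimator, which is the scalability barrier the abstract describes; DeepSKA's claim is that a learned, interpretable representation plus a small number of such simulations yields unbiased estimates with far lower variance.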


Social Security Data Is Openly Being Shared With DHS to Target Immigrants

WIRED

For months, the Social Security Administration was quietly sharing sensitive data about immigrants with DHS. Last week, the Social Security Administration (SSA) quietly updated a public notice to reveal that the agency would be sharing "citizenship and immigration information" with the Department of Homeland Security (DHS). This data sharing was already happening: WIRED reported in April that the Trump administration had already started pooling sensitive data from across the government for the purpose of immigration enforcement. This public notice issued by SSA makes that official, months after the fact. The notice is known as a system of records notice (SORN), a document that outlines how an agency will share the data it has, with whom, and for what purpose. This notice is required under the Privacy Act of 1974.


Feedback Alignment Meets Low-Rank Manifolds: A Structured Recipe for Local Learning

Roy, Arani, Apolinario, Marco P., Biswas, Shristi Das, Roy, Kaushik

arXiv.org Artificial Intelligence

Training deep neural networks (DNNs) with backpropagation (BP) achieves state-of-the-art accuracy but requires global error propagation and full parameterization, leading to substantial memory and computational overhead. Direct Feedback Alignment (DFA) enables local, parallelizable updates with lower memory requirements but is limited by unstructured feedback and poor scalability in deeper architectures, especially convolutional neural networks. To address these limitations, we propose a structured local learning framework that operates directly on low-rank manifolds defined by the Singular Value Decomposition (SVD) of weight matrices. Each layer is trained in its decomposed form, with updates applied to the SVD components using a composite loss that integrates cross-entropy, subspace alignment, and orthogonality regularization. Feedback matrices are constructed to match the SVD structure, ensuring consistent alignment between forward and feedback pathways. Our method reduces the number of trainable parameters relative to the original DFA model, without relying on pruning or post hoc compression. Experiments on CIFAR-10, CIFAR-100, and ImageNet show that our method achieves accuracy comparable to that of BP. Ablation studies confirm the importance of each loss term in the low-rank setting. These results establish local learning on low-rank manifolds as a principled and scalable alternative to full-rank gradient-based training.
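The DFA mechanism the abstract builds on can be shown in a few lines: each hidden layer receives the output error through a fixed random matrix instead of the transposed forward weights that BP would use, so its update is local. This sketch shows plain DFA on a tiny two-layer regression (the paper's SVD-structured feedback and composite loss are not reproduced here; the network and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 4, 8, 3
W1 = rng.normal(0.0, 0.5, (d_h, d_in))
W2 = rng.normal(0.0, 0.5, (d_out, d_h))
B1 = rng.normal(0.0, 0.5, (d_h, d_out))  # fixed random feedback, never trained

def dfa_step(x, y, lr=0.05):
    """One DFA update: the hidden layer sees the output error e
    only through the fixed matrix B1, not through W2.T as in BP."""
    global W1, W2
    h = np.tanh(W1 @ x)
    out = W2 @ h
    e = out - y                      # output error
    dh = (B1 @ e) * (1.0 - h ** 2)   # local update signal via random feedback
    W2 -= lr * np.outer(e, h)        # output layer: exact gradient
    W1 -= lr * np.outer(dh, x)       # hidden layer: DFA update
    return 0.5 * np.sum(e ** 2)

x = rng.normal(size=d_in)
y = np.array([1.0, 0.0, -1.0])
losses = [dfa_step(x, y) for _ in range(200)]
```

Because `B1` never changes, each layer's update depends only on its own activations and the broadcast error, which is what makes the updates parallelizable; the paper's contribution is constructing such feedback matrices to match the SVD factors of each layer.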


Learning to Reason Across Parallel Samples for LLM Reasoning

Qi, Jianing, Ye, Xi, Tang, Hao, Zhu, Zhigang, Choi, Eunsol

arXiv.org Artificial Intelligence

Scaling test-time compute brings substantial performance gains for large language models (LLMs). By sampling multiple answers and heuristically aggregating them (e.g., through majority voting or by using verifiers to rank the answers), one can achieve consistent performance gains in math domains. In this paper, we propose a new way to leverage such sample sets. We train a compact LLM, called Sample Set Aggregator (SSA), that takes a concatenated sequence of multiple samples and outputs the final answer, optimizing it for answer accuracy with reinforcement learning. Experiments on five reasoning datasets demonstrate both the efficacy and efficiency of SSA. Notably, SSA improves over naive majority voting by 8% pass@5 on MATH. Furthermore, our 3B SSA surpasses model-based re-ranking with a much larger 72B process reward model. Our analysis also shows promising generalization ability of SSA across sample set sizes, base model families and scales, and tasks. By separating the LLMs that generate answers from the LLM that analyzes and aggregates them, our approach can work with the outputs of premier black-box models easily and efficiently.
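The majority-voting baseline that SSA improves upon is simple to state precisely: sample several candidate answers and return the most frequent one. A minimal sketch (tie-breaking by sampling order is an implementation assumption, not from the paper):

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate parallel samples by majority vote.

    Ties are broken in favor of the answer that appeared
    first in the sample order."""
    counts = Counter(answers)
    best = max(counts.values())
    for a in answers:  # preserve sampling order on ties
        if counts[a] == best:
            return a

samples = ["42", "41", "42", "42", "17"]
print(majority_vote(samples))  # prints 42
```

SSA replaces this fixed heuristic with a trained aggregator: the concatenated samples become the input to a small model that can weigh reasoning quality rather than just answer frequency, which is why it can recover the correct answer even when it is in the minority.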