The Venture-Capital Populist

The Atlantic - Technology

This story appears in the June 2026 print edition.

The courtship between Silicon Valley and MAGA was consummated on June 6, 2024, in San Francisco's Pacific Heights neighborhood, on a street known as "Billionaires' Row," at the 22,000-square-foot, $45 million French-limestone mansion of a venture capitalist named David Sacks. Along with Chamath Palihapitiya, a fellow venture capitalist and a colleague on the podcast, Sacks hosted a fundraiser for Donald Trump. He knew that other technology titans were coming around to the ex-president but remained in the closet. "And I think that this event is going to break the ice on that," Sacks said on the podcast the week before the fundraiser. "And maybe it'll create a preference cascade, where all of a sudden it becomes acceptable to acknowledge the truth."

A few years earlier, Sacks had described the January 6, 2021, riot at the U.S. Capitol as an "insurrection" and pronounced Trump "disqualified" from ever again holding national office. "What Trump did was absolutely outrageous, and I think it brought him to an ignominious end in American politics," he said on the podcast a few days after the event. "He will pay for it in the history books, if not in a court of law." Palihapitiya was more colloquial, calling Trump "a complete piece-of-shit fucking scumbag." These might seem like tricky positions to climb down from, but the path that leads from scathing denunciation through gradual accommodation to sycophantic embrace of Trump is a well-worn pilgrimage trail. The journey is less wearisome for self-mortifiers who never considered democracy (a word seldom spoken on the podcast) all that important in the first place.


Met investigates hundreds of officers after using Palantir AI tool

The Guardian

Sat 25 Apr 2026 11.34 EDT. First published on Sat 25 Apr 2026 11.31 EDT.

The Met said corruption was the most consistent offence detected, with misconduct related to 'abuse of the IT system that rosters shifts by police officers for personal or financial gain'.

The Metropolitan police have launched investigations into hundreds of officers after using an AI tool built by the controversial tech company Palantir to root out rogue cops. The software was deployed by the Met over the course of a week, surveilling staff members using data the force has ready access to, and unearthing rule-breaking ranging from work-from-home violations to suspected corruption and even criminal allegations such as rape. The Met said that, as a result of the software, evidence had been found tying a small number of officers to serious cases of misconduct and criminality, resulting in the arrest of three officers for offences including abuse of authority for sexual purposes, fraud, sexual assault, misconduct in public office and misuse of police systems.


Fast Algorithms for Robust PCA via Gradient Descent

Neural Information Processing Systems

We consider the problem of Robust PCA in the fully and partially observed settings. Without corruptions, this is the well-known matrix completion problem. From a statistical standpoint this problem has recently been well studied, and the conditions under which recovery is possible (how many observations we need, how many corruptions we can tolerate) via polynomial-time algorithms are by now understood. This paper presents and analyzes a non-convex optimization approach that greatly reduces the computational complexity of the above problems, compared to the best available algorithms. In particular, in the fully observed case, with $r$ denoting rank and $d$ dimension, we reduce the complexity from $O(r^2d^2\log(1/\epsilon))$ to $O(rd^2\log(1/\epsilon))$, a substantial savings when the rank is large. For the partially observed case, we show the complexity of our algorithm is no more than $O(r^4d\log(d)\log(1/\epsilon))$. Not only is this the best-known run-time for a provable algorithm under partial observation, but in the setting where $r$ is small compared to $d$, it also allows for near-linear-in-$d$ run-time that can be exploited in the fully observed case as well, by simply running our algorithm on a subset of the observations.
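The flavor of the non-convex approach can be conveyed with a minimal NumPy sketch of the fully observed case. This is an illustration, not the authors' code: function names and parameters (`alpha`, `step`, `iters`) are mine, and it omits refinements used in the actual analysis, such as the factor-balancing regularizer and per-row/column sparsity budgets. The idea is to alternate a hard-thresholding estimate of the sparse corruption $S$ with gradient steps on a rank-$r$ factorization $L = UV^\top$, so each iteration costs $O(rd^2)$ rather than the $O(r^2 d^2)$ of a full SVD-based update.

```python
import numpy as np

def robust_pca_gd(M, r, alpha=0.1, iters=200):
    """Illustrative non-convex Robust PCA on a fully observed matrix M:
    alternate sparse hard-thresholding with gradient descent on U, V."""

    def threshold_sparse(R):
        # Keep only the largest-magnitude alpha-fraction of entries of the
        # residual as the sparse-corruption estimate; zero out the rest.
        k = int(alpha * R.size)
        if k == 0:
            return np.zeros_like(R)
        cutoff = np.partition(np.abs(R).ravel(), -k)[-k]
        return np.where(np.abs(R) >= cutoff, R, 0.0)

    # Initialize U, V from the top-r SVD of M after removing obvious outliers.
    S = threshold_sparse(M)
    Uf, sv, Vft = np.linalg.svd(M - S, full_matrices=False)
    U = Uf[:, :r] * np.sqrt(sv[:r])
    V = Vft[:r, :].T * np.sqrt(sv[:r])
    step = 0.5 / sv[0]  # step size scaled by the top singular value

    for _ in range(iters):
        S = threshold_sparse(M - U @ V.T)            # re-estimate sparse part
        R = U @ V.T + S - M                          # residual
        U, V = U - step * R @ V, V - step * R.T @ U  # gradient steps on factors
    return U @ V.T, S
```

On a synthetic low-rank-plus-sparse matrix (e.g. rank 2 in dimension 30 with 5% large corruptions), this sketch recovers the low-rank part to small relative error, illustrating the per-iteration cost advantage the abstract describes.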


Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise

Neural Information Processing Systems

The growing importance of massive datasets with the advent of deep learning makes robustness to label noise a critical property for classifiers to have. Sources of label noise include automatic labeling for large datasets, non-expert labeling, and label corruption by data-poisoning adversaries. In the latter case, corruptions may be arbitrarily bad, even so bad that a classifier predicts the wrong labels with high confidence. To protect against such sources of noise, we leverage the fact that a small set of clean labels is often easy to procure. We demonstrate that robustness to label noise up to severe strengths can be achieved by using a set of trusted data with clean labels, and propose a loss correction that utilizes trusted examples in a data-efficient manner to mitigate the effects of label noise on deep neural network classifiers. Across vision and natural-language-processing tasks, we experiment with several types of label noise at a range of strengths, and show that our method significantly outperforms existing methods.
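A trusted-data loss correction of the kind the abstract describes can be sketched in NumPy. This is a hedged illustration, not the paper's released code; the function names are mine, and it assumes a model already fit to the noisy labels whose softmax outputs are available. The estimate `C[i, j]` approximates P(noisy label = j | true label = i) by averaging that noisy model's predictions over the trusted examples of class i; the final classifier is then trained on noisy data with cross-entropy computed after pushing its clean predictions through `C`.

```python
import numpy as np

def estimate_corruption_matrix(probs_noisy, trusted_labels, num_classes):
    """Estimate C[i, j] ~= P(noisy label j | true label i) by averaging a
    noisy-trained model's softmax outputs over trusted examples of class i.
    Assumes every class appears at least once in the trusted set."""
    C = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        mask = trusted_labels == i
        C[i] = probs_noisy[mask].mean(axis=0)
    return C

def corrected_log_loss(clean_probs, noisy_labels, C):
    """Cross-entropy against noisy labels after mapping the clean prediction
    through the corruption matrix: p_noisy = p_clean @ C (rows are examples)."""
    p_noisy = clean_probs @ C
    picked = p_noisy[np.arange(len(noisy_labels)), noisy_labels]
    return -np.mean(np.log(picked + 1e-12))  # small epsilon for stability
```

Minimizing this corrected loss encourages the model's clean-label predictions to explain the noisy labels through the estimated corruption process, rather than fitting the noise directly.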