How to Organize Safely in the Age of Surveillance

WIRED

From threat modeling to encrypted collaboration apps, we've collected experts' tips and tools for safely and effectively building a group--even while being targeted and tracked by the powerful. Rarely in modern US history have so many Americans opposed the actions of the federal government with so little hope for a top-down political solution. That's left millions of people seeking a bottom-up approach to resistance: grassroots organizing. Yet as Americans assemble their own movements to protect and support immigrants, push back against the Department of Homeland Security's dangerous incursions into cities, and protest for civil rights and policy changes, they face a federal government that possesses vast surveillance powers and sweeping cooperation from the Silicon Valley companies that hold Americans' data. That means political, social, and economic organizing presents a risky dilemma. How do you bring people of all ages, backgrounds, and technical abilities into a mass movement without exposing them to monitoring and targeting by a government--and in particular Immigration and Customs Enforcement and Customs and Border Protection, agencies with paramilitary ambitions, a tendency to break the law, and more funding than some countries' militaries? Organizing safely in an age of surveillance increasingly requires not only technical security know-how, but also a tricky balance between secrecy and openness, says Eva Galperin, the director of cybersecurity at the Electronic Frontier Foundation, a nonprofit focused on digital civil liberties.


Meta and Other Tech Companies Ban OpenClaw Over Cybersecurity Concerns

WIRED

Security experts have urged people to be cautious with the viral agentic AI tool, known for being highly capable but also wildly unpredictable. Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. "You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment," he wrote in a Slack message with a red siren emoji. "Please keep Clawdbot off all company hardware and away from work-linked accounts." Grad isn't the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity in order to speak frankly.



Google's AI Overviews Can Scam You. Here's How to Stay Safe

WIRED

Beyond mistakes or nonsense, deliberately bad information being injected into AI search summaries is leading people down potentially harmful paths. These days, rather than showing you the traditional list of links when you run a search query, Google is intent on throwing up AI Overviews instead: synthesized summaries of information scraped off the web, with some word-prediction magic added, and packaged together in a way to sound as accurate and reliable as possible. We've written before about some of the problems with these AI Overviews, which regularly contain mistakes or nonsense, and of course rip off the work of the human writers who actually know the answers to the questions you're putting into Google. There's another problem though--these AI answers can actually be dangerous. As with every other new technology through history, scams are now making their way into AI Overviews as well, with scammers apparently injecting Google's AI answers with fraudulent phone numbers that you shouldn't trust.



Victims urge tougher action on deepfake abuse as new law comes into force

The Guardian

Campaigners from Stop Image-Based Abuse delivered a petition to Downing Street calling for greater protection against deepfake image abuse. Victims of deepfake image abuse have called for stronger protection against AI-generated explicit images, as the law criminalising the creation of non-consensual intimate images comes into effect. Campaigners from Stop Image-Based Abuse delivered a petition to Downing Street with more than 73,000 signatures, urging the government to introduce civil routes to justice such as takedown orders for abusive imagery on platforms and devices. "Today's a really momentous day," said Jodie, a victim of deepfake abuse who uses a pseudonym.


South Korea's 'world-first' AI laws face pushback amid bid to become leading tech power

The Guardian

South Korea has launched what it calls 'world-first' laws aimed at regulating artificial intelligence. South Korea has embarked on a foray into the regulation of AI, launching what has been billed as the most comprehensive set of laws anywhere in the world, one that could prove a model for other countries, but the new legislation has already encountered pushback. The laws, which will force companies to label AI-generated content, have been criticised by local tech startups, which say they go too far, and by civil society groups, which say they don't go far enough. The AI basic act, which took effect on Thursday last week, comes amid growing global unease over artificially created media and automated decision-making, as governments struggle to keep pace with rapidly advancing technologies.



Microsoft Has a Plan to Keep Its Data Centers From Raising Your Electric Bill

WIRED

In response to a growing backlash, Microsoft said it would take steps to ensure that data centers don't raise utility bills in surrounding areas and address other public concerns.

A Microsoft data center in Aldie, Virginia.

Microsoft said on Tuesday that it would be taking a series of steps toward becoming a "good neighbor" in communities where it is building data centers--including promising to ask public utilities to set higher electricity rates for data centers. Speaking onstage at an event in Great Falls, Virginia, Microsoft vice chair and president Brad Smith directly referenced a growing national pushback to data centers, describing it as creating "a moment in time when we need to listen, and we need to address these concerns head-on." "When I visit communities around the country, people have questions--pointed questions. They even have concerns," Smith said, as a slide showed headlines from various news outlets about opposition to data centers.


Bounding training data reconstruction in DP-SGD

Neural Information Processing Systems

Differentially private training offers protection that is usually interpreted as a guarantee against membership inference attacks. By proxy, this guarantee extends to other threats like reconstruction attacks attempting to extract complete training examples. Recent works provide evidence that if one does not need to protect against membership attacks but instead only wants to protect against training data reconstruction, then the utility of private models can be improved because less noise is required to protect against these more ambitious attacks. We investigate this question further in the context of DP-SGD, a standard algorithm for private deep learning, and provide an upper bound on the success of any reconstruction attack against DP-SGD together with an attack that empirically matches the predictions of our bound. Together, these two results open the door to fine-grained investigations of how to set the privacy parameters of DP-SGD in practice to protect against reconstruction attacks. Finally, we use our methods to demonstrate that different settings of the DP-SGD parameters leading to the same DP guarantees can result in significantly different success rates for reconstruction, indicating that the DP guarantee alone might not be a good proxy for controlling the protection against reconstruction attacks.
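For readers unfamiliar with DP-SGD, the mechanism the abstract refers to can be sketched in a few lines: each example's gradient is clipped to a maximum L2 norm, the clipped gradients are summed, and Gaussian noise scaled by the clipping norm is added before the update. This is a minimal illustrative sketch of that standard mechanism, not the paper's bound or attack; the function name and parameter choices here are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update. `per_example_grads` has shape (batch, dim)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Clip each example's gradient to L2 norm at most `clip_norm`.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Add Gaussian noise with std sigma * C to the summed clipped gradients.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_mean
```

The paper's observation is about the interplay of these knobs: different (clip_norm, noise_multiplier, batch size) combinations can yield the same DP guarantee yet leave reconstruction attacks with very different success rates.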