How to Organize Safely in the Age of Surveillance
From threat modeling to encrypted collaboration apps, we've collected experts' tips and tools for safely and effectively building a group--even while being targeted and tracked by the powerful.

Rarely in modern US history have so many Americans opposed the actions of the federal government with so little hope for a top-down political solution. That's left millions of people seeking a bottom-up approach to resistance: grassroots organizing. Yet as Americans assemble their own movements to protect and support immigrants, push back against the Department of Homeland Security's dangerous incursions into cities, and protest for civil rights and policy changes, they face a federal government that possesses vast surveillance powers and sweeping cooperation from the Silicon Valley companies that hold Americans' data.

That means political, social, and economic organizing presents a risky dilemma: How do you bring people of all ages, backgrounds, and technical abilities into a mass movement without exposing them to monitoring and targeting by a government--and in particular Immigration and Customs Enforcement and Customs and Border Protection, agencies with paramilitary ambitions, a tendency to break the law, and more funding than some countries' militaries?

Organizing safely in an age of surveillance increasingly requires not only technical security know-how but also a tricky balance between secrecy and openness, says Eva Galperin, the director of cybersecurity at the Electronic Frontier Foundation, a nonprofit focused on digital civil liberties.
A Broader Impact
Our work designs privacy attacks, which have the potential to cause harm. The main limitation of our work is the strong threat model under which our attacks operate. All of our results on CIFAR-10 make use of fewer than 30,000 trained models. We plot the effectiveness of Transfer LiRA in Figure 7; further qualitative examples can be found in Figure 9, and results on CIFAR-10 with duplicates in Figure 11, with ROC curves for our student attacks and an ablation of score information reported in additional figures. We consider both distillation threat models, membership inference against the teacher set and against the student set, simultaneously.
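As a rough illustration of the attack family referenced above, the sketch below implements a generic LiRA-style per-example likelihood-ratio test: logit-scale the target model's confidence on an example, fit Gaussians to the confidences of shadow models trained with and without that example, and score membership by the log-likelihood ratio. This is a sketch under assumed inputs, not the paper's exact Transfer LiRA procedure; the shadow confidences and function names are placeholders.

```python
import numpy as np
from scipy.stats import norm

def logit_confidence(p_correct, eps=1e-6):
    """Logit-scale the model's confidence on the true label."""
    p = np.clip(p_correct, eps, 1 - eps)
    return np.log(p) - np.log(1 - p)

def lira_score(target_conf, in_confs, out_confs):
    """Per-example log-likelihood-ratio membership score.

    target_conf: logit-scaled confidence of the model under attack on the example.
    in_confs / out_confs: logit-scaled confidences of shadow models that did /
    did not include the example in their training set. Higher = more member-like.
    """
    mu_in, sd_in = np.mean(in_confs), np.std(in_confs) + 1e-6
    mu_out, sd_out = np.mean(out_confs), np.std(out_confs) + 1e-6
    return norm.logpdf(target_conf, mu_in, sd_in) - norm.logpdf(target_conf, mu_out, sd_out)

# Placeholder shadow-model confidences; in practice these come from many
# trained shadow models per candidate example.
in_confs = logit_confidence(np.array([0.97, 0.99, 0.95]))
out_confs = logit_confidence(np.array([0.60, 0.72, 0.55]))
print(lira_score(logit_confidence(0.98), in_confs, out_confs))
```

Because the in/out distributions are estimated from shadow models trained with and without each candidate point, attacks in this family need many trained models, which is one reason the CIFAR-10 results above involve training so many of them.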
Students Parrot Their Teachers: Membership Inference on Model Distillation
Matthew Jagielski
Model distillation is frequently proposed as a technique to reduce the privacy leakage of machine learning. These empirical privacy defenses rely on the intuition that distilled "student" models protect the privacy of training data, as they only interact with this data indirectly through a "teacher" model. In this work, we design membership inference attacks to systematically study the privacy provided by knowledge distillation to both the teacher and student training sets. Our new attacks show that distillation alone provides only limited privacy across a number of domains. We explain the success of our attacks on distillation by showing that membership inference attacks on a private dataset can succeed even if the target model is never queried on any actual training points, but only on inputs whose predictions are highly influenced by training data. Finally, we show that our attacks are strongest when student and teacher sets are similar, or when the attacker can poison the teacher set.
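To make the setting concrete, here is a minimal, self-contained sketch of the pipeline the abstract describes: a teacher trained on a private set, a student distilled only from the teacher's soft labels on a separate transfer set, and a naive loss comparison run against the student on teacher training points versus fresh points. The synthetic data, linear models, and loss-based membership signal are illustrative assumptions, not the paper's attack or experimental setup.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n = 20, 200

def make_data(n):
    x = torch.randn(n, d)
    y = (x[:, 0] + 0.5 * x[:, 1] > 0).long()
    return x, y

def train(model, x, targets, soft=False, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        if soft:  # distillation: match the teacher's soft label distribution
            loss = F.kl_div(F.log_softmax(logits, -1), targets, reduction="batchmean")
        else:     # ordinary supervised training on hard labels
            loss = F.cross_entropy(logits, targets)
        loss.backward()
        opt.step()
    return model

teacher_x, teacher_y = make_data(n)      # private teacher set (membership target)
transfer_x, _ = make_data(n)             # transfer set the student actually sees
nonmember_x, nonmember_y = make_data(n)  # points never used in any training

teacher = train(torch.nn.Linear(d, 2), teacher_x, teacher_y)
with torch.no_grad():
    soft_labels = F.softmax(teacher(transfer_x), -1)
student = train(torch.nn.Linear(d, 2), transfer_x, soft_labels, soft=True)

# Naive membership signal: compare the student's loss on teacher training
# points against its loss on fresh points, even though the student never
# trained on the teacher set directly.
with torch.no_grad():
    member_loss = F.cross_entropy(student(teacher_x), teacher_y, reduction="none")
    nonmember_loss = F.cross_entropy(student(nonmember_x), nonmember_y, reduction="none")
print(member_loss.mean().item(), nonmember_loss.mean().item())
```

The paper's attacks are considerably stronger than this loss comparison, and crucially they can succeed without ever querying the target model on actual training points; the sketch only illustrates the structure of the setting, in which the student interacts with the private data purely through the teacher's outputs.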