Encryption


How to Organize Safely in the Age of Surveillance

WIRED

From threat modeling to encrypted collaboration apps, we've collected experts' tips and tools for safely and effectively building a group--even while being targeted and tracked by the powerful. Rarely in modern US history have so many Americans opposed the actions of the federal government with so little hope for a top-down political solution. That's left millions of people seeking a bottom-up approach to resistance: grassroots organizing. Yet as Americans assemble their own movements to protect and support immigrants, push back against the Department of Homeland Security's dangerous incursions into cities, and protest for civil rights and policy changes, they face a federal government that possesses vast surveillance powers and sweeping cooperation from the Silicon Valley companies that hold Americans' data. That means political, social, and economic organizing presents a risky dilemma. How do you bring people of all ages, backgrounds, and technical abilities into a mass movement without exposing them to monitoring and targeting by a government--and in particular Immigration and Customs Enforcement and Customs and Border Protection, agencies with paramilitary ambitions, a tendency to break the law, and more funding than some countries' militaries? Organizing safely in an age of surveillance increasingly requires not only technical security know-how, but also a tricky balance between secrecy and openness, says Eva Galperin, the director of cybersecurity at the Electronic Frontier Foundation, a nonprofit focused on digital civil liberties.



Microsoft crosses privacy line few expected

FOX News



When will 'Q-Day' arrive? Scientists predict the date when quantum computing will crack all of Earth's digital encryption - with terrifying consequences

Daily Mail - Science & tech

As terrifying as it might sound, experts believe the world will soon face a technological crisis that threatens to fundamentally overthrow digital secrecy. Known as 'Q-Day', this is the moment when quantum computers will crack open all of Earth's digital encryption. From then, any information not secured by 'post-quantum' protection will be laid bare - including financial transactions and military communications. So, when will this world-shattering moment arrive?


On the Gini-impurity Preservation For Privacy Random Forests

Neural Information Processing Systems

Random forests are among the most successful ensemble algorithms in machine learning. Various techniques have been used to preserve the privacy of random forests, such as anonymization, differential privacy, and homomorphic encryption, yet these rarely take into account crucial ingredients of the learning algorithm itself. This work presents a new encryption scheme that preserves the Gini impurity of data, which plays a crucial role in the construction of random forests. Our basic idea is to modify the structure of the binary search tree to store several examples in each node, and to encrypt data features by incorporating label and order information. Theoretically, we prove that our scheme preserves the minimum Gini impurity in ciphertexts without decryption, and we present a security guarantee for the encryption. For random forests, we encrypt data features with our Gini-impurity-preserving scheme, and we use the homomorphic encryption scheme CKKS to encrypt data labels, owing to their importance and privacy. We conduct extensive experiments to show the effectiveness, efficiency, and security of our proposed method.
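The Gini impurity the scheme is built to preserve is a simple function of class proportions. As a plaintext point of reference (this is not the paper's encrypted construction; function names are illustrative), a minimal sketch of computing the impurity and of picking the best split on a single numeric feature:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a collection of class labels: 1 - sum_k p_k^2."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    """Threshold on one numeric feature minimizing the weighted Gini
    impurity of the resulting binary split."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    best_t, best_imp = None, float("inf")
    for k in range(1, n):
        left = [labels[i] for i in order[:k]]
        right = [labels[i] for i in order[k:]]
        imp = (k * gini(left) + (n - k) * gini(right)) / n
        if imp < best_imp:
            best_t = (values[order[k - 1]] + values[order[k]]) / 2
            best_imp = imp
    return best_t, best_imp

# A perfectly separable feature yields zero impurity at the midpoint split.
print(best_split([1, 2, 3, 4], [0, 0, 1, 1]))  # (2.5, 0.0)
```

The point of the paper's scheme is that the argmin of this impurity can be found over ciphertexts, without the server ever running the plaintext computation above.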


Secure and Privacy-Preserving Federated Learning for Next-Generation Underground Mine Safety

Elmahallawy, Mohamed, Madria, Sanjay, Frimpong, Samuel

arXiv.org Artificial Intelligence

Underground mining operations depend on sensor networks to monitor critical parameters such as temperature, gas concentration, and miner movement, enabling timely hazard detection and safety decisions. However, transmitting raw sensor data to a centralized server for machine learning (ML) model training raises serious privacy and security concerns. Federated Learning (FL) offers a promising alternative by enabling decentralized model training without exposing sensitive local data. Yet, applying FL in underground mining presents unique challenges: (i) Adversaries may eavesdrop on shared model updates to launch model inversion or membership inference attacks, compromising data privacy and operational safety; (ii) Non-IID data distributions across mines and sensor noise can hinder model convergence. To address these issues, we propose FedMining--a privacy-preserving FL framework tailored for underground mining. FedMining introduces two core innovations: (1) a Decentralized Functional Encryption (DFE) scheme that keeps local models encrypted, thwarting unauthorized access and inference attacks; and (2) a balancing aggregation mechanism to mitigate data heterogeneity and enhance convergence. Evaluations on real-world mining datasets demonstrate FedMining's ability to safeguard privacy while maintaining high model accuracy and achieving rapid convergence with reduced communication and computation overhead. These advantages make FedMining both secure and practical for real-time underground safety monitoring.
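FedMining's balancing aggregation mechanism is its own contribution and is specified in the paper; as a baseline illustration of why weighting matters under non-IID data, the sketch below shows the classic sample-count-weighted averaging used by FedAvg-style aggregators (function name and flat-list update shapes are assumptions for the example):

```python
def balanced_aggregate(updates, sample_counts):
    """Average client updates (flat lists of floats) weighted by each
    client's local sample count, so clients holding little data neither
    dominate nor vanish from the global model."""
    total = sum(sample_counts)
    dim = len(updates[0])
    return [sum(w * u[i] for u, w in zip(updates, sample_counts)) / total
            for i in range(dim)]

# Client 0 holds 3x as much data as client 1, so its update gets 3x weight.
print(balanced_aggregate([[1.0, 0.0], [0.0, 1.0]], [3, 1]))  # [0.75, 0.25]
```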


CryptoTensors: A Light-Weight Large Language Model File Format for Highly-Secure Model Distribution

Zhu, Huifeng, Li, Shijie, Li, Qinfeng, Jin, Yier

arXiv.org Artificial Intelligence

To enhance the performance of large language models (LLMs) in various domain-specific applications, sensitive data such as healthcare, law, and finance are being used to privately customize or fine-tune these models. Such privately adapted LLMs are regarded as either personal privacy assets or corporate intellectual property. Therefore, protecting model weights and maintaining strict confidentiality during deployment and distribution have become critically important. However, existing model formats and deployment frameworks provide little to no built-in support for confidentiality, access control, or secure integration with trusted hardware. Current methods for securing model deployment either rely on computationally expensive cryptographic techniques or tightly controlled private infrastructure. Although these approaches can be effective in specific scenarios, they are difficult and costly for widespread deployment. In this paper, we introduce CryptoTensors, a secure and format-compatible file structure for confidential LLM distribution. Built as an extension to the widely adopted Safetensors format, CryptoTensors incorporates tensor-level encryption and embedded access control policies, while preserving critical features such as lazy loading and partial deserialization. It enables transparent decryption and automated key management, supporting flexible licensing and secure model execution with minimal overhead. We implement a proof-of-concept library, benchmark its performance across serialization and runtime scenarios, and validate its compatibility with existing inference frameworks, including Hugging Face Transformers and vLLM. Our results highlight CryptoTensors as a light-weight, efficient, and developer-friendly solution for safeguarding LLM weights in real-world and widespread deployments.
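The cipher and key-management details of CryptoTensors are in the paper; the toy sketch below only illustrates why tensor-level encryption is compatible with lazy loading: each tensor is encrypted independently under its own nonce, so a loader can fetch and decrypt one tensor without touching the rest of the file. The SHA-256 counter-mode keystream is a stand-in for illustration, not a vetted cipher, and all names here are hypothetical:

```python
import hashlib
import struct

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream. A real format would use a
    vetted AEAD cipher instead."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR stream cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# Each tensor is encrypted under its own nonce (its name here), so a lazy
# loader can decrypt a single tensor without reading any other.
key = b"demo-model-key"
tensors = {"embed.weight": b"\x01\x02\x03\x04", "lm_head.weight": b"\x05\x06"}
encrypted = {name: xor_cipher(key, name.encode(), data)
             for name, data in tensors.items()}
assert xor_cipher(key, b"embed.weight", encrypted["embed.weight"]) == b"\x01\x02\x03\x04"
```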


One-Shot Secure Aggregation: A Hybrid Cryptographic Protocol for Private Federated Learning in IoT

Emmaka, Imraul, Phuong, Tran Viet Xuan

arXiv.org Artificial Intelligence

Federated Learning (FL) offers a promising approach to collaboratively train machine learning models without centralizing raw data, yet its scalability is often throttled by excessive communication overhead. This challenge is magnified in Internet of Things (IoT) environments, where devices face stringent bandwidth, latency, and energy constraints. Conventional secure aggregation protocols, while essential for protecting model updates, frequently require multiple interaction rounds, large payload sizes, and per-client costs, rendering them impractical for many edge deployments. In this work, we present Hyb-Agg, a lightweight and communication-efficient secure aggregation protocol that integrates Multi-Key CKKS (MK-CKKS) homomorphic encryption with Elliptic Curve Diffie-Hellman (ECDH)-based additive masking. Hyb-Agg reduces the secure aggregation process to a single, non-interactive client-to-server transmission per round, ensuring that per-client communication remains constant regardless of the number of participants. This design eliminates partial decryption exchanges, preserves strong privacy under the RLWE, CDH, and random oracle assumptions, and maintains robustness against collusion by the server and up to $N-2$ clients. We implement and evaluate Hyb-Agg on both high-performance and resource-constrained devices, including a Raspberry Pi 4, demonstrating that it delivers sub-second execution times while achieving a constant communication expansion factor of approximately 12x over plaintext size. By directly addressing the communication bottleneck, Hyb-Agg enables scalable, privacy-preserving federated learning that is practical for real-world IoT deployments.
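Hyb-Agg combines MK-CKKS with ECDH-derived additive masks; the lattice part needs a dedicated library, but the mask-cancellation idea can be shown on its own. In the toy sketch below (fixed seeds stand in for ECDH shared secrets), each pair of clients derives the same pseudorandom mask, which the lower-id client adds and the higher-id client subtracts, so the server's sum reveals only the aggregate:

```python
import random

P = 2**31 - 1  # modulus for masked integer updates

def pairwise_mask(update, me, shared_seeds):
    """Mask a scalar update with pairwise pseudorandom values. In the
    real protocol each seed comes from an ECDH key agreement; here the
    seeds are given directly for illustration."""
    masked = update % P
    for other, seed in shared_seeds.items():
        r = random.Random(seed).randrange(P)
        masked = (masked + r) % P if me < other else (masked - r) % P
    return masked

seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}   # one shared seed per client pair
updates = {0: 5, 1: 7, 2: 9}
masked = {}
for i in updates:
    mine = {(a if a != i else b): s for (a, b), s in seeds.items() if i in (a, b)}
    masked[i] = pairwise_mask(updates[i], i, mine)

total = sum(masked.values()) % P
print(total)  # 21: every mask cancels, the server learns only the sum
```

Each individual masked value looks uniformly random to the server, which is what protects a single client's update from inversion attacks.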


Efficient Decoding Methods for Language Models on Encrypted Data

Avitan, Matan, Baruch, Moran, Drucker, Nir, Zimerman, Itamar, Goldberg, Yoav

arXiv.org Artificial Intelligence

Large language models (LLMs) power modern AI applications, but processing sensitive data on untrusted servers raises privacy concerns. Homomorphic encryption (HE) enables computation on encrypted data for secure inference. However, neural text generation requires decoding methods like argmax and sampling, which are non-polynomial and thus computationally expensive under encryption, creating a significant performance bottleneck. We introduce cutmax, an HE-friendly argmax algorithm that reduces ciphertext operations compared to prior methods, enabling practical greedy decoding under encryption. We also propose the first HE-compatible nucleus (top-p) sampling method, leveraging cutmax for efficient stochastic decoding with provable privacy guarantees. Both techniques are polynomial, supporting efficient inference in privacy-preserving settings. Moreover, their differentiability facilitates gradient-based sequence-level optimization as a polynomial alternative to straight-through estimators. We further provide strong theoretical guarantees for cutmax, proving its convergence via exponential amplification of the gap ratio between the maximum and runner-up elements. Evaluations on realistic LLM outputs show latency reductions of 24x-35x over baselines, advancing secure text generation.
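The convergence argument (squaring amplifies the gap ratio between the maximum and the runner-up exponentially in the iteration count) can be illustrated in plaintext. The sketch below is a generic square-and-normalize iteration, not the paper's cutmax; it assumes strictly positive scores, and under CKKS the division shown here would itself be replaced by a polynomial inverse approximation:

```python
def amplify_argmax(scores, iters=20):
    """Square-and-normalize iteration converging to a one-hot indicator
    of the maximum. Each squaring step squares the ratio between the
    runner-up and the maximum, so that ratio decays doubly
    exponentially in the number of iterations."""
    m = max(scores)
    x = [s / m for s in scores]        # scale scores into (0, 1]
    for _ in range(iters):
        x = [v * v for v in x]         # polynomial step: gap ratio squares
        s = sum(x)
        x = [v / s for v in x]         # renormalize to sum to 1
    return x

print(amplify_argmax([1.0, 2.0, 3.0]))  # ~[0.0, 0.0, 1.0]
```

Because every step is built from multiplications (plus one approximable inverse), the whole iteration stays inside the operations a levelled HE scheme supports.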


Privacy-Preserving Federated Learning from Partial Decryption Verifiable Threshold Multi-Client Functional Encryption

Wang, Minjie, Han, Jinguang, Meng, Weizhi

arXiv.org Artificial Intelligence

In federated learning, multiple parties cooperate to train a model without directly exchanging their private data, but the gradient-leakage problem still threatens privacy and model integrity. Although existing schemes use threshold cryptography to mitigate inference attacks, they cannot guarantee the verifiability of aggregation results, leaving the system vulnerable to poisoning attacks. We construct a partial-decryption-verifiable threshold multi-client functional encryption scheme and apply it to federated learning to implement a verifiable threshold secure aggregation protocol (VTSAFL). VTSAFL empowers clients to verify aggregation results while minimizing both computational and communication overhead. The functional key and the partial decryption results of the scheme are of constant size, which guarantees efficiency for large-scale deployment. Experimental results on the MNIST dataset show that VTSAFL achieves the same accuracy as existing schemes while reducing total training time by more than 40% and communication overhead by up to 50%. This efficiency is critical for overcoming the resource constraints inherent in Internet of Things (IoT) devices.
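VTSAFL's threshold multi-client functional encryption is far more involved than this, but the threshold idea itself, that any t of n parties can jointly recover a value while fewer learn nothing, is classically realized by Shamir secret sharing. A minimal sketch under that assumption (requires Python 3.8+ for the modular inverse via three-argument pow):

```python
import random

PRIME = 2**61 - 1  # Mersenne prime serving as the field modulus

def share(secret, t, n, seed=0):
    """Shamir (t, n) sharing: secret is the constant term of a random
    degree-(t-1) polynomial over GF(PRIME); share i is its value at x=i."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation
            acc = (acc * x + c) % PRIME
        return acc
    return {i: f(i) for i in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, yi in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = num * (-j) % PRIME
                den = den * (i - j) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

all_shares = share(42, t=3, n=5)
print(reconstruct({i: all_shares[i] for i in (1, 3, 5)}))  # 42
```

Any three of the five shares reconstruct the secret, which is what lets such protocols tolerate clients dropping offline mid-round.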