Security and Privacy
Beyond the GPU: The Strategic Role of FPGAs in the Next Wave of AI
AI acceleration has been dominated by GPUs, but the growing need for lower latency, energy efficiency, and fine-grained hardware control exposes the limits of fixed architectures. In this context, Field-Programmable Gate Arrays (FPGAs) emerge as a reconfigurable platform that allows AI algorithms to be mapped directly into device logic. Their ability to implement parallel pipelines for convolutions, attention mechanisms, and post-processing with deterministic timing and reduced power consumption makes them a strategic option for workloads that demand predictable performance and deep customization. Unlike CPUs and GPUs, whose architectures are fixed, an FPGA can be reconfigured in the field to adapt its physical structure to a specific model, be integrated as an SoC with embedded processors, and run inference near the sensor without sending raw data to the cloud. This reduces latency and bandwidth requirements, improves privacy, and frees GPUs from specialized tasks in data centers. Partial reconfiguration and compilation flows from AI frameworks are shortening the path from prototype to deployment, enabling hardware/algorithm co-design.
- Asia > China > Hong Kong (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > China > Shaanxi Province > Xi'an (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.68)
The work has "practical impact in a number of real-world applications," specifically "where security and privacy are important."
We thank the reviewers for their thoughtful and constructive reviews. The reviewers found the work "theoretically and experimentally grounded" and "extremely well-written," and noted that it "could easily be used in practice." Below we respond to the major comments; we will fix the minor ones in the final version. Experiments for CIFAR-100 are in progress, and we will add those and the results of Table I to the final paper. The MLP and CNN are somewhat old models [...]; we used MLPs and CNNs since they were used in studies that we [...]. We will add the "mixup training" results to the revised paper. We performed experiments similar to arXiv:1902.02476.
MAYA: Addressing Inconsistencies in Generative Password Guessing through a Unified Benchmark
Corrias, William, De Gaspari, Fabio, Hitaj, Dorjan, Mancini, Luigi V.
Recent advances in generative models have led to their application in password guessing, with the aim of replicating the complexity, structure, and patterns of human-created passwords. Despite their potential, inconsistencies and inadequate evaluation methodologies in prior research have hindered meaningful comparisons and a comprehensive, unbiased understanding of their capabilities. This paper introduces MAYA, a unified, customizable, plug-and-play benchmarking framework designed to facilitate the systematic characterization and benchmarking of generative password-guessing models in the context of trawling attacks. Using MAYA, we conduct a comprehensive assessment of six state-of-the-art approaches, which we re-implemented and adapted to ensure standardization. Our evaluation spans eight real-world password datasets and covers an exhaustive set of advanced testing scenarios, totaling over 15,000 compute hours. Our findings indicate that these models effectively capture different aspects of human password distribution and exhibit strong generalization capabilities. However, their effectiveness varies significantly with long and complex passwords. In our evaluation, sequential models consistently outperform other generative architectures and traditional password-guessing tools, demonstrating unique capabilities in generating accurate and complex guesses. Moreover, the diverse password distributions learned by the models enable a multi-model attack that outperforms the best individual model. By releasing MAYA, we aim to foster further research, providing the community with a new tool to consistently and reliably benchmark generative password-guessing models. Our framework is publicly available at https://github.com/williamcorrias/MAYA-Password-Benchmarking.
- Europe > Italy > Lazio > Rome (0.40)
- South America > Colombia > Bogotá D.C. > Bogotá (0.04)
- North America > United States > California > Santa Clara County > Santa Clara (0.04)
- (3 more...)
Emerging Paradigms for Securing Federated Learning Systems
Abouelmagd, Amr Akmal, Hilal, Amr
Federated Learning (FL) facilitates collaborative model training while keeping raw data decentralized, making it a conduit for leveraging the power of IoT devices while maintaining privacy of the locally collected data. However, existing privacy-preserving techniques present notable hurdles. Methods such as Multi-Party Computation (MPC), Homomorphic Encryption (HE), and Differential Privacy (DP) often incur high computational costs and suffer from limited scalability. This survey examines emerging approaches that hold promise for enhancing both privacy and efficiency in FL, including Trusted Execution Environments (TEEs), Physical Unclonable Functions (PUFs), Quantum Computing (QC), Chaos-Based Encryption (CBE), Neuromorphic Computing (NC), and Swarm Intelligence (SI). For each paradigm, we assess its relevance to the FL pipeline, outlining its strengths, limitations, and practical considerations. We conclude by highlighting open challenges and prospective research avenues, offering a detailed roadmap for advancing secure and scalable FL systems.
- North America > United States > Tennessee > Putnam County > Cookeville (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Asia > Middle East > Jordan (0.04)
- Overview (1.00)
- Research Report (0.84)
- Summary/Review (0.68)
No Encore: Unlearning as Opt-Out in Music Generation
Kim, Jinju, Kim, Taehan, Waheed, Abdul, Hwan, Jong, Singh, Rita
AI music generation is rapidly emerging in the creative industries, enabling intuitive music generation from textual descriptions. However, these systems risk exploiting copyrighted creations, raising ethical and legal concerns. In this paper, we present preliminary results from ongoing research on the first application of machine unlearning techniques to prevent inadvertent usage of creative content. In particular, we apply existing machine unlearning methods to a pre-trained Text-to-Music (TTM) baseline and analyze their efficacy in unlearning pre-trained datasets without harming model performance. Through our experiments, we provide insights into the challenges of applying unlearning in music generation, offering a foundational analysis for future work on the application of unlearning to music generative models.
- Europe (0.14)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Law (1.00)
Generative Propaganda
Daepp, Madeleine I. G., Cuevas, Alejandro, Ness, Robert Osazuwa, Wang, Vickie Yu-Ping, Nayak, Bharat Kumar, Mishra, Dibyendu, Cheng, Ti-Chung, Desai, Shaily, Pal, Joyojeet
Generative propaganda is the use of generative artificial intelligence (AI) to shape public opinion. To characterize its use in real-world settings, we conducted interviews with defenders (e.g., factcheckers, journalists, officials) in Taiwan and creators (e.g., influencers, political consultants, advertisers) as well as defenders in India, centering two places characterized by high levels of online propaganda. The term "deepfakes", we find, exerts outsized discursive power in shaping defenders' expectations of misuse and, in turn, the interventions that are prioritized. To better characterize the space of generative propaganda, we develop a taxonomy that distinguishes between obvious versus hidden and promotional versus derogatory use. Deception was neither the main driver nor the main impact vector of AI's use; instead, Indian creators sought to persuade rather than to deceive, often making AI's use obvious in order to reduce legal and reputational risks, while Taiwan's defenders saw deception as a subset of broader efforts to distort the prevalence of strategic narratives online. AI was useful and used, however, in producing efficiency gains in communicating across languages and modes, and in evading human and algorithmic detection. Security researchers should reconsider threat models to clearly differentiate deepfakes from promotional and obvious uses, to complement and bolster the social factors that constrain misuse by internal actors, and to counter efficiency gains globally.
- Asia > China (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- (19 more...)
- Overview (1.00)
- Research Report > New Finding (0.93)
- Media > News (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Regional Government > Asia Government (0.67)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.37)