firecracker
Dynamic Frequency-Based Fingerprinting Attacks against Modern Sandbox Environments
Dipta, Debopriya Roy, Tiemann, Thore, Gulmezoglu, Berk, Marin, Eduard, Eisenbarth, Thomas
The cloud computing landscape has evolved significantly in recent years, embracing various sandboxes to meet the diverse demands of modern cloud applications. These sandboxes encompass container-based technologies like Docker and gVisor, microVM-based solutions like Firecracker, and security-centric sandboxes relying on Trusted Execution Environments (TEEs) such as Intel SGX and AMD SEV. However, the practice of placing multiple tenants on shared physical hardware raises security and privacy concerns, most notably side-channel attacks. In this paper, we investigate the possibility of fingerprinting containers through CPU frequency reporting sensors in Intel and AMD CPUs. One key enabler of our attack is that the current CPU frequency information can be accessed by user-space attackers. We demonstrate that Docker images exhibit a unique frequency signature, enabling the distinction of different containers with up to 84.5% accuracy even when multiple containers are running simultaneously in different cores. Additionally, we assess the effectiveness of our attack when performed against several sandboxes deployed in cloud environments, including Google's gVisor, AWS' Firecracker, and TEE-based platforms like Gramine (utilizing Intel SGX) and AMD SEV. Our empirical results show that these attacks can also be carried out successfully against all of these sandboxes in less than 40 seconds, with an accuracy of over 70% in all cases. Finally, we propose a noise injection-based countermeasure to mitigate the proposed attack on cloud environments.
- North America > United States > Iowa > Story County > Ames (0.04)
- Europe > Germany (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
- Workflow (0.93)
- Research Report > New Finding (0.66)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Hardware (1.00)
- Information Technology > Communications (1.00)
- (2 more...)
Hidden You Malicious Goal Into Benign Narratives: Jailbreak Large Language Models through Logic Chain Injection
Wang, Zhilong, Cao, Yebo, Liu, Peng
Large Language Models (LLMs) such as BERT [6] (Bidirectional Encoder Representations from Transformers) by Devlin et al. and GPT [11] (Generative Pre-trained Transformer) by Radford et al., have revolutionized the field of Natural Language Processing (NLP) with their exceptional capabilities, setting new standards in performance across various tasks. Due to their superb generative capability, LLMs are widely deployed as the backend for various real-world applications, referred to as LLM-Integrated Applications. For instance, Microsoft utilizes GPT-4 as the service backend for the new Bing Search [1]; OpenAI has developed various applications--such as ChatWithPDF and AskTheCode--that utilize GPT-4 for different tasks such as text processing, code interpretation, and product recommendation [2, 3]; Google deploys the search engine Bard, powered by PaLM 2. In general, to accomplish a task, an LLM-Integrated Application requires an instruction prompt, which aims to instruct the backend LLM to perform the task, and a data prompt, which is the data to be processed by the LLM in the task. The instruction prompt can be provided by a user or the LLM-Integrated Application itself; and the data prompt is often obtained from external resources such as emails and webpages on the Internet. An LLM-Integrated Application queries the backend LLM using the instruction prompt and data prompt to accomplish the task and returns the response from the LLM to the user. Recently, several types of vulnerabilities have been identified in LLMs to deceive models or mislead users. Among these, prompt injection attacks and jailbreak attacks stand out as prevalent vulnerabilities.
- North America > United States > Pennsylvania (0.04)
- North America > United States > Florida > Hillsborough County > University (0.04)
- Information Technology > Security & Privacy (0.69)
- Materials > Chemicals (0.47)
AWS launches Neo-AI, an open-source tool for tuning ML models
AWS isn't exactly known as an open-source powerhouse, but maybe change is in the air. Amazon's cloud computing unit today announced the launch of Neo-AI, a new open-source project under the Apache Software License. The new tool takes some of the technologies that the company developed and used for its SageMaker Neo machine learning service and brings them (back) to the open-source ecosystem. The main goal here is to make it easier to optimize models for deployments on multiple platforms -- and in the AWS context, that's mostly machines that will run these models at the edge. "Ordinarily, optimizing a machine learning model for multiple hardware platforms is difficult because developers need to tune models manually for each platform's hardware and software configuration," AWS's Sukwon Kim and Vin Sharma write in today's announcement.
- Information Technology (0.55)
- Semiconductors & Electronics (0.33)
Step inside the MIT lab designing new human-computer interfaces
"A collection of smart devices may not make you smarter. There seems to be a gap between what technology has to offer and what we are naturally able to do" Suranga Nanayakkara slips a black ring onto his finger and points. This ring, he explains, helps visually impaired people read by converting text into speech. Nanayakkara points at a poster on the wall more than a metre away, clicks a small button on the side of the ring, and almost instantaneously a female voice starts reading out the poster's header through the headphones he's wearing. Such optical character recognition technology, or OCR, already exists but is often locked inside clunky highlighter-style devices that are slow and cumbersome.
- Asia > Sri Lanka (0.18)
- Asia > Singapore (0.08)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.05)