API attack
Combing for Credentials: Active Pattern Extraction from Smart Reply
Jayaraman, Bargav, Ghosh, Esha, Chase, Melissa, Roy, Sambuddha, Dai, Wei, Evans, David
Pre-trained large language models, such as GPT-2 and BERT, are often fine-tuned to achieve state-of-the-art performance on a downstream task. One natural example is the "Smart Reply" application, where a pre-trained model is tuned to provide suggested responses for a given query message. Since the tuning data is often sensitive, such as emails or chat transcripts, it is important to understand and mitigate the risk that the model leaks its tuning data. We investigate potential information leakage vulnerabilities in a typical Smart Reply pipeline. We consider a realistic setting where the adversary can only interact with the underlying model through a front-end interface that constrains what types of queries can be sent to the model. Previous attacks require the ability to send unconstrained queries directly to the model, so they do not work in this setting. Moreover, even when there are no constraints on queries, previous attacks typically require thousands, or even millions, of queries to extract useful information, while our attacks can extract sensitive data in just a handful of queries. We introduce a new type of active extraction attack that exploits canonical patterns in text containing sensitive data. We show experimentally that it is possible for an adversary to extract sensitive user information present in the training data, even in realistic settings where all interactions with the model must go through a front-end that limits the types of queries. We explore potential mitigation strategies and demonstrate empirically that differential privacy appears to be a reasonably effective defense against such pattern extraction attacks.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
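The abstract above describes probing a constrained Smart Reply front-end with queries crafted around canonical patterns that surround sensitive data (e.g. phrases that typically precede a phone number). A minimal sketch of that idea follows; the `smart_reply` function is a toy stand-in for the real fine-tuned model (here it simply echoes a memorized training phrase), and the probe phrases and regex are illustrative assumptions, not the paper's actual attack queries.

```python
import re

# Toy "fine-tuned model" that has memorized a sensitive training phrase.
MEMORIZED = "Sure, my phone number is 555-0142."

def smart_reply(query: str) -> list[str]:
    """Stand-in for the front-end API: returns up to 3 reply suggestions."""
    if "phone number" in query.lower():
        return [MEMORIZED, "I'll send it later.", "Can you email me instead?"]
    return ["Sounds good!", "Thanks!", "See you then."]

# Hypothetical probes built around canonical patterns that tend to
# precede sensitive data in email/chat training text.
PROBES = [
    "Hi, what is your phone number?",
    "Could you share your address?",
]
# Pattern the attacker scans replies for (a US-style phone fragment).
PATTERN = re.compile(r"\b\d{3}-\d{4}\b")

def extract(probes: list[str]) -> list[tuple[str, str]]:
    """Send each probe and scan the suggested replies for the pattern."""
    hits = []
    for probe in probes:
        for reply in smart_reply(probe):
            match = PATTERN.search(reply)
            if match:
                hits.append((probe, match.group()))
    return hits

print(extract(PROBES))  # probes that triggered leakage, with the match
```

The point of the sketch is that only a handful of well-targeted queries are needed when the attacker knows the textual pattern that surrounds the secret, which is the constrained-interface threat model the paper studies.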
Deepfakes, API attacks on the rise following Ukraine war
The use of deepfakes to evade security controls and compromise organisations is on the rise among cybercriminals, with researchers seeing a 13% increase in the use of deepfakes compared with last year, according to a new report. Deepfakes use deep learning artificial intelligence (AI) to replace the likeness of one person with another in video and other digital media. The findings from US-based cloud computing and virtualisation firm VMware's eighth annual 'Global Incident Response Threat Report', which surveyed 125 cybersecurity professionals from around the world, also revealed an uptick in overall cyber attacks since Russia's invasion of Ukraine, as reported by two-thirds (65%) of those professionals. "Cybercriminals are now incorporating deepfakes into their attack methods to evade security controls," said Rick McElroy, principal cybersecurity strategist at VMware. "Two out of three respondents in our report saw malicious deepfakes used as part of an attack, a 13% increase from last year, with email as the top delivery method. Cybercriminals have evolved beyond using synthetic video and audio simply for influence operations or disinformation campaigns. Their new goal is to use deepfake technology to compromise organisations and gain access to their environment," added McElroy.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Four Technologies that will Increase Cybersecurity Risk in 2019
Attackers are not just getting smarter, they are also using the most advanced technologies available, the same ones being used by security professionals, namely artificial intelligence (AI) and machine learning (ML). Meanwhile, the widespread adoption of cloud, mobile and IoT technologies has created a sprawling IT attack surface that is getting harder to protect from cyber threats, since fixing every existing vulnerability in these infrastructures is infeasible. Here are four ways attackers will exploit technology in new and creative ways over the next 12 months. The issue of bias in AI systems is in its infancy now, but will grow rapidly this year and beyond, and we can expect attackers to exploit the vulnerabilities associated with it.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.42)