hacked
Context-Aware Hierarchical Learning: A Two-Step Paradigm towards Safer LLMs
Ma, Tengyun, Yao, Jiaqi, He, Daojing, Peng, Shihao, Li, Yu, Liu, Shaohui, Tian, Zhuotao
Large Language Models (LLMs) have emerged as powerful tools for diverse applications. However, their uniform token processing paradigm introduces critical vulnerabilities in instruction handling, particularly when exposed to adversarial scenarios. In this work, we identify and propose a novel class of vulnerabilities, termed Tool-Completion Attack (TCA), which exploits function-calling mechanisms to subvert model behavior. To evaluate LLM robustness against such threats, we introduce the Tool-Completion benchmark, a comprehensive security assessment framework, which reveals that even state-of-the-art models remain susceptible to TCA, with surprisingly high attack success rates. To address these vulnerabilities, we introduce Context-Aware Hierarchical Learning (CAHL), a sophisticated mechanism that dynamically balances semantic comprehension with role-specific instruction constraints. CAHL leverages the contextual correlations between different instruction segments to establish a robust, context-aware instruction hierarchy. Extensive experiments demonstrate that CAHL significantly enhances LLM robustness against both conventional attacks and the proposed TCA, exhibiting strong generalization capabilities in zero-shot evaluations while still preserving model performance on generic tasks. Our code is available at https://github.com/S2AILab/CAHL.
- Asia > China > Heilongjiang Province > Harbin (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
The US Court Records System Has Been Hacked
This is the week of Black Hat and Defcon, which means a flood of news coming out of the Las Vegas security conferences. As you might expect, artificial intelligence was one popular topic--specifically, using AI chatbots to cause mischief. One team of researchers, from Tel Aviv University, created a clever attack that allowed them to take over a target's smart home devices using a "poisoned" Google Calendar invite. It's the first known attack method that used AI to impact physical devices. Another researcher used a poisoned document that included a malicious prompt to trick ChatGPT into leaking a user's private information when it's connected to a Google Drive.
- North America > United States > Nevada > Clark County > Las Vegas (0.26)
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.26)
- North America > United States > Tennessee (0.06)
- Asia > North Korea (0.06)
- Information Technology > Security & Privacy (0.99)
- Government > Military (0.77)
Urgent warning: Don't type these six words or your computer could be HACKED
Cybersecurity experts warn that a new hacking campaign is targeting people who share an extremely specific set of interests. According to cybersecurity firm SOPHOS, hackers have used a sophisticated set of tools to hijack the results of one particular Google search. And the experts warn that searching for this specific six-word phrase could put you at serious risk of being hacked. However, you aren't likely to be in much danger unless you happen to live in Australia and have an interest in exotic cats. SOPHOS warns that hackers are targeting anyone who searches: 'Are Bengal Cats legal in Australia?'.
Jatmo: Prompt Injection Defense by Task-Specific Finetuning
Piet, Julien, Alrashed, Maha, Sitawarin, Chawin, Chen, Sizhe, Wei, Zeming, Sun, Elizabeth, Alomair, Basel, Wagner, David
Large Language Models (LLMs) are attracting significant research attention due to their instruction-following abilities, allowing users and developers to leverage LLMs for a variety of tasks. However, LLMs are vulnerable to prompt-injection attacks: a class of attacks that hijack the model's instruction-following abilities, changing responses to prompts to undesired, possibly malicious ones. In this work, we introduce Jatmo, a method for generating task-specific models resilient to prompt-injection attacks. Jatmo leverages the fact that LLMs can only follow instructions once they have undergone instruction tuning. It harnesses a teacher instruction-tuned model to generate a task-specific dataset, which is then used to fine-tune a base model (i.e., a non-instruction-tuned model). Jatmo only needs a task prompt and a dataset of inputs for the task: it uses the teacher model to generate outputs. For situations with no pre-existing datasets, Jatmo can use a single example, or in some cases none at all, to produce a fully synthetic dataset. Our experiments on seven tasks show that Jatmo models provide similar quality of outputs on their specific task as standard LLMs, while being resilient to prompt injections. The best attacks succeeded in less than 0.5% of cases against our models, versus 87% success rate against GPT-3.5-Turbo. We release Jatmo at https://github.com/wagner-group/prompt-injection-defense.
- North America > United States (0.14)
- Europe > United Kingdom > Scotland (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- Information Technology > Security & Privacy (1.00)
- Government (0.93)
Expert Warns That Human Beings Are Going to Start Getting Hacked
Yuval Harari, a world-renowned social philosopher and the bestselling author of "Sapiens," has a stark warning: we need to start regulating AI, because otherwise big companies are going to be able to "hack" humans. Harari believes that the rapidly increasing sophistication of AI could lead to a population of "hacked humans," according to a report from CBS's "60 Minutes." To deal with this issue, he's calling on the world's leaders to begin regulating AI and data collection efforts by large corporations. "To hack a human being is to get to know that person better than they know themselves," he told the show. "And based on that, to increasingly manipulate you."
Your Brain Has Been Hacked
Would you like to have a chip inside your brain? One that could increase your capacity to think, feel, and handle situations? If so, you don't have to wait too much longer: Scientists have made significant breakthroughs in developing brain-computer interfaces. Would you sign up for a brain chip? This August, Elon Musk presented a new iteration of the Neuralink brain implant. The goal is to give human brains a direct interface to digital devices, helping, for instance, paralyzed humans, allowing them to control phones or computers.
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.92)
17 Everyday Things You Didn't Know Could Be Hacked
Even news of millions of so-called smart speakers being hacked right before 2017's holiday season didn't seem to slow down sales of Amazon's Echo and Google Home. But if you have either, you should definitely take precautions against hackers. If these speakers are hacked, they could divulge sensitive information such as when you'll be out of town or any upcoming doctor's appointments, along with your credit card and bank account info, shares NBC.com. Equally alarming, if it's connected to your home security system, a hacker could simply turn it off and walk right through your front door. To keep yourself and your home safe, limit how much info you connect through these types of speakers and unplug it when you go on vacation. Don't worry--there's no risk with these 15 hilarious things you can ask Alexa to do.
Hospitals Still Use Pneumatic Tubes--and They Can Be Hacked
It's all too common to find hackable flaws in medical devices, from mammography machines and CT scanners to pacemakers and insulin pumps. But it turns out that the potential exposure extends into the walls: Researchers have found almost a dozen vulnerabilities in a popular brand of pneumatic tube delivery system that many hospitals use to to carry and distribute vital cargo like lab samples and medicine. Pneumatic tubes may seem like wonky and antiquated office tech, more suited to The Hudsucker Proxy than a modern-day health care system. Swisslog Healthcare, a prominent medical-focused pneumatic tube system maker, says that more than 2,300 hospitals in North America use its "TransLogic PTS" platform, as do 700 more elsewhere in the world. The nine vulnerabilities that researchers from the embedded device security company Armis found in Swisslog's Translogic Nexus Control Panels, though, could let a hacker take over a system, take it offline, access data, reroute deliveries, or otherwise sabotage the pneumatic network.
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Health Care Providers & Services (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (0.57)
If Microsoft Can Be Hacked, What About Your Company? How AI Is Transforming Cybersecurity
Microsoft recently acknowledged Russian hackers successfully cyberattacked them. If hackers can penetrate their internal systems, what are the chances your company will suffer the consequences of a future hack? What the Russians have done is very bad, but it's only an example of the cyber threats we all face. The cyber threat world is an arms race. The hackers are starting to use AI, and the only way to successfully defend against future threats is for your company to use AI as well.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.53)
What Will Happen When a Facial Recognition Firm is Hacked
Technology has become advanced with time and it will only be getting smarter. Biometric technology, specifically facial recognition, is among them that has transformed the security approach worldwide. With advances in camera technologies and the proliferation of smartphones, facial recognition is relentlessly gaining rapid momentum. However, as this technology has a tremendous impact from a cybersecurity perspective, it also has a security flaw. It raises concerns throughout its reliability and effectiveness.
- North America > United States > Nevada > Clark County > Las Vegas (0.06)
- Asia > China > Beijing > Beijing (0.06)