cybersecurity expert
Chatbots Are Becoming Really, Really Good Criminals
Cybersecurity was already a nightmare. Listen to more stories on the Noa app. Earlier this fall, a team of security experts at the AI company Anthropic uncovered an elaborate cyber-espionage scheme. Hackers--strongly suspected by Anthropic to be working on behalf of the Chinese government--targeted government agencies and large corporations around the world. And it appears that they used Anthropic's own AI product, Claude Code, to do most of the work.
- Asia > China (0.47)
- North America > United States (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.92)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.31)
A dangerous tipping point? AI hacking claims divide cybersecurity experts
AI startup Anthropic's recent announcement that it detected the world's first artificial intelligence-led hacking campaign has prompted a multitude of responses from cybersecurity experts. In a report on Friday, Anthropic said its assistant Claude Code was manipulated to carry out 80-90 percent of a "large-scale" and "highly sophisticated" cyberattack, with human intervention required "only sporadically". Anthropic, the creator of the popular Claude chatbot, said the attack aimed to infiltrate government agencies, financial institutions, tech firms and chemical manufacturing companies, though the operation was only successful in a small number of cases. The San Francisco-based company, which attributed the attack to Chinese state-sponsored hackers, did not specify how it had uncovered the operation, nor identify the "roughly" 30 entities that it said had been targeted. Roman V Yampolskiy, an AI and cybersecurity expert at the University of Louisville, said there was no doubt that AI-assisted hacking posed a serious threat, though it was difficult to verify the precise details of Anthropic's account.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Asia > China (0.06)
- South America (0.05)
- (5 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
A Large Language Model-Supported Threat Modeling Framework for Transportation Cyber-Physical Systems
Salek, M Sabbir, Chowdhury, Mashrur, Munir, Muhaimin Bin, Cai, Yuchen, Hasan, Mohammad Imtiaz, Tine, Jean-Michel, Khan, Latifur, Rahman, Mizanur
Existing threat modeling frameworks related to transportation cyber-physical systems (CPS) are often narrow in scope, labor-intensive, and require substantial cybersecurity expertise. To this end, we introduce the Transportation Cybersecurity and Resiliency Threat Modeling Framework (TraCR-TMF), a large language model (LLM)-based threat modeling framework for transportation CPS that requires limited cybersecurity expert intervention. TraCR-TMF identifies threats, potential attack techniques, and relevant countermeasures for transportation CPS. Three LLM-based approaches support these identifications: (i) a retrieval-augmented generation approach requiring no cybersecurity expert intervention, (ii) an in-context learning approach with low expert intervention, and (iii) a supervised fine-tuning approach with moderate expert intervention. TraCR-TMF offers LLM-based attack path identification for critical assets based on vulnerabilities across transportation CPS entities. Additionally, it incorporates the Common Vulnerability Scoring System (CVSS) scores of known exploited vulnerabilities to prioritize threat mitigations. The framework was evaluated through two cases. First, the framework identified relevant attack techniques for various transportation CPS applications, 73% of which were validated by cybersecurity experts as correct. Second, the framework was used to identify attack paths for a target asset in a real-world cyberattack incident. TraCR-TMF successfully predicted exploitations, like lateral movement of adversaries, data exfiltration, and data encryption for ransomware, as reported in the incident. These findings show the efficacy of TraCR-TMF in transportation CPS threat modeling, while reducing the need for extensive involvement of cybersecurity experts. To facilitate real-world adoptions, all our codes are shared via an open-source repository.
- North America > United States > Alabama > Tuscaloosa County > Tuscaloosa (0.14)
- North America > United States > Massachusetts > Hampshire County > Amherst (0.14)
- North America > United States > California (0.14)
- (10 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Cyberwarfare (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Urgent warning to all 1.8b Gmail users over 'new wave of threats' stealing accounts... Do this NOW
A new type of email attack is quietly targeting 1.8 billion Gmail users without them ever noticing. Hackers are using Google Gemini, the AI built-in tool in Gmail and Workspace, to trick users into handing over their credentials. Cybersecurity experts found that bad actors are sending emails with hidden instructions that prompt Gemini to generate fake phishing warnings, tricking users into sharing their account password or visiting malicious sites. These emails are crafted to appear urgent and sometimes from a business. By setting the font size to zero and the text color to white, attackers can insert prompts invisible to users but actionable by Gemini.
Urgent warning as 1.5 MILLION private photos are leaked from BDSM dating apps - so, have your sexy snaps been exposed?
Cybersecurity researchers have issued an urgent warning as almost 1.5 million private photos from dating apps are exposed. Affected apps include the kink dating sites BDSM People and CHICA, as well as LGBT dating services PINK, BRISH, and TRANSLOVE - all of which were developed by M.A.D Mobile. The leaked files include photos used for verification, photos removed by app moderators, and photos sent in direct messages between users - many of which were explicit. These sensitive snaps were being stored online without password protection, meaning anyone with the link could view and download them. Researchers from Cybernews, who discovered the vulnerability, say this easily exploited security flaw put up to 900,000 users at risk of further hacks or extortion.
CVE-LLM : Ontology-Assisted Automatic Vulnerability Evaluation Using Large Language Models
Ghosh, Rikhiya, von Stockhausen, Hans-Martin, Schmitt, Martin, Vasile, George Marica, Karn, Sanjeev Kumar, Farri, Oladimeji
The National Vulnerability Database (NVD) publishes over a thousand new vulnerabilities monthly, with a projected 25 percent increase in 2024, highlighting the crucial need for rapid vulnerability identification to mitigate cybersecurity attacks and save costs and resources. In this work, we propose using large language models (LLMs) to learn vulnerability evaluation from historical assessments of medical device vulnerabilities in a single manufacturer's portfolio. We highlight the effectiveness and challenges of using LLMs for automatic vulnerability evaluation and introduce a method to enrich historical data with cybersecurity ontologies, enabling the system to understand new vulnerabilities without retraining the LLM. Our LLM system integrates with the in-house application - Cybersecurity Management System (CSMS) - to help Siemens Healthineers (SHS) product cybersecurity experts efficiently assess the vulnerabilities in our products.
- South America > Peru (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Romania (0.04)
- Europe > Germany (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government > Military > Cyberwarfare (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)
Warning to all 1.8bn Gmail users over 'devastating' scam stealing banking and sensitive data
All 1.8 billion Gmail users have been issued a'red alert' over a scam that lets hackers gain access to accounts. The attack uses AI to craft deepfake robocalls and malicious emails capable of bypassing security filters. The combination works to convince victims their Gmail account has been compromised. Users receive a phone call that suspicious activity was detected in their account and are told an email is soon to follow with steps to rectify the issue. The email includes a fake website that looks identical to Google's, which prompts users to enter their login credentials.
5 sneaky ways hackers are utilizing generative AI
Artificial Intelligence (AI) can be a force for good in our future, that much is obvious from the fact that it's being utilized to advance things like medical research. The thought that somewhere out there, there's a James Bond-like villain in an armchair stroking a cat and using generative AI to hack your PC may seem like fantasy but, quite frankly, it's not. Cyber security experts are already scrambling to thwart millions of threats by hackers that have used generative AI to hack PCs, steal money, credentials, and data, and, with the rapid proliferation of new and improved AI tools, it's only going to get worse. The type of cyberattacks hackers are using aren't necessarily new. They're just more prolific, sophisticated, and effective now that they have weaponized AI.
- Asia > Singapore (0.15)
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.54)
The words and phrases you should NEVER Google or your computer could get hacked
Searching on Google might seem like one of the safest things to do online. But cybersecurity experts warn that there are some searches which could put you at serious risk of being hacked. Last week, it was revealed that cybercriminals had hijacked the Google results for'Are Bengal cats legal in Australia?' to infect cat-lovers' computers. Now, experts have revealed the seven other common words and phrases you should never Google. Using a technique called'SEO poisoning' criminals exploit Google's search results to lure unsuspecting victims into websites they control.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.66)
I'm a cybersecurity expert - here are the apps I would NEVER use
Many of the world's most popular apps have dubious terms of service, and exploit private data to make money, according to a cybersecurity expert. He says that by allowing data to be monitored by'big tech' companies, they can decide what we see online, and we become'defined by what computer algorithms decide for us.' Digital voice assistants such as Alexa are serious privacy risks, Gaffney says. The devices listen for'wake words' before operating but are listening to them all the time - and take snippets of your voice and process them in data centers far from your home. Gaffney says, 'I don't use them at all, but for those that do, I would not place them in the bathroom or bedroom. Though they wake on trigger words, they listen for a few seconds afterward.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.66)